Complexity theory is a topic I’ve previously been exposed to through various channels; examples include Institute for Advanced Studies comp sci lectures, notes included in a few computer science-related books like Louridas and Dasgupta, and probably also e.g. some of the systems analysis/-science books I’ve read – Konieczny et al.’s text which I recently finished reading is another example of a book which peripherally covers content also covered in this book. Holland’s book pretty much doesn’t cover computational complexity theory at all, but some knowledge of computer science will probably still be useful as e.g. concepts from graph theory are touched upon/applied in the coverage; I am also aware that I derived some benefit while reading this book from having previously spent time on signalling models in microeconomics, as there were conceptual similarities between those models and their properties and some of the stuff Holland includes. I’m not really sure if you need to know ‘anything’ to read the book and get something out of it, but although Holland doesn’t use much mathematical formalism some of the ‘hidden’ formalism lurking in the background will probably not be easy to understand if you e.g. haven’t seen a mathematical equation since the 9th grade, and people who e.g. have seen hierarchical models before will definitely have a greater appreciation of some of the material covered than people who have not. Obviously I’ve read a lot of stuff over time that made the book easier for me to read and understand than it otherwise would have been, but how easy would the book have been for me to read if I hadn’t read those other things? It’s really difficult for me to say. I found the book hard to judge/rate/evaluate, so I decided against rating it on goodreads.

Below I have added some quotes from the book.

“[C]omplex systems exhibits a distinctive property called emergence, roughly described by the common phrase ‘the action of the whole is more than the sum of the actions of the parts’. In addition to complex systems, there is a subfield of computer science, called computational complexity, which concerns itself with the difficulty of solving different kinds of problems. […] The object of the computational complexity subfield is to assign levels of difficulty — levels of complexity — to different collections of problems. There are intriguing conjectures about these levels of complexity, but an understanding of the theoretical framework requires a substantial background in theoretical computer science — enough to fill an entire book in this series. For this reason, and because computational complexity does not touch upon emergence, I will confine this book to systems and the ways in which they exhibit emergence. […] emergent behaviour is an essential requirement for calling a system ‘complex’. […] Hierarchical organization is […] closely tied to emergence. Each level of a hierarchy typically is governed by its own set of laws. For example, the laws of the periodic table govern the combination of hydrogen and oxygen to form H2O molecules, while the laws of fluid flow (such as the Navier-Stokes equations) govern the behaviour of water. The laws of a new level must not violate the laws of earlier levels — that is, the laws at lower levels constrain the laws at higher levels. […] Restated for complex systems: emergent properties at any level must be consistent with interactions specified at the lower level(s). […] Much of the motivation for treating a system as complex is to get at questions that would otherwise remain inaccessible. Often the first steps in acquiring a deeper understanding are through comparisons of similar systems. By treating hierarchical organization as sine qua non for complexity we focus on the interactions of emergent properties at various levels. The combination of ‘top–down’ effects (as when the daily market average affects actions of the buyers and sellers in an equities market) and ‘bottom–up’ effects (the interactions of the buyers and sellers determine the market average) is a pervasive feature of complex systems. The present exposition, then, centres on complex systems where emergence, and the reduction(s) involved, offer a key to new kinds of understanding.”

“As the field of complexity studies has developed, it has split into two subfields that examine two different kinds of emergence: the study of complex physical systems (CPS) and the study of complex adaptive systems (CAS): The study of complex physical systems focuses on geometric (often lattice-like) arrays of elements, in which interactions typically depend only on effects propagated from nearest neighbours. […] the study of CPS has a distinctive set of tools and questions centring on elements that have fixed properties – atoms, the squares of the cellular automaton, and the like. […] The tools used for studying CPS come, with rare exceptions, from a well-developed part of mathematics, the theory of partial differential equations […] CAS studies, in contrast to CPS studies, concern themselves with elements that are not fixed. The elements, usually called agents, learn or adapt in response to interactions with other agents. […] It is unusual for CAS agents to converge, even momentarily, to a single ‘optimal’ strategy, or to an equilibrium. As the agents adapt to each other, new agents with new strategies usually emerge. Then each new agent offers opportunities for still further interactions, increasing the overall complexity. […] The complex feedback loops that form make it difficult to analyse, or even describe, CAS. […] Analysis of complex systems almost always turns on finding recurrent patterns in the system’s ever-changing configurations. […] perpetual novelty, produced with a limited number of rules or laws, is a characteristic of most complex systems: DNA consists of strings of the same four nucleotides, yet no two humans are exactly alike; the theorems of Euclidean geometry are based on just five axioms, yet new theorems are still being derived after two millennia; and so it is for the other complex systems.”

“In a typical physical system the whole is (at least approximately) the sum of the parts, making the use of PDEs straightforward for a mathematician, but in a typical generated system the parts are put together in an interconnected, non-additive way. It is possible to write a concise set of partial differential equations to describe the basic elements of a computer, say an interconnected set of binary counters, but the existing theory of PDEs does little to increase our understanding of the circuits so-described. The formal grammar approach, in contrast, has already considerably increased our understanding of computer languages and programs. One of the major tasks of this book is to use a formal grammar to convert common features of complex systems into ‘stylized facts’ that can be examined carefully within the grammar.”
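
Holland does not give code, but the "limited rules, perpetual novelty" point from the quotes above is easy to make concrete with a toy rewriting system. The sketch below is my own (a Lindenmayer-style grammar over a made-up two-symbol alphabet), not the formal grammar Holland actually develops in the book:

```python
# A toy rewriting system: two production rules and a hypothetical alphabet, yet
# every round of rewriting yields a configuration that has never appeared before
# ("perpetual novelty produced with a limited number of rules").
RULES = {"A": "AB", "B": "A"}

def rewrite(s: str) -> str:
    """Apply the applicable rule to every symbol of the current string."""
    return "".join(RULES.get(symbol, symbol) for symbol in s)

state = "A"
seen = set()
for step in range(8):
    print(step, state)
    assert state not in seen  # every configuration so far is new
    seen.add(state)
    state = rewrite(state)
```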

“Many CPS problems (e.g. the flow of electrons in superconductive materials) […] involve flows — flows that are nicely described by networks. Networks provide a detailed snapshot of CPS and complex adaptive systems (CAS) interactions at any given point in their development, but there are few studies of the evolution of networks […]. The distinction between the fast dynamic of flows (change of state) and the slow dynamic of adaptation (change of the network of interactions) often distinguishes CPS studies from CAS studies. […] all well-studied CAS exhibit lever points, points where a small directed action causes large predictable changes in aggregate behaviour, as when a vaccine produces long-term changes in an immune system. At present, lever points are almost always located by trial and error. However, by extracting mechanisms common to different lever points, a relevant CAS theory would provide a principled way of locating and testing lever points. […] activities that are easy to observe in one complex system often suggest ‘where to look’ in other complex systems where the activities are difficult to observe.”

“Observation shows that agents acting in a niche continually undergo ‘improvements’, without ever completely outcompeting other agents in the community. These improvements may come about in either of two ways: (i) an agent may become more of a generalist, processing resources from a wider variety of sources, or (ii) it may become more specialized, becoming more efficient than its competitors at exploiting a particular source of a vital resource. Both changes allow for still more interactions and still greater diversity. […] All CAS that have been examined closely exhibit trends toward increasing numbers of specialists.”

“Emergence is tightly tied to the formation of boundaries. These boundaries can arise from symmetry breaking, […] or they can arise by assembly of component building blocks […]. For CAS, the agent-defining boundaries determine the interactions between agents. […] Adaptation, and the emergence of new kinds of agents, then arises from changes in the relevant boundaries. Typically, a boundary only looks to a small segment of a signal, a tag, to determine whether or not the signal can pass through the boundary. […] an agent can be modelled by a set of conditional IF/THEN rules that represent both the effects of boundaries and internal signal-processing. Because tags are short, a given signal may carry multiple tags, and the rules that process signals can require the presence of more than one tag for the processing to proceed. Agents are parallel processors in the sense that all rules that are satisfied simultaneously in the agent are executed simultaneously. As a result, the interior of an agent will usually be filled with multiple signals […]. The central role of tags in routing signals through this complex interior puts emphasis on the mechanisms for tag modification as a means of adaptation. Recombination of extant conditions and signals […] turns tags into building blocks for specifying new routes. Parallel processing then makes it possible to test new routes so formed without seriously disrupting extant useful routes. Sophisticated agents have another means of adaptation: anticipation (‘lookahead’). If an agent has a set of rules that simulates part of its world, then it can run this internal model to examine the outcomes of different action sequences before those actions are executed.”
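
To make the rule/tag description above a bit more concrete, here is a minimal sketch of my own (a toy encoding in Python, not Holland's classifier-system notation): each rule demands a set of tags, and every rule whose condition is satisfied in a given step fires "in parallel" by posting a new signal.

```python
# Toy tag/rule agent: rules require certain tags to be present on some signal
# and, if satisfied, post a new signal; all satisfied rules fire in the same step.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    required_tags: frozenset   # tags that must all be present on one signal
    emitted_signal: frozenset  # tags carried by the signal this rule posts

def step(rules, signals):
    """One parallel update: every satisfied rule fires; the rest stay silent."""
    fired = [r.emitted_signal
             for r in rules
             if any(r.required_tags <= s for s in signals)]
    return signals | set(fired)

rules = [
    Rule(frozenset({"food"}), frozenset({"move", "towards-food"})),
    Rule(frozenset({"move", "towards-food"}), frozenset({"eat"})),
]
signals = {frozenset({"food", "left"})}   # the extra 'left' tag is simply ignored
for _ in range(3):
    signals = step(rules, signals)
print(signals)
```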

“The flow of signals within and between agents can be represented by a directed network, where nodes represent rules, and there is a connection from node x to node y if rule x sends a signal satisfying a condition of rule y. Then, the flow of signals over this network spells out the performance of the agent at a point in time. […] The networks associated with CAS are typically highly tangled, with many loops providing feedback and recirculation […]. An agent adapts by changing its signal-processing rules, with corresponding changes in the structure of the associated network. […] Most machine-learning models, including ‘artificial neural networks’ and ‘Bayesian networks’, lack feedback cycles — they are often called ‘feedforward networks’ (in contrast to networks with substantial feedback). In the terms used in Chapter 4, such networks have no ‘recirculation’ and hence have no autonomous subsystems. Networks with substantial numbers of cycles are difficult to analyse, but a large number of cycles is the essential requirement for the autonomous internal models that make lookahead and planning possible. […] The complexities introduced by loops have so far resisted most attempts at analysis. […] The difficulties of analysing the behaviour of networks with many interior loops have, both historically and currently, encouraged the study of networks without loops called trees. Trees occur naturally in the study of games. […] because trees are easier to analyse, most artificial neural networks constructed for pattern recognition are trees. […] Evolutionary game theory makes use of the tree structure of games to study the ways in which agents can modify their strategies as they interact with other agents playing the same game. […] However, evolutionary game theory does not concern itself with the evolution of the game’s laws.”
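
The network construction Holland describes (an edge from rule x to rule y whenever x's output satisfies y's condition) is simple enough to sketch. The code below is my own illustration with hypothetical rules; the point is just that the presence or absence of cycles ("recirculation") is a checkable property of the rule set.

```python
# Build the directed rule network and test whether it recirculates (has a cycle).
def rule_network(rules):
    """rules: dict name -> (condition_tags, output_tags)."""
    edges = {name: set() for name in rules}
    for x, (_, out_x) in rules.items():
        for y, (cond_y, _) in rules.items():
            if cond_y <= out_x:            # x's signal satisfies y's condition
                edges[x].add(y)
    return edges

def has_cycle(edges):
    """Depth-first search for a back edge."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in edges}
    def visit(n):
        colour[n] = GREY
        for m in edges[n]:
            if colour[m] == GREY or (colour[m] == WHITE and visit(m)):
                return True
        colour[n] = BLACK
        return False
    return any(colour[n] == WHITE and visit(n) for n in edges)

rules = {  # hypothetical rules: condition tags -> output tags
    "r1": (frozenset({"a"}), frozenset({"b"})),
    "r2": (frozenset({"b"}), frozenset({"c"})),
    "r3": (frozenset({"c"}), frozenset({"a"})),   # closes a feedback loop
}
print(has_cycle(rule_network(rules)))   # True: this tiny network recirculates
```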

“It has been observed that innovation in CAS is mostly a matter of combining well-known components in new ways. […] Recombination abets the formation of new cascades. […] By extracting general mechanisms that modify CAS, such as recombination, we go from examination of particular instances to a unified study of characteristic CAS properties. The mechanisms of interest act mainly on extant substructures, using them as building blocks for more complex substructures […]. Because signals and boundaries are a pervasive feature of CAS, their modification has a central role in this adaptive process.”


February 12, 2018 Posted by | Books, Computer science, Mathematics

Systems Biology (II)

Some observations from the book’s chapter 3 below:

“Without regulation biological processes would become progressively more and more chaotic. In living cells the primary source of information is genetic material. Studying the role of information in biology involves signaling (i.e. spatial and temporal transfer of information) and storage (preservation of information). Regarding the role of the genome we can distinguish three specific aspects of biological processes: steady-state genetics, which ensures cell-level and body homeostasis; genetics of development, which controls cell differentiation and genesis of the organism; and evolutionary genetics, which drives speciation. […] The ever growing demand for information, coupled with limited storage capacities, has resulted in a number of strategies for minimizing the quantity of the encoded information that must be preserved by living cells. In addition to combinatorial approaches based on noncontiguous gene structure, self-organization plays an important role in cellular machinery. Nonspecific interactions with the environment give rise to coherent structures despite the lack of any overt information store. These mechanisms, honed by evolution and ubiquitous in living organisms, reduce the need to directly encode large quantities of data by adopting a systemic approach to information management.”

“Information is commonly understood as a transferable description of an event or object. Information transfer can be either spatial (communication, messaging or signaling) or temporal (implying storage). […] The larger the set of choices, the lower the likelihood [of] making the correct choice by accident and — correspondingly — the more information is needed to choose correctly. We can therefore state that an increase in the cardinality of a set (the number of its elements) corresponds to an increase in selection indeterminacy. This indeterminacy can be understood as a measure of “a priori ignorance”. […] Entropy determines the uncertainty inherent in a given system and therefore represents the relative difficulty of making the correct choice. For a set of possible events it reaches its maximum value if the relative probabilities of each event are equal. Any information input reduces entropy — we can therefore say that changes in entropy are a quantitative measure of information. […] Physical entropy is highest in a state of equilibrium, i.e. lack of spontaneity (ΔG = 0.0) which effectively terminates the given reaction. Regulatory processes which counteract the tendency of physical systems to reach equilibrium must therefore oppose increases in entropy. It can be said that a steady inflow of information is a prerequisite of continued function in any organism. As selections are typically made at the entry point of a regulatory process, the concept of entropy may also be applied to information sources. This approach is useful in explaining the structure of regulatory systems which must be “designed” in a specific way, reducing uncertainty and enabling accurate, error-free decisions.

The fire ant exudes a pheromone which enables it to mark sources of food and trace its own path back to the colony. In this way, the ant conveys pathing information to other ants. The intensity of the chemical signal is proportional to the abundance of the source. Other ants can sense the pheromone from a distance of several (up to a dozen) centimeters and thus locate the source themselves. […] As can be expected, an increase in the entropy of the information source (i.e. the measure of ignorance) results in further development of regulatory systems — in this case, receptors capable of receiving signals and processing them to enable accurate decisions. Over time, the evolution of regulatory mechanisms increases their performance and precision. The purpose of various structures involved in such mechanisms can be explained on the grounds of information theory. The primary goal is to select the correct input signal, preserve its content and avoid or eliminate any errors.”
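
The entropy claims in the quote above are standard Shannon entropy, and a few lines of code make the "maximal when all outcomes are equally likely" point concrete (my own illustration, with made-up probabilities, not an example from the book):

```python
import math

def entropy(probs):
    """Shannon entropy in bits; assumes the probabilities sum to 1."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: maximal for 4 outcomes
print(entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits: partial knowledge
print(entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0 bits: outcome fully determined
```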

“Genetic information stored in nucleotide sequences can be expressed and transmitted in two ways:
a. via replication (in cell division);
b. via transcription and translation (also called gene expression […])
Both processes act as effectors and can be triggered by certain biological signals transferred on request.
Gene expression can be defined as a sequence of events which lead to the synthesis of proteins or their products required for a particular function. In cell division, the goal of this process is to generate a copy of the entire genetic code (S phase), whereas in gene expression only selected fragments of DNA (those involved in the requested function) are transcribed and translated. […] Transcription calls for exposing a section of the cell’s genetic code and although its product (RNA) is short-lived, it can be recreated on demand, just like a carbon copy of a printed text. On the other hand, replication affects the entire genetic material contained in the cell and must conform to stringent precision requirements, particularly as the size of the genome increases.”

“The magnitude of effort involved in replication of genetic code can be visualized by comparing the DNA chain to a zipper […]. Assuming that the zipper consists of three pairs of interlocking teeth per centimeter (300 per meter) and that the human genome is made up of 3 billion […] base pairs, the total length of our uncoiled DNA in “zipper form” would be equal to […] 10,000 km […] If we were to unfasten the zipper at a rate of 1 m per second, the entire unzipping process would take approximately 3 months […]. This comparison should impress upon the reader the length of the DNA chain and the precision with which individual nucleotides must be picked to ensure that the resulting code is an exact copy of the source. It should also be noted that for each base pair the polymerase enzyme needs to select an appropriate matching nucleotide from among four types of nucleotides present in the solution, and attach it to the chain (clearly, no such problem occurs in zippers). The reliability of an average enzyme is on the order of 10⁻³–10⁻⁴, meaning that one error occurs for every 1,000–10,000 interactions between the enzyme and its substrate. Given this figure, replication of 3×10⁹ base pairs would introduce approximately 3 million errors (mutations) per genome, resulting in a highly inaccurate copy. Since the observed reliability of replication is far higher, we may assume that some corrective mechanisms are involved. In reality, the remarkable precision of genetic replication is ensured by DNA repair processes, and in particular by the corrective properties of polymerase itself.

Many mutations are caused by the inherent chemical instability of nucleic acids: for example, cytosine may spontaneously convert to uracil. In the human genome such an event occurs approximately 100 times per day; however, uracil is not normally encountered in DNA and its presence alerts defensive mechanisms which correct the error. Another type of mutation is spontaneous depurination, which also triggers its own, dedicated error correction procedure. Cells employ a large number of corrective mechanisms […] DNA repair mechanisms may be treated as an “immune system” which protects the genome from loss or corruption of genetic information. The unavoidable mutations which sometimes occur despite the presence of error-correction mechanisms can be masked due to doubled presentation (alleles) of genetic information. Thus, most mutations are recessive and not expressed in the phenotype. As the length of the DNA chain increases, mutations become more probable. It should be noted that the number of nucleotides in DNA is greater than the relative number of amino acids participating in polypeptide chains. This is due to the fact that each amino acid is encoded by exactly three nucleotides — a general principle which applies to all living organisms. […] Fidelity is, of course, fundamentally important in DNA replication as any harmful mutations introduced in its course are automatically passed on to all successive generations of cells. In contrast, transcription and translation processes can be more error-prone as their end products are relatively short-lived. Of note is the fact that faulty transcripts appear in relatively low quantities and usually do not affect cell functions, since regulatory processes ensure continued synthesis of the required substances until a suitable level of activity is reached. Nevertheless, it seems that reliable transcription of genetic material is sufficiently significant for cells to have developed appropriate proofreading mechanisms, similar to those which assist replication. […] the entire information pathway — starting with DNA and ending with active proteins — is protected against errors. We can conclude that fallibility is an inherent property of genetic information channels, and that in order to perform their intended function, these channels require error-correction mechanisms.”
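
The zipper arithmetic quoted above is easy to re-check; the numbers below just plug the book's rounded figures back in (so the "3 months" comes out as roughly 3-4 months, which is close enough):

```python
# Plugging the book's rounded figures back in:
base_pairs  = 3e9          # human genome, base pairs
teeth_per_m = 300          # 3 zipper "teeth" per centimetre
length_m = base_pairs / teeth_per_m
print(length_m / 1000, "km of zipper")          # 10,000 km

unzip_s = length_m / 1.0                        # unfastening at 1 m per second
print(unzip_s / (3600 * 24), "days to unzip")   # ~116 days, i.e. roughly 3-4 months

error_rate = 1e-3          # one error per ~1,000 enzyme-substrate interactions
print(base_pairs * error_rate, "errors per uncorrected replication")  # ~3 million
```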

“The discrete nature of genetic material is an important property which distinguishes prokaryotes from eukaryotes. […] The ability to select individual nucleotide fragments and construct sequences from predetermined “building blocks” results in high adaptability to environmental stimuli and is a fundamental aspect of evolution. The discontinuous nature of genes is evidenced by the presence of fragments which do not convey structural information (introns), as opposed to structure-encoding fragments (exons). The initial transcript (pre-mRNA) contains introns as well as exons. In order to provide a template for protein synthesis, it must undergo further processing (also known as splicing): introns must be cleaved and exon fragments attached to one another. […] Recognition of intron-exon boundaries is usually very precise, while the reattachment of adjacent exons is subject to some variability. Under certain conditions, alternative splicing may occur, where the ordering of the final product does not reflect the order in which exon sequences appear in the source chain. This greatly increases the number of potential mRNA combinations and thus the variety of resulting proteins. […] While access to energy sources is not a major problem, sources of information are usually far more difficult to manage — hence the universal tendency to limit the scope of direct (genetic) information storage. Reducing the length of genetic code enables efficient packing and enhances the efficiency of operations while at the same time decreasing the likelihood of errors. […] The number of genes identified in the human genome is lower than the number of distinct proteins by a factor of 4; a difference which can be attributed to alternative splicing. […] This mechanism increases the variety of protein structures without affecting core information storage, i.e. DNA sequences. […] Primitive organisms often possess nearly as many genes as humans, despite the essential differences between both groups. Interspecies diversity is primarily due to the properties of regulatory sequences.”
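
A quick back-of-the-envelope illustration of why alternative splicing multiplies protein variety so cheaply. This is a deliberately simplified model of my own (exons can only be kept or skipped, in order, which ignores the reordering mentioned above), but it shows how one gene can in principle yield many transcripts:

```python
# Simplified model: a hypothetical gene with 4 exons, each of which can be kept
# or skipped (order preserved).  Even this crude picture gives 2**n - 1 possible
# non-empty transcripts from a single stretch of DNA.
from itertools import combinations

def possible_transcripts(exons):
    """All non-empty, order-preserving selections of exons."""
    return [subset
            for k in range(1, len(exons) + 1)
            for subset in combinations(exons, k)]

exons = ["E1", "E2", "E3", "E4"]
print(len(possible_transcripts(exons)))   # 15 candidate transcripts from one gene
```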

“The discontinuous nature of genes is evolutionarily advantageous but comes at the expense of having to maintain a nucleus where such splicing processes can be safely conducted, in addition to efficient transport channels allowing transcripts to penetrate the nuclear membrane. While it is believed that at early stages of evolution RNA was the primary repository of genetic information, its present function can best be described as an information carrier. Since unguided proteins cannot ensure sufficient specificity of interaction with nucleic acids, protein-RNA complexes are often used in cases where specific fragments of genetic information need to be read. […] The use of RNA in protein complexes is common across all domains of the living world as it bridges the gap between discrete and continuous storage of genetic information.”

“Epigenetic differentiation mechanisms are particularly important in embryonic development. […] Unlike the function of mature organisms, embryonic programming refers to structures which do not yet exist but which need to be created through cell proliferation and differentiation. […] Differentiation of cells results in phenotypic changes. This phenomenon is the primary difference between development genetics and steady-state genetics. Functional differences are not, however, associated with genomic changes: instead they are mediated by the transcriptome where certain genes are preferentially selected for transcription while others are suppressed. […] In a mature, specialized cell only a small portion of the transcribable genome is actually expressed. The remainder of the cell’s genetic material is said to be silenced. Gene silencing is a permanent condition. Under normal circumstances mature cells never alter their function, although such changes may be forced in a laboratory setting […] Cells which make up the embryo at a very early stage of development are pluripotent, meaning that their purpose can be freely determined and that all of their genetic information can potentially be expressed (under certain conditions). […] At each stage of the development process the scope of pluripotency is reduced until, ultimately, the cell becomes monopotent. Monopotency implies that the final function of the cell has already been determined, although the cell itself may still be immature. […] functional dissimilarities between specialized cells are not associated with genetic mutations but rather with selective silencing of genes. […] Most genes which determine biological functions have a biallelic representation (i.e. a representation consisting of two alleles). The remainder (approximately 10 % of genes) is inherited from one specific parent, as a result of partial or complete silencing of their sister alleles (called paternal or maternal imprinting) which occurs during gametogenesis. The suppression of a single copy of the X chromosome is a special case of this phenomenon.”

“Evolutionary genetics is subject to two somewhat contradictory criteria. On the one hand, there is clear pressure on accurate and consistent preservation of biological functions and structures, while on the other hand it is also important to permit gradual but persistent changes. […] the observable progression of adaptive traits which emerge as a result of evolution suggests a mechanism which promotes constructive changes over destructive ones. Mutational diversity cannot be considered truly random if it is limited to certain structures or functions. […] Approximately 50 % of the human genome consists of mobile segments, capable of migrating to various positions in the genome. These segments are called transposons and retrotransposons […] The mobility of genome fragments not only promotes mutations (by increasing the variability of DNA) but also affects the stability and packing of chromatin strands wherever such mobile sections are reintegrated with the genome. Under normal circumstances the activity of mobile sections is tempered by epigenetic mechanisms […]; however, in certain situations gene mobility may be upregulated. In particular, it seems that in “prehistoric” (remote evolutionary) times such events occurred at a much faster pace, accelerating the rate of genetic changes and promoting rapid evolution. Cells can actively promote mutations by way of the so-called AID process (activity-dependent cytosine deamination). It is an enzymatic mechanism which converts cytosine into uracil, thereby triggering repair mechanisms and increasing the likelihood of mutations […] The existence of AID proves that cells themselves may trigger evolutionary changes and that the role of mutations in the emergence of new biological structures is not strictly passive.”

“Regulatory mechanisms which receive signals characterized by high degrees of uncertainty must be able to make informed choices to reduce the overall entropy of the system they control. This property is usually associated with development of information channels. Special structures ought to be exposed within information channels connecting systems of different character, as for example linking transcription to translation or enabling transduction of signals through the cellular membrane. Examples of structures which convey highly entropic information are receptor systems associated with blood coagulation and immune responses. The regulatory mechanism which triggers an immune response relies on relatively simple effectors (complement factor enzymes, phagocytes and killer cells) coupled to a highly evolved receptor system, represented by specific antibodies and an organized set of cells. Compared to such advanced receptors the structures which register the concentration of a given product (e.g. glucose in blood) are rather primitive. Advanced receptors enable the immune system to recognize and verify information characterized by high degrees of uncertainty. […] In sequential processes it is usually the initial stage which poses the most problems and requires the most information to complete successfully. It should come as no surprise that the most advanced control loops are those associated with initial stages of biological pathways.”

February 10, 2018 Posted by | Biology, Books, Chemistry, Evolutionary biology, Genetics, Immunology, Medicine

Endocrinology (part 4 – reproductive endocrinology)

Some observations from chapter 4 of the book below.

“*♂. The whole process of spermatogenesis takes approximately 74 days, followed by another 12-21 days for sperm transport through the epididymis. This means that events which may affect spermatogenesis may not be apparent for up to three months, and successful induction of spermatogenesis treatment may take 2 years. *♀. From primordial follicle to primary follicle, it takes about 180 days (a continuous process). It is then another 60 days to form a preantral follicle which then proceeds to ovulation three menstrual cycles later. Only the last 2-3 weeks of this process is under gonadotrophin drive, during which time the follicle grows from 2 to 20mm.”

“Hirsutism (not a diagnosis in itself) is the presence of excess hair growth in ♀ as a result of androgen production and skin sensitivity to androgens. […] In ♀, testosterone is secreted primarily by the ovaries and adrenal glands, although a significant amount is produced by the peripheral conversion of androstenedione and DHEA. Ovarian androgen production is regulated by luteinizing hormone, whereas adrenal production is ACTH-dependent. The predominant androgens produced by the ovaries are testosterone and androstenedione, and the adrenal glands are the main source of DHEA. Circulating testosterone is mainly bound to sex hormone-binding globulin (SHBG), and it is the free testosterone which is biologically active. […] Slowly progressive hirsutism following puberty suggests a benign cause, whereas rapidly progressive hirsutism of recent onset requires further immediate investigation to rule out an androgen-secreting neoplasm. [My italics, US] […] Serum testosterone should be measured in all ♀ presenting with hirsutism. If this is <5nmol/L, then the risk of a sinister cause for her hirsutism is low.”

“Polycystic ovary syndrome (PCOS) *A heterogeneous clinical syndrome characterized by hyperandrogenism, mainly of ovarian origin, menstrual irregularity, and hyperinsulinaemia, in which other causes of androgen excess have been excluded […] *A distinction is made between polycystic ovary morphology on ultrasound (PCO which also occurs in congenital adrenal hyperplasia, acromegaly, Cushing’s syndrome, and testosterone-secreting tumours) and PCOS – the syndrome. […] PCOS is the most common endocrinopathy in ♀ of reproductive age; >95% of ♀ presenting to outpatients with hirsutism have PCOS. *The estimated prevalence of PCOS ranges from 5 to 10% on clinical criteria. Polycystic ovaries on US alone are present in 20-25% of ♀ of reproductive age. […] family history of type 2 diabetes mellitus is […] more common in ♀ with PCOS. […] Approximately 70% of ♀ with PCOS are insulin-resistant, depending on the definition. […] Type 2 diabetes mellitus is 2-4 x more common in ♀ with PCOS. […] Hyperinsulinaemia is exacerbated by obesity but can also be present in lean ♀ with PCOS. […] Insulin […] inhibits SHBG synthesis by the liver, with a consequent rise in free androgen levels. […] Symptoms often begin around puberty, after weight gain, or after stopping the oral contraceptive pill […] Oligo-/amenorrhoea [is present in] 70% […] Hirsutism [is present in] 66% […] Obesity [is present in] 50% […] *Infertility (30%). PCOS accounts for 75% of cases of anovulatory infertility. The risk of spontaneous miscarriage is also thought to be higher than the general population, mainly because of obesity. […] The aims of investigations [of PCOS] are mainly to exclude serious underlying disorders and to screen for complications, as the diagnosis is primarily clinical […] Studies have uniformly shown that weight reduction in obese ♀ with PCOS will improve insulin sensitivity and significantly reduce hyperandrogenaemia. Obese ♀ are less likely to respond to antiandrogens and infertility treatment.”

“Androgen-secreting tumours [are] [r]are tumours of the ovary or adrenal gland which may be benign or malignant, which cause virilization in ♀ through androgen production. […] Virilization […] [i]ndicates severe hyperandrogenism, is associated with clitoromegaly, and is present in 98% of ♀ with androgen-producing tumours. Not usually a feature of PCOS. […] Androgen-secreting ovarian tumours[:] *75% develop before the age of 40 years. *Account for 0.4% of all ovarian tumours; 20% are malignant. *Tumours are 5-25cm in size. The larger they are, the more likely they are to be malignant. They are rarely bilateral. […] Androgen-secreting adrenal tumours[:] *50% develop before the age of 50 years. *Larger tumours […] are more likely to be malignant. *Usually with concomitant cortisol secretion as a variant of Cushing’s syndrome. […] Symptoms and signs of Cushing’s syndrome are present in many of ♀ with adrenal tumours. […] Onset of symptoms. Usually recent onset of rapidly progressive symptoms. […] Malignant ovarian and adrenal androgen-secreting tumours are usually resistant to chemotherapy and radiotherapy. […] *Adrenal tumours. 20% 5-year survival. Most have metastatic disease at the time of surgery. *Ovarian tumours. 30% disease-free survival and 40% overall survival at 5 years. […] Benign tumours. *Prognosis excellent. *Hirsutism improves post-operatively, but clitoromegaly, male pattern balding, and deep voice may persist.”

“*Oligomenorrhoea is defined as the reduction in the frequency of menses to <9 periods a year. *1° amenorrhoea is the failure of menarche by the age of 16 years. Prevalence ~0.3%. *2° amenorrhoea refers to the cessation of menses for >6 months in ♀ who had previously menstruated. Prevalence ~3%. […] Although the list of causes is long […], the majority of cases of secondary amenorrhoea can be accounted for by four conditions: *Polycystic ovary syndrome. *Hypothalamic amenorrhoea. *Hyperprolactinaemia. *Ovarian failure. […] PCOS is the only common endocrine cause of amenorrhoea with normal oestrogenization – all other causes are oestrogen-deficient. Women with PCOS, therefore, are at risk of endometrial hyperplasia, and all others are at risk of osteoporosis. […] Anosmia may indicate Kallmann’s syndrome. […] In routine practice, a common differential diagnosis is between a mild version of PCOS and hypothalamic amenorrhoea. The distinction between these conditions may require repeated testing, as a single snapshot may not discriminate. The reason to be precise is that PCOS is oestrogen-replete and will, therefore, respond to clomiphene citrate (an antioestrogen) for fertility. HA will be oestrogen-deficient and will need HRT and ovulation induction with pulsatile GnRH or hMG [human Menopausal Gonadotropins – US]. […] 75% of ♀ who develop 2° amenorrhoea report hot flushes, night sweats, mood changes, fatigue, or dyspareunia; symptoms may precede the onset of menstrual disturbances.”

“POI [Premature Ovarian Insufficiency] is a disorder characterized by amenorrhoea, oestrogen deficiency, and elevated gonadotrophins, developing in ♀ <40 years, as a result of loss of ovarian follicular function. […] *Incidence – 0.1% of ♀ <30 years and 1% of those <40 years. *Accounts for 10% of all cases of 2° amenorrhoea. […] POI is the result of accelerated depletion of ovarian germ cells. […] POI is usually permanent and progressive, although a remitting course is also experienced and cannot be fully predicted, so all women must know that pregnancy is possible, even though fertility treatments are not effective (often a difficult paradox to describe). Spontaneous pregnancy has been reported in 5%. […] 80% of [women with Turner’s syndrome] have POI. […] All ♀ presenting with hypergonadotrophic amenorrhoea below age 40 should be karyotyped.”

“The menopause is the permanent cessation of menstruation as a result of ovarian failure and is a retrospective diagnosis made after 12 months of amenorrhoea. The average age at the time of the menopause is ~50 years, although smokers reach the menopause ~2 years earlier. […] Cycles gradually become increasingly anovulatory and variable in length (often shorter) from about 4 years prior to the menopause. Oligomenorrhoea often precedes permanent amenorrhoea. In 10% of ♀, menses cease abruptly, with no preceding transitional period. […] During the perimenopausal period, there is an accelerated loss of bone mineral density (BMD), rendering post-menopausal ♀ more susceptible to osteoporotic fractures. […] Post-menopausal ♀ are 2-3 x more likely to develop IHD [ischaemic heart disease] than premenopausal ♀, even after age adjustments. The menopause is associated with an increase in risk factors for atherosclerosis, including less favourable lipid profile, insulin sensitivity, and an ↑ thrombotic tendency. […] ♀ are 2-3 x more likely to develop Alzheimer’s disease than ♂. It is suggested that oestrogen deficiency may play a role in the development of dementia. […] The aim of treatment of perimenopausal ♀ is to alleviate menopausal symptoms and optimize quality of life. The majority of women with mild symptoms require no HRT. […] There is an ↑ risk of breast cancer in HRT users which is related to the duration of use. The risk increases by 35%, following 5 years of use (over the age of 50), and falls to never-used risk 5 years after discontinuing HRT. For ♀ aged 50 not using HRT, about 45 in every 1,000 will have cancer diagnosed over the following 20 years. This number increases to 47/1,000 ♀ using HRT for 5 years, 51/1,000 using HRT for 10 years, and 57/1,000 after 15 years of use. The risk is highest in ♀ on combined HRT compared with oestradiol alone. […] Oral HRT increases the risk [of venous thromboembolism] approximately 3-fold, resulting in an extra two cases/10,000 women-years. This risk is markedly ↑ in ♀ who already have risk factors for DVT, including previous DVT, cardiovascular disease, and within 90 days of hospitalization. […] Data from >30 observational studies suggest that HRT may reduce the risk of developing CVD [cardiovascular disease] by up to 50%. However, randomized placebo-controlled trials […] have failed to show that HRT protects against IHD. Currently, HRT should not be prescribed to prevent cardiovascular disease.”

“Any chronic illness may affect testicular function, in particular chronic renal failure, liver cirrhosis, and haemochromatosis. […] 25% of ♂ who develop mumps after puberty have associated orchitis, and 25-50% of these will develop 1° testicular failure. […] Alcohol excess will also cause 1° testicular failure. […] Cytotoxic drugs, particularly alkylating agents, are gonadotoxic. Infertility occurs in 50% of patients following chemotherapy, and a significant number of ♂ require androgen replacement therapy because of low testosterone levels. […] Testosterone has direct anabolic effects on skeletal muscle and has been shown to increase muscle mass and strength when given to hypogonadal men. Lean body mass is also increased, with a reduction in fat mass. […] Hypogonadism is a risk factor for osteoporosis. Testosterone inhibits bone resorption, thereby reducing bone turnover. Its administration to hypogonadal ♂ has been shown to improve bone mineral density and reduce the risk of developing osteoporosis. […] *Androgens stimulate prostatic growth, and testosterone replacement therapy may therefore induce symptoms of bladder outflow obstruction in ♂ with prostatic hypertrophy. *It is unlikely that testosterone increases the risk of developing prostate cancer, but it may promote the growth of an existing cancer. […] Testosterone replacement therapy may cause a fall in both LDL and HDL cholesterol levels, the significance of which remains unclear. The effect of androgen replacement therapy on the risk of developing coronary artery disease is unknown.”

“Erectile dysfunction [is] [t]he consistent inability to achieve or maintain an erect penis sufficient for satisfactory sexual intercourse. Affects approximately 10% of ♂ and >50% of ♂ >70 years. […] Erectile dysfunction may […] occur as a result of several mechanisms: *Neurological damage. *Arterial insufficiency. *Venous incompetence. *Androgen deficiency. *Penile abnormalities. […] *Abrupt onset of erectile dysfunction which is intermittent is often psychogenic in origin. *Progressive and persistent dysfunction indicates an organic cause. […] Absence of morning erections suggests an organic cause of erectile dysfunction.”

“*Infertility, defined as failure of pregnancy after 1 year of unprotected regular (2 x week) sexual intercourse, affects ~10% of all couples. *Couples who fail to conceive after 1 year of regular unprotected sexual intercourse should be investigated. […] Causes[:] *♀ factors (e.g. PCOS, tubal damage) 35%. *♂ factors (idiopathic gonadal failure in 60%) 25%. *Combined factors 25%. *Unexplained infertility 15%. […] [♀] Fertility declines rapidly after the age of 36 years. […] Each episode of acute PID causes infertility in 10-15% of cases. *Trachomatis is responsible for half the cases of PID in developed countries. […] Unexplained infertility [is] [i]nfertility despite normal sexual intercourse occurring at least twice weekly, normal semen analysis, documentation of ovulation in several cycles, and normal patent tubes (by laparoscopy). […] 30-50% will become pregnant within 3 years of expectant management. If not pregnant by then, chances that spontaneous pregnancy will occur are greatly reduced, and ART should be considered. In ♀ >34 years of age, expectant management is not an option, and up to six cycles of IUI or IVF should be considered.”

February 9, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Genetics, Medicine, Pharmacology

Systems Biology (I)

This book is really dense and is somewhat tough for me to blog. One significant problem is that “The authors assume that the reader is already familiar with the material covered in a classic biochemistry course.” I know enough biochem to follow most of the stuff in this book, and I was definitely quite happy to have recently read John Finney’s book on the biochemical properties of water and Christopher Hall’s introduction to materials science, as both of those books’ coverage turned out to be highly relevant (these are far from the only relevant books I’ve read semi-recently – Atkins’ introduction to thermodynamics is another book that springs to mind) – but even so, what do you leave out when writing a post like this? I decided to leave out a lot. Posts covering books like this one are hard to write because it’s so easy for them to blow up in your face because you have to include so many details for the material included in the post to even start to make sense to people who didn’t read the original text. And if you leave out all the details, what’s really left? It’s difficult…

Anyway, some observations from the first chapters of the book below.

“[T]he biological world consists of self-managing and self-organizing systems which owe their existence to a steady supply of energy and information. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems (i.e. independent of their environment) automatically gravitate toward a state of equilibrium which is reached once the velocity of a given reaction in both directions becomes equal. When this balance is achieved, we can say that the reaction has effectively ceased. In a living cell, a similar condition occurs upon death. Life relies on certain spontaneous processes acting to unbalance the equilibrium. Such processes can only take place when substrates and products of reactions are traded with the environment, i.e. they are only possible in open systems. In turn, achieving a stable level of activity in an open system calls for regulatory mechanisms. When the reaction consumes or produces resources that are exchanged with the outside world at an uneven rate, the stability criterion can only be satisfied via a negative feedback loop […] cells and living organisms are thermodynamically open systems […] all structures which play a role in balanced biological activity may be treated as components of a feedback loop. This observation enables us to link and integrate seemingly unrelated biological processes. […] the biological structures most directly involved in the functions and mechanisms of life can be divided into receptors, effectors, information conduits and elements subject to regulation (reaction products and action results). Exchanging these elements with the environment requires an inflow of energy. Thus, living cells are — by their nature — open systems, requiring an energy source […] A thermodynamically open system lacking equilibrium due to a steady inflow of energy in the presence of automatic regulation is […] a good theoretical model of a living organism. […] Pursuing growth and adapting to changing environmental conditions calls for specialization which comes at the expense of reduced universality. A specialized cell is no longer self-sufficient. As a consequence, a need for higher forms of intercellular organization emerges. The structure which provides cells with suitable protection and ensures continued homeostasis is called an organism.”
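
The negative-feedback point in the quote above is the kind of thing that is easier to see in a toy simulation than in prose. The sketch below is a generic illustration of mine, nothing organism-specific: a quantity produced at a constant rate, with degradation that strengthens the further the quantity sits above a set point, settles into a steady state instead of drifting.

```python
# A generic toy model of a negative feedback loop (illustrative numbers only):
# production is constant, degradation grows with the distance above a set point,
# and the open system settles into a steady state instead of running away.
def simulate(set_point=10.0, production=2.0, gain=0.5, steps=200, dt=0.1):
    x = 0.0
    for _ in range(steps):
        error = x - set_point
        x += dt * (production - gain * error)  # feedback term opposes the deviation
    return x

print(simulate())   # settles near set_point + production/gain = 14.0
```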

“In biology, structure and function are tightly interwoven. This phenomenon is closely associated with the principles of evolution. Evolutionary development has produced structures which enable organisms to develop and maintain their architecture, perform actions and store the resources needed to survive. For this reason we introduce a distinction between support structures (which are akin to construction materials), function-related structures (fulfilling the role of tools and machines), and storage structures (needed to store important substances, achieving a compromise between tight packing and ease of access). […] Biology makes extensive use of small-molecule structures and polymers. The physical properties of polymer chains make them a key building block in biological structures. There are several reasons as to why polymers are indispensable in nature […] Sequestration of resources is subject to two seemingly contradictory criteria: 1. Maximize storage density; 2. Perform sequestration in such a way as to allow easy access to resources. […] In most biological systems, storage applies to energy and information. Other types of resources are only occasionally stored […]. Energy is stored primarily in the form of saccharides and lipids. Saccharides are derivatives of glucose, rendered insoluble (and thus easy to store) via polymerization. Their polymerized forms, stabilized with α-glycosidic bonds, include glycogen (in animals) and starch (in plantlife). […] It should be noted that the somewhat loose packing of polysaccharides […] makes them unsuitable for storing large amounts of energy. In a typical human organism only ca. 600 kcal of energy is stored in the form of glycogen, while (under normal conditions) more than 100,000 kcal exists as lipids. Lipid deposits usually assume the form of triglycerides (triacylglycerols). Their properties can be traced to the similarities between fatty acids and hydrocarbons. Storage efficiency (i.e. the amount of energy stored per unit of mass) is twice that of polysaccharides, while access remains adequate owing to the relatively large surface area and high volume of lipids in the organism.”

“Most living organisms store information in the form of tightly-packed DNA strands. […] It should be noted that only a small percentage of DNA (about a few %) conveys biologically relevant information. The purpose of the remaining ballast is to enable suitable packing and exposure of these important fragments. If all of DNA were to consist of useful code, it would be nearly impossible to devise a packing strategy guaranteeing access to all of the stored information.”

“The seemingly endless diversity of biological functions frustrates all but the most persistent attempts at classification. For the purpose of this handbook we assume that each function can be associated either with a single cell or with a living organism. In both cases, biological functions are strictly subordinate to automatic regulation, based — in a stable state — on negative feedback loops, and in processes associated with change (for instance in embryonic development) — on automatic execution of predetermined biological programs. Individual components of a cell cannot perform regulatory functions on their own […]. Thus, each element involved in the biological activity of a cell or organism must necessarily participate in a regulatory loop based on processing information.”

“Proteins are among the most basic active biological structures. Most of the well-known proteins studied thus far perform effector functions: this group includes enzymes, transport proteins, certain immune system components (complement factors) and myofibrils. Their purpose is to maintain biological systems in a steady state. Our knowledge of receptor structures is somewhat poorer […] Simple structures, including individual enzymes and components of multienzyme systems, can be treated as “tools” available to the cell, while advanced systems, consisting of many mechanically-linked tools, resemble machines. […] Machinelike mechanisms are readily encountered in living cells. A classic example is fatty acid synthesis, performed by dedicated machines called synthases. […] Multiunit structures acting as machines can be encountered wherever complex biochemical processes need to be performed in an efficient manner. […] If the purpose of a machine is to generate motion then a thermally powered machine can accurately be called a motor. This type of action is observed e.g. in myocytes, where transmission involves reordering of protein structures using the energy generated by hydrolysis of high-energy bonds.”

“In biology, function is generally understood as specific physicochemical action, almost universally mediated by proteins. Most such actions are reversible which means that a single protein molecule may perform its function many times. […] Since spontaneous noncovalent surface interactions are very infrequent, the shape and structure of active sites — with high concentrations of hydrophobic residues — make them the preferred area of interaction between functional proteins and their ligands. They alone provide the appropriate conditions for the formation of hydrogen bonds; moreover, their structure may determine the specific nature of interaction. The functional bond between a protein and a ligand is usually noncovalent and therefore reversible.”

“In general terms, we can state that enzymes accelerate reactions by lowering activation energies for processes which would otherwise occur very slowly or not at all. […] The activity of enzymes goes beyond synthesizing a specific protein-ligand complex (as in the case of antibodies or receptors) and involves an independent catalytic attack on a selected bond within the ligand, precipitating its conversion into the final product. The relative independence of both processes (binding of the ligand in the active site and catalysis) is evidenced by the phenomenon of noncompetitive inhibition […] Kinetic studies of enzymes have provided valuable insight into the properties of enzymatic inhibitors — an important field of study in medicine and drug research. Some inhibitors, particularly competitive ones (i.e. inhibitors which outcompete substrates for access to the enzyme), are now commonly used as drugs. […] Physical and chemical processes may only occur spontaneously if they generate energy, or non-spontaneously if they consume it. However, all processes occurring in a cell must have a spontaneous character because only these processes may be catalyzed by enzymes. Enzymes merely accelerate reactions; they do not provide energy. […] The change in enthalpy associated with a chemical process may be calculated as a net difference in the sum of molecular binding energies prior to and following the reaction. Entropy is a measure of the likelihood that a physical system will enter a given state. Since chaotic distribution of elements is considered the most probable, physical systems exhibit a general tendency to gravitate towards chaos. Any form of ordering is thermodynamically disadvantageous.”
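
Two quick numbers help anchor the remarks above about activation energy and spontaneity. These are standard textbook relations (Arrhenius and Gibbs), with illustrative values I picked myself rather than figures from the book:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # roughly body temperature, K

# 1. Arrhenius: lowering the activation energy from 80 to 50 kJ/mol (made-up
#    values) multiplies the rate constant by exp(delta_Ea / RT).
speedup = math.exp((80e3 - 50e3) / (R * T))
print(f"rate enhancement: {speedup:.1e}")   # roughly 1e5-fold

# 2. Gibbs: a process is spontaneous only if dG = dH - T*dS < 0, so an
#    unfavourable (ordering) entropy term can be paid for by enthalpy.
dH = -60e3    # J/mol, hypothetical exothermic binding
dS = -100.0   # J/(mol*K), the complex is more ordered than the free parts
dG = dH - T * dS
print(f"dG = {dG/1e3:.1f} kJ/mol ->", "spontaneous" if dG < 0 else "non-spontaneous")
```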

“The chemical reactions which power biological processes are characterized by varying degrees of efficiency. In general, they tend to be on the lower end of the efficiency spectrum, compared to energy sources which drive matter transformation processes in our universe. In search for a common criterion to describe the efficiency of various energy sources, we can refer to the net loss of mass associated with a release of energy, according to Einstein’s formula:
E = mc²
The ΔM/M coefficient (relative loss of mass, given e.g. in %) allows us to compare the efficiency of energy sources. The most efficient processes are those involved in the gravitational collapse of stars. Their efficiency may reach 40 %, which means that 40 % of the stationary mass of the system is converted into energy. In comparison, nuclear reactions have an approximate efficiency of 0.8 %. The efficiency of chemical energy sources available to biological systems is incomparably lower and amounts to approximately 10⁻⁷ % […]. Among chemical reactions, the most potent sources of energy are found in oxidation processes, commonly exploited by biological systems. Oxidation tends to result in the largest net release of energy per unit of mass, although the efficiency of specific types of oxidation varies. […] given unrestricted access to atmospheric oxygen and to hydrogen atoms derived from hydrocarbons — the combustion of hydrogen (i.e. the synthesis of water; H2 + 1/2O2 = H2O) has become a principal source of energy in nature, next to photosynthesis, which exploits the energy of solar radiation. […] The basic process associated with the release of hydrogen and its subsequent oxidation (called the Krebs cycle) is carried out by processes which transfer electrons onto oxygen atoms […]. Oxidation occurs in stages, enabling optimal use of the released energy. An important byproduct of water synthesis is the universal energy carrier known as ATP (synthesized separately). As water synthesis is a highly spontaneous process, it can be exploited to cover the energy debt incurred by endergonic synthesis of ATP, as long as both processes are thermodynamically coupled, enabling spontaneous catalysis of anhydride bonds in ATP. Water synthesis is a universal source of energy in heterotrophic systems. In contrast, autotrophic organisms rely on the energy of light which is exploited in the process of photosynthesis. Both processes yield ATP […] Preparing nutrients (hydrogen carriers) for participation in water synthesis follows different paths for sugars, lipids and proteins. This is perhaps obvious given their relative structural differences; however, in all cases the final form, which acts as a substrate for dehydrogenases, is acetyl-CoA.”
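
The "relative loss of mass" comparison is easy to turn into joules per kilogram: the snippet below just plugs the quoted efficiency figures into E = mc² (my arithmetic, not the book's):

```python
# The quoted efficiencies, expressed as joules released per kilogram of "fuel":
c = 3.0e8   # speed of light, m/s
for label, fraction in [("gravitational collapse", 0.40),
                        ("nuclear reactions",      0.008),
                        ("chemical (biological)",  1e-9)]:   # 10^-7 % as a fraction
    print(f"{label:>22}: {fraction * c**2:.1e} J per kg of matter")
# The last line, ~9e7 J/kg, is indeed the right ballpark for combustion fuels.
```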

“Photosynthesis is a process which — from the point of view of electron transfer — can be treated as a counterpart of the respiratory chain. In heterotrophic organisms, mitochondria transport electrons from hydrogenated compounds (sugars, lipids, proteins) onto oxygen molecules, synthesizing water in the process, whereas in the course of photosynthesis electrons released by breaking down water molecules are used as a means of reducing oxidised carbon compounds […]. In heterotrophic organisms the respiratory chain has a spontaneous quality (owing to its oxidative properties); however any reverse process requires energy to occur. In the case of photosynthesis this energy is provided by sunlight […] Hydrogen combustion and photosynthesis are the basic sources of energy in the living world. […] For an energy source to become useful, non-spontaneous reactions must be coupled to its operation, resulting in a thermodynamically unified system. Such coupling can be achieved by creating a coherent framework in which the spontaneous and non-spontaneous processes are linked, either physically or chemically, using a bridging component which affects them both. If the properties of both reactions are different, the bridging component must also enable suitable adaptation and mediation. […] Direct exploitation of the energy released via the hydrolysis of ATP is possible usually by introducing an active binding carrier mediating the energy transfer. […] Carriers are considered active as long as their concentration ensures a sufficient release of energy to synthesize a new chemical bond by way of a non-spontaneous process. Active carriers are relatively short-lived […] Any active carrier which performs its function outside of the active site must be sufficiently stable to avoid breaking up prior to participating in the synthesis reaction. Such mobile carriers are usually produced when the required synthesis consists of several stages or cannot be conducted in the active site of the enzyme for sterical reasons. Contrary to ATP, active energy carriers are usually reaction-specific. […] Mobile energy carriers are usually formed as a result of hydrolysis of two high-energy ATP bonds. In many cases this is the minimum amount of energy required to power a reaction which synthesizes a single chemical bond. […] Expelling a mobile or unstable reaction component in order to increase the spontaneity of active energy carrier synthesis is a process which occurs in many biological mechanisms […] The action of active energy carriers may be compared to a snowball rolling down a hill. The descending snowball gains sufficient energy to traverse another, smaller mound, adjacent to its starting point. In our case, the smaller hill represents the final synthesis reaction […] Understanding the role of active carriers is essential for the study of metabolic processes.”

“A second category of processes, directly dependent on energy sources, involves structural reconfiguration of proteins, which can be further differentiated into low and high-energy reconfiguration. Low-energy reconfiguration occurs in proteins which form weak, easily reversible bonds with ligands. In such cases, structural changes are powered by the energy released in the creation of the complex. […] Important low-energy reconfiguration processes may occur in proteins which consist of subunits. Structural changes resulting from relative motion of subunits typically do not involve significant expenditures of energy. Of particular note are the so-called allosteric proteins […] whose rearrangement is driven by a weak and reversible bond between the protein and an oxygen molecule. Allosteric proteins are genetically conditioned to possess two stable structural configurations, easily swapped as a result of binding or releasing ligands. Thus, they tend to have two comparable energy minima (separated by a low threshold), each of which may be treated as a global minimum corresponding to the native form of the protein. Given such properties, even a weakly interacting ligand may trigger significant structural reconfiguration. This phenomenon is of critical importance to a variety of regulatory proteins. In many cases, however, the second potential minimum in which the protein may achieve relative stability is separated from the global minimum by a high threshold requiring a significant expenditure of energy to overcome. […] Contrary to low-energy reconfigurations, the relative difference in ligand concentrations is insufficient to cover the cost of a difficult structural change. Such processes are therefore coupled to highly exergonic reactions such as ATP hydrolysis. […]  The link between a biological process and an energy source does not have to be immediate. Indirect coupling occurs when the process is driven by relative changes in the concentration of reaction components. […] In general, high-energy reconfigurations exploit direct coupling mechanisms while indirect coupling is more typical of low-energy processes”.

Muscle action requires a major expenditure of energy. There is a nonlinear dependence between the degree of physical exertion and the corresponding energy requirements. […] Training may improve the power and endurance of muscle tissue. Muscle fibers subjected to regular exertion may improve their glycogen storage capacity, ATP production rate, oxidative metabolism and the use of fatty acids as fuel.

February 4, 2018 Posted by | Biology, Books, Chemistry, Genetics, Pharmacology, Physics | Leave a comment

Lakes (II)

(I have had some computer issues over the last couple of weeks, which explains my brief blogging hiatus. They should be resolved by now, and as I’m already falling quite a bit behind on my intended coverage of the books I’ve read this year, I hope to clear some of the backlog in the days to come.)

I have added some more observations from the second half of the book, as well as some related links, below.

“[R]ecycling of old plant material is especially important in lakes, and one way to appreciate its significance is to measure the concentration of CO2, an end product of decomposition, in the surface waters. This value is often above, sometimes well above, the value to be expected from equilibration of this gas with the overlying air, meaning that many lakes are net producers of CO2 and that they emit this greenhouse gas to the atmosphere. How can that be? […] Lakes are not sealed microcosms that function as stand-alone entities; on the contrary, they are embedded in a landscape and are intimately coupled to their terrestrial surroundings. Organic materials are produced within the lake by the phytoplankton, photosynthetic cells that are suspended in the water and that fix CO2, release oxygen (O2), and produce biomass at the base of the aquatic food web. Photosynthesis also takes place by attached algae (the periphyton) and submerged water plants (aquatic macrophytes) that occur at the edge of the lake where enough sunlight reaches the bottom to allow their growth. But additionally, lakes are the downstream recipients of terrestrial runoff from their catchments […]. These continuous inputs include not only water, but also subsidies of plant and soil organic carbon that are washed into the lake via streams, rivers, groundwater, and overland flows. […] The organic carbon entering lakes from the catchment is referred to as ‘allochthonous’, meaning coming from the outside, and it tends to be relatively old […] In contrast, much younger organic carbon is available […] as a result of recent photosynthesis by the phytoplankton and littoral communities; this carbon is called ‘autochthonous’, meaning that it is produced within the lake.”

“It used to be thought that most of the dissolved organic matter (DOM) entering lakes, especially the coloured fraction, was unreactive and that it would transit the lake to ultimately leave unchanged at the outflow. However, many experiments and field observations have shown that this coloured material can be partially broken down by sunlight. These photochemical reactions result in the production of CO2, and also the degradation of some of the organic polymers into smaller organic molecules; these in turn are used by bacteria and decomposed to CO2. […] Most of the bacterial species in lakes are decomposers that convert organic matter into mineral end products […] This sunlight-driven chemistry begins in the rivers, and continues in the surface waters of the lake. Additional chemical and microbial reactions in the soil also break down organic materials and release CO2 into the runoff and ground waters, further contributing to the high concentrations in lake water and its emission to the atmosphere. In algal-rich ‘eutrophic’ lakes there may be sufficient photosynthesis to cause the drawdown of CO2 to concentrations below equilibrium with the air, resulting in the reverse flux of this gas, from the atmosphere into the surface waters.”

“There is a precarious balance in lakes between oxygen gains and losses, despite the seemingly limitless quantities in the overlying atmosphere. This balance can sometimes tip to deficits that send a lake into oxygen bankruptcy, with the O2 mostly or even completely consumed. Waters that have O2 concentrations below 2mg/L are referred to as ‘hypoxic’, and will be avoided by most fish species, while waters in which there is a complete absence of oxygen are called ‘anoxic’ and are mostly the domain for specialized, hardy microbes. […] In many temperate lakes, mixing in spring and again in autumn are the critical periods of re-oxygenation from the overlying atmosphere. In summer, however, the thermocline greatly slows down that oxygen transfer from air to deep water, and in cooler climates, winter ice-cover acts as another barrier to oxygenation. In both of these seasons, the oxygen absorbed into the water during earlier periods of mixing may be rapidly consumed, leading to anoxic conditions. Part of the reason that lakes are continuously on the brink of anoxia is that only limited quantities of oxygen can be stored in water because of its low solubility. The concentration of oxygen in the air is 209 millilitres per litre […], but cold water in equilibrium with the atmosphere contains only 9ml/L […]. This scarcity of oxygen worsens with increasing temperature (from 4°C to 30°C the solubility of oxygen falls by 43 per cent), and it is compounded by faster rates of bacterial decomposition in warmer waters and thus a higher respiratory demand for oxygen.”
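
A small calculation of my own (not from the book) to put the quoted ml/L figures on the same scale as the 2mg/L hypoxia threshold; the O2 gas density used for the conversion is an assumed standard value:

```python
# Convert the quoted dissolved-oxygen figures from ml/L to mg/L and apply the
# quoted 43% solubility drop between 4 C and 30 C. The O2 gas density of
# ~1.43 mg per ml (at standard conditions) is my assumption, not the book's.
O2_MG_PER_ML = 1.43

cold_mg_per_l = 9.0 * O2_MG_PER_ML          # quoted 9 ml/L for cold water at equilibrium
warm_mg_per_l = cold_mg_per_l * (1 - 0.43)  # quoted 43% fall in solubility from 4 C to 30 C

print(round(cold_mg_per_l, 1))  # ~12.9 mg/L in cold water
print(round(warm_mg_per_l, 1))  # ~7.3 mg/L in warm water, still above the 2 mg/L hypoxia
                                # threshold at saturation; hypoxia therefore reflects
                                # consumption outpacing resupply, not solubility alone
```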

“Lake microbiomes play multiple roles in food webs as producers, parasites, and consumers, and as steps into the animal food chain […]. These diverse communities of microbes additionally hold centre stage in the vital recycling of elements within the lake ecosystem […]. These biogeochemical processes are not simply of academic interest; they totally alter the nutritional value, mobility, and even toxicity of elements. For example, sulfate is the most oxidized and also most abundant form of sulfur in natural waters, and it is the ion taken up by phytoplankton and aquatic plants to meet their biochemical needs for this element. These photosynthetic organisms reduce the sulfate to organic sulfur compounds, and once they die and decompose, bacteria convert these compounds to the rotten-egg smelling gas, H2S, which is toxic to most aquatic life. In anoxic waters and sediments, this effect is amplified by bacterial sulfate reducers that directly convert sulfate to H2S. Fortunately another group of bacteria, sulfur oxidizers, can use H2S as a chemical energy source, and in oxygenated waters they convert this reduced sulfur back to its benign, oxidized, sulfate form. […] [The] acid neutralizing capacity (or ‘alkalinity’) varies greatly among lakes. Many lakes in Europe, North America, and Asia have been dangerously shifted towards a low pH because they lacked sufficient carbonate to buffer the continuous input of acid rain that resulted from industrial pollution of the atmosphere. The acid conditions have negative effects on aquatic animals, including by causing a shift in aluminium to its more soluble and toxic form Al3+. Fortunately, these industrial emissions have been regulated and reduced in most of the developed world, although there are still legacy effects of acid rain that have resulted in a long-term depletion of carbonates and associated calcium in certain watersheds.”

“Rotifers, cladocerans, and copepods are all planktonic, that is their distribution is strongly affected by currents and mixing processes in the lake. However, they are also swimmers, and can regulate their depth in the water. For the smallest such as rotifers and copepods, this swimming ability is limited, but the larger zooplankton are able to swim over an impressive depth range during the twenty-four-hour ‘diel’ (i.e. light–dark) cycle. […] the cladocerans in Lake Geneva reside in the thermocline region and deep epilimnion during the day, and swim upwards by about 10m during the night, while cyclopoid copepods swim up by 60m, returning to the deep, dark, cold waters of the profundal zone during the day. Even greater distances up and down the water column are achieved by larger animals. The opossum shrimp, Mysis (up to 25mm in length) lives on the bottom of lakes during the day and in Lake Tahoe it swims hundreds of metres up into the surface waters, although not on moon-lit nights. In Lake Baikal, one of the main zooplankton species is the endemic amphipod, Macrohectopus branickii, which grows up to 38mm in size. It can form dense swarms at 100–200m depth during the day, but the populations then disperse and rise to the upper waters during the night. These nocturnal migrations connect the pelagic surface waters with the profundal zone in lake ecosystems, and are thought to be an adaptation towards avoiding visual predators, especially pelagic fish, during the day, while accessing food in the surface waters under the cover of nightfall. […] Although certain fish species remain within specific zones of the lake, there are others that swim among zones and access multiple habitats. […] This type of fish migration means that the different parts of the lake ecosystem are ecologically connected. For many fish species, moving between habitats extends all the way to the ocean. Anadromous fish migrate out of the lake and swim to the sea each year, and although this movement comes at considerable energetic cost, it has the advantage of access to rich marine food sources, while allowing the young to be raised in the freshwater environment with less exposure to predators. […] With the converse migration pattern, catadromous fish live in freshwater and spawn in the sea.”

“Invasive species that are the most successful and do the most damage once they enter a lake have a number of features in common: fast growth rates, broad tolerances, the capacity to thrive under high population densities, and an ability to disperse and colonize that is enhanced by human activities. Zebra mussels (Dreissena polymorpha) get top marks in each of these categories, and they have proven to be a troublesome invader in many parts of the world. […] A single Zebra mussel can produce up to one million eggs over the course of a spawning season, and these hatch into readily dispersed larvae (‘veligers’), that are free-swimming for up to a month. The adults can achieve densities up to hundreds of thousands per square metre, and their prolific growth within water pipes has been a serious problem for the cooling systems of nuclear and thermal power stations, and for the intake pipes of drinking water plants. A single Zebra mussel can filter a litre a day, and they have the capacity to completely strip the water of bacteria and protists. In Lake Erie, the water clarity doubled and diatoms declined by 80–90 per cent soon after the invasion of Zebra mussels, with a concomitant decline in zooplankton, and potential impacts on planktivorous fish. The invasion of this species can shift a lake from dominance of the pelagic to the benthic food web, but at the expense of native unionid clams on the bottom that can become smothered in Zebra mussels. Their efficient filtering capacity may also cause a regime shift in primary producers, from turbid waters with high concentrations of phytoplankton to a clearer lake ecosystem state in which benthic water plants dominate.”

“One of the many distinguishing features of H2O is its unusually high dielectric constant, meaning that it is a strongly polar solvent with positive and negative charges that can stabilize ions brought into solution. This dielectric property results from the asymmetrical electron cloud over the molecule […] and it gives liquid water the ability to leach minerals from rocks and soils as it passes through the ground, and to maintain these salts in solution, even at high concentrations. Collectively, these dissolved minerals produce the salinity of the water […] Sea water is around 35ppt, and its salinity is mainly due to the positively charged ions sodium (Na+), potassium (K+), magnesium (Mg2+), and calcium (Ca2+), and the negatively charged ions chloride (Cl-), sulfate (SO42-), and carbonate (CO32-). These solutes, collectively called the ‘major ions’, conduct electrons, and therefore a simple way to track salinity is to measure the electrical conductance of the water between two electrodes set a known distance apart. Lake and ocean scientists now routinely take profiles of salinity and temperature with a CTD: a submersible instrument that records conductance, temperature, and depth many times per second as it is lowered on a rope or wire down the water column. Conductance is measured in Siemens (or microSiemens (µS), given the low salt concentrations in freshwater lakes), and adjusted to a standard temperature of 25°C to give specific conductivity in µS/cm. All freshwater lakes contain dissolved minerals, with specific conductivities in the range 50–500µS/cm, while salt water lakes have values that can exceed sea water (about 50,000µS/cm), and are the habitats for extreme microbes”.
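
The ‘adjusted to a standard temperature of 25°C’ step is commonly done with a simple linear compensation of roughly 2 per cent per °C; the sketch below illustrates that convention with hypothetical numbers (the coefficient and the example reading are my assumptions, not values from the book):

```python
# Convert a raw conductance reading to specific conductivity at 25 C using a
# common linear temperature compensation of roughly 2% per degree C. Both the
# coefficient and the example reading are assumptions for illustration.
def specific_conductivity_25(ec_uS_cm, temp_c, alpha=0.02):
    """Temperature-compensated ('specific') conductivity in microSiemens per cm."""
    return ec_uS_cm / (1 + alpha * (temp_c - 25.0))

# A hypothetical CTD reading of 180 uS/cm taken at 8 C:
print(round(specific_conductivity_25(180.0, 8.0)))  # ~273 uS/cm, within the quoted freshwater range
```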

“The World Register of Dams currently lists 58,519 ‘large dams’, defined as those with a dam wall of 15m or higher; these collectively store 16,120km3 of water, equivalent to 213 years of flow of Niagara Falls on the USA–Canada border. […] Around a hundred large dam projects are in advanced planning or construction in Africa […]. More than 300 dams are planned or under construction in the Amazon Basin of South America […]. Reservoirs have a number of distinguishing features relative to natural lakes. First, the shape (‘morphometry’) of their basins is rarely circular or oval, but instead is often dendritic, with a tree-like main stem and branches ramifying out into the submerged river valleys. Second, reservoirs typically have a high catchment area to lake area ratio, again reflecting their riverine origins. For natural lakes, this ratio is relatively low […] These proportionately large catchments mean that reservoirs have short water residence times, and water quality is much better than might be the case in the absence of this rapid flushing. Nonetheless, noxious algal blooms can develop and accumulate in isolated bays and side-arms, and downstream next to the dam itself. Reservoirs typically experience water level fluctuations that are much larger and more rapid than in natural lakes, and this limits the development of littoral plants and animals. Another distinguishing feature of reservoirs is that they often show a longitudinal gradient of conditions. Upstream, the river section contains water that is flowing, turbulent, and well mixed; this then passes through a transition zone into the lake section up to the dam, which is often the deepest part of the lake and may be stratified and clearer due to decantation of land-derived particles. In some reservoirs, the water outflow is situated near the base of the dam within the hypolimnion, and this reduces the extent of oxygen depletion and nutrient build-up, while also providing cool water for fish and other animal communities below the dam. There is increasing attention being given to careful regulation of the timing and magnitude of dam outflows to maintain these downstream ecosystems. […] The downstream effects of dams continue out into the sea, with the retention of sediments and nutrients in the reservoir leaving less available for export to marine food webs. This reduction can also lead to changes in shorelines, with a retreat of the coastal delta and intrusion of seawater because natural erosion processes can no longer be offset by resupply of sediments from upstream.”
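
The ‘213 years of flow of Niagara Falls’ equivalence is easy to verify if one assumes a mean Niagara flow of roughly 2,400 m³/s (a commonly cited figure, not one given in the book):

```python
# Check the quoted equivalence between 16,120 km^3 of reservoir storage and
# 213 years of Niagara Falls flow. The mean flow of ~2,400 m^3/s is my own
# assumption, not a number from the book.
NIAGARA_MEAN_FLOW_M3_PER_S = 2400.0
SECONDS_PER_YEAR = 365.25 * 24 * 3600

annual_flow_km3 = NIAGARA_MEAN_FLOW_M3_PER_S * SECONDS_PER_YEAR / 1e9
print(round(annual_flow_km3))           # ~76 km^3 of water per year
print(round(16_120 / annual_flow_km3))  # ~213 years, matching the quoted figure
```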

“One of the most serious threats facing lakes throughout the world is the proliferation of algae and water plants caused by eutrophication, the overfertilization of waters with nutrients from human activities. […] Nutrient enrichment occurs both from ‘point sources’ of effluent discharged via pipes into the receiving waters, and ‘nonpoint sources’ such as the runoff from roads and parking areas, agricultural lands, septic tank drainage fields, and terrain cleared of its nutrient- and water-absorbing vegetation. By the 1970s, even many of the world’s larger lakes had begun to show worrying signs of deterioration from these sources of increasing enrichment. […] A sharp drop in water clarity is often among the first signs of eutrophication, although in forested areas this effect may be masked for many years by the greater absorption of light by the coloured organic materials that are dissolved within the lake water. A drop in oxygen levels in the bottom waters during stratification is another telltale indicator of eutrophication, with the eventual fall to oxygen-free (anoxic) conditions in these lower strata of the lake. However, the most striking impact with greatest effect on ecosystem services is the production of harmful algal blooms (HABs), specifically by cyanobacteria. In eutrophic, temperate latitude waters, four genera of bloom-forming cyanobacteria are the usual offenders […]. These may occur alone or in combination, and although each has its own idiosyncratic size, shape, and lifestyle, they have a number of impressive biological features in common. First and foremost, their cells are typically full of hydrophobic protein cases that exclude water and trap gases. These honeycombs of gas-filled chambers, called ‘gas vesicles’, reduce the density of the cells, allowing them to float up to the surface where there is light available for growth. Put a drop of water from an algal bloom under a microscope and it will be immediately apparent that the individual cells are extremely small, and that the bloom itself is composed of billions of cells per litre of lake water.”

“During the day, the [algal] cells capture sunlight and produce sugars by photosynthesis; this increases their density, eventually to the point where they are heavier than the surrounding water and sink to more nutrient-rich conditions at depth in the water column or at the sediment surface. These sugars are depleted by cellular respiration, and this loss of ballast eventually results in cells becoming less dense than water and floating again towards the surface. This alternation of sinking and floating can result in large fluctuations in surface blooms over the twenty-four-hour cycle. The accumulation of bloom-forming cyanobacteria at the surface gives rise to surface scums that then can be blown into bays and washed up onto beaches. These dense populations of colonies in the water column, and especially at the surface, can shade out bottom-dwelling water plants, as well as greatly reduce the amount of light for other phytoplankton species. The resultant ‘cyanobacterial dominance’ and loss of algal species diversity has negative implications for the aquatic food web […] This negative impact on the food web may be compounded by the final collapse of the bloom and its decomposition, resulting in a major drawdown of oxygen. […] Bloom-forming cyanobacteria are especially troublesome for the management of drinking water supplies. First, there is the overproduction of biomass, which results in a massive load of algal particles that can exceed the filtration capacity of a water treatment plant […]. Second, there is an impact on the taste of the water. […] The third and most serious impact of cyanobacteria is that some of their secondary compounds are highly toxic. […] phosphorus is the key nutrient limiting bloom development, and efforts to preserve and rehabilitate freshwaters should pay specific attention to controlling the input of phosphorus via point and nonpoint discharges to lakes.”

The viral shunt in marine foodwebs.
Proteobacteria. Alphaproteobacteria. Betaproteobacteria. Gammaproteobacteria.
Carbon cycle. Nitrogen cycle. Ammonification. Anammox. Comammox.
Phosphorus cycle.
Littoral zone. Limnetic zone. Profundal zone. Benthic zone. Benthos.
Phytoplankton. Diatom. Picoeukaryote. Flagellates. Cyanobacteria.
Trophic state (-index).
Amphipoda. Rotifer. Cladocera. Copepod. Daphnia.
Redfield ratio.
Extremophile. Halophile. Psychrophile. Acidophile.
Caspian Sea. Endorheic basin. Mono Lake.
Alpine lake.
Meromictic lake.
Subglacial lake. Lake Vostok.
Thermus aquaticus. Taq polymerase.
Lake Monoun.
Microcystin. Anatoxin-a.



February 2, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Engineering, Zoology | Leave a comment

Books 2018

This is a list of books I’ve read this year. As usual ‘f’ = fiction, ‘m’ = miscellaneous, ‘nf’ = non-fiction; the numbers in parentheses indicate my goodreads ratings of the books (from 1-5).

I’ll try to keep updating the post throughout the year.

i. Complexity: A Very Short Introduction (nf. Oxford University Press). Blog coverage here.

ii. Rivers: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here. Blog coverage here and here.

iii. Something for the Pain: Compassion and Burnout in the ER (2, m. W. W. Norton & Company/Paul Austin).

iv. Mountains: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here.

v. Water: A Very Short Introduction (4, nf. Oxford University Press). Goodreads review here.

vi. Assassin’s Quest (3, f). Robin Hobb. Goodreads review here.

vii. Oxford Handbook of Endocrinology and Diabetes (3rd edition) (5, nf. Oxford University Press). Goodreads review here. Blog coverage here, here, here, here, and here. I added this book to my list of favourite books on goodreads. Some of the specific chapters included are ‘book-equivalents’; this book is very long and takes a lot of work.

viii. Desolation Island (3, f). Patrick O’Brian.

ix. The Fortune of War (4, f). Patrick O’Brian.

x. Lakes: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

xi. The Surgeon’s Mate (4, f). Patrick O’Brian. Short goodreads review here.

xii. Domestication of Plants in the Old World: The Origin and Spread of Domesticated Plants in South-West Asia, Europe, and the Mediterranean Basin (5, nf. Oxford University Press). Goodreads review here. I added this book to my list of favourite books on goodreads.

xiii. The Ionian Mission (4, f). Patrick O’Brian.

xiv. Systems Biology: Functional Strategies of Living Organisms (4, nf. Springer). Blog coverage here, here, and here.

xv. Treason’s Harbour (4, f). Patrick O’Brian.

xvi. Peripheral Neuropathy – A New Insight into the Mechanism, Evaluation and Management of a Complex Disorder (3, nf. InTech). Blog coverage here and here.

xvii. The portable door (5, f). Tom Holt. Goodreads review here.

xviii. Prevention of Late-Life Depression: Current Clinical Challenges and Priorities (2, nf. Humana Press). Blog coverage here and here.

xix. In your dreams (4, f). Tom Holt.

xx. Earth, Air, Fire and Custard (3, f). Tom Holt. Short goodreads review here.

xxi. You Don’t Have to Be Evil to Work Here, But it Helps (3, f). Tom Holt.

xxii. The Ice Age: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

xxiii. The Better Mousetrap (4, f). Tom Holt.

xxiv. May Contain Traces of Magic (2, f). Tom Holt.

xxv. Expecting Someone Taller (4, f). Tom Holt.

xxvi. The Computer: A Very Short Introduction (2, nf. Oxford University Press). Short goodreads review here. Blog coverage here.

xxvii. Who’s Afraid of Beowulf? (5, f). Tom Holt.

xxviii. Flying Dutch (4, f). Tom Holt.

xxix. Ye Gods! (2, f). Tom Holt.

xxx. Marine Biology: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here and here.

xxxi. Here Comes The Sun (2, f). Tom Holt.

xxxii. Grailblazers (4, f). Tom Holt.

xxxiii. Oceans: A Very Short Introduction (2, nf. Oxford University Press). Very short goodreads review here.

xxxiv. Oxford Handbook of Medical Statistics (2, nf. Oxford University Press). Long, takes some work. Goodreads review here.

xxxv. Faust Among Equals (3, f). Tom Holt.

xxxvi. My Hero (3, f). Tom Holt. Short goodreads review here.

xxxvii. Odds and Gods (3, f). Tom Holt.

February 2, 2018 Posted by | Books, Personal | Leave a comment

Lakes (I)

“The aim of this book is to provide a condensed overview of scientific knowledge about lakes, their functioning as ecosystems that we are part of and depend upon, and their responses to environmental change. […] Each chapter briefly introduces concepts about the physical, chemical, and biological nature of lakes, with emphasis on how these aspects are connected, the relationships with human needs and impacts, and the implications of our changing global environment.”

I’m currently reading this book and I really like it so far. I have added some observations from the first half of the book and some coverage-related links below.

“High resolution satellites can readily detect lakes above 0.002 square kilometres (km2) in area; that’s equivalent to a circular waterbody some 50m across. Using this criterion, researchers estimate from satellite images that the world contains 117 million lakes, with a total surface area amounting to 5 million km2. […] continuous accumulation of materials on the lake floor, both from inflows and from the production of organic matter within the lake, means that lakes are ephemeral features of the landscape, and from the moment of their creation onwards, they begin to fill in and gradually disappear. The world’s deepest and most ancient freshwater ecosystem, Lake Baikal in Russia (Siberia), is a compelling example: it has a maximum depth of 1,642m, but its waters overlie a much deeper basin that over the twenty-five million years of its geological history has become filled with some 7,000m of sediments. Lakes are created in a great variety of ways: tectonic basins formed by movements in the Earth’s crust, the scouring and residual ice effects of glaciers, as well as fluvial, volcanic, riverine, meteorite impacts, and many other processes, including human construction of ponds and reservoirs. Tectonic basins may result from a single fault […] or from a series of intersecting fault lines. […] The oldest and deepest lakes in the world are generally of tectonic origin, and their persistence through time has allowed the evolution of endemic plants and animals; that is, species that are found only at those sites.”
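
A trivial check (mine, not the book's) that the 0.002 km² detection threshold and the ‘50m across’ description are consistent:

```python
import math

# Area of a circular waterbody ~50 m across, for comparison with the quoted
# satellite detection threshold of 0.002 km^2.
diameter_m = 50.0
area_km2 = math.pi * (diameter_m / 2) ** 2 / 1e6
print(round(area_km2, 4))  # ~0.002 km^2
```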

“In terms of total numbers, most of the world’s lakes […] owe their origins to glaciers that during the last ice age gouged out basins in the rock and deepened river valleys. […] As the glaciers retreated, their terminal moraines (accumulations of gravel and sediments) created dams in the landscape, raising water levels or producing new lakes. […] During glacial retreat in many areas of the world, large blocks of glacial ice broke off and were left behind in the moraines. These subsequently melted out to produce basins that filled with water, called ‘kettle’ or ‘pothole’ lakes. Such waterbodies are well known across the plains of North America and Eurasia. […] The most violent of lake births are the result of volcanoes. The craters left behind after a volcanic eruption can fill with water to form small, often circular-shaped and acidic lakes. […] Much larger lakes are formed by the collapse of a magma chamber after eruption to produce caldera lakes. […] Craters formed by meteorite impacts also provide basins for lakes, and have proved to be of great scientific as well as human interest. […] There was a time when limnologists paid little attention to small lakes and ponds, but this has changed with the realization that although such waterbodies are modest in size, they are extremely abundant throughout the world and make up a large total surface area. Furthermore, these smaller waterbodies often have high rates of chemical activity such as greenhouse gas production and nutrient cycling, and they are major habitats for diverse plants and animals”.

“For Forel, the science of lakes could be subdivided into different disciplines and subjects, all of which continue to occupy the attention of freshwater scientists today […]. First, the physical environment of a lake includes its geological origins and setting, the water balance and exchange of heat with the atmosphere, as well as the penetration of light, the changes in temperature with depth, and the waves, currents, and mixing processes that collectively determine the movement of water. Second, the chemical environment is important because lake waters contain a great variety of dissolved materials (‘solutes’) and particles that play essential roles in the functioning of the ecosystem. Third, the biological features of a lake include not only the individual species of plants, microbes, and animals, but also their organization into food webs, and the distribution and functioning of these communities across the bottom of the lake and in the overlying water.”

“In the simplest hydrological terms, lakes can be thought of as tanks of water in the landscape that are continuously topped up by their inflowing rivers, while spilling excess water via their outflow […]. Based on this model, we can pose the interesting question: how long does the average water molecule stay in the lake before leaving at the outflow? This value is referred to as the water residence time, and it can be simply calculated as the total volume of the lake divided by the water discharge at the outlet. This lake parameter is also referred to as the ‘flushing time’ (or ‘flushing rate’, if expressed as a proportion of the lake volume discharged per unit of time) because it provides an estimate of how fast mineral salts and pollutants can be flushed out of the lake basin. In general, lakes with a short flushing time are more resilient to the impacts of human activities in their catchments […] Each lake has its own particular combination of catchment size, volume, and climate, and this translates into a water residence time that varies enormously among lakes [from perhaps a month to more than a thousand years, US] […] A more accurate approach towards calculating the water residence time is to consider the question: if the lake were to be pumped dry, how long would it take to fill it up again? For most lakes, this will give a similar value to the outflow calculation, but for lakes where evaporation is a major part of the water balance, the residence time will be much shorter.”
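
The residence-time calculation described here is just lake volume divided by outflow discharge; a minimal sketch with hypothetical numbers:

```python
# Water residence time (flushing time) = lake volume / outflow discharge.
# The lake volume and discharge below are hypothetical, chosen only to show the units.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def residence_time_years(volume_m3, outflow_m3_per_s):
    """Average time a water molecule spends in the lake before leaving at the outflow."""
    return volume_m3 / outflow_m3_per_s / SECONDS_PER_YEAR

t = residence_time_years(volume_m3=1e9, outflow_m3_per_s=10.0)  # a 1 km^3 lake drained by a 10 m^3/s outflow
print(round(t, 1))      # ~3.2 years
print(round(1 / t, 2))  # flushing rate: ~0.32 lake volumes discharged per year
```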

“Each year, mineral and organic particles are deposited by wind on the lake surface and are washed in from the catchment, while organic matter is produced within the lake by aquatic plants and plankton. There is a continuous rain of this material downwards, ultimately accumulating as an annual layer of sediment on the lake floor. These lake sediments are storehouses of information about past changes in the surrounding catchment, and they provide a long-term memory of how the limnology of a lake has responded to those changes. The analysis of these natural archives is called ‘palaeolimnology’ (or ‘palaeoceanography’ for marine studies), and this branch of the aquatic sciences has yielded enormous insights into how lakes change through time, including the onset, effects, and abatement of pollution; changes in vegetation both within and outside the lake; and alterations in regional and global climate.”

“Sampling for palaeolimnological analysis is typically undertaken in the deepest waters to provide a more integrated and complete picture of the lake basin history. This is also usually the part of the lake where sediment accumulation has been greatest, and where the disrupting activities of bottom-dwelling animals (‘bioturbation’ of the sediments) may be reduced or absent. […] Some of the most informative microfossils to be found in lake sediments are diatoms, an algal group that has cell walls (‘frustules’) made of silica glass that resist decomposition. Each lake typically contains dozens to hundreds of different diatom species, each with its own characteristic set of environmental preferences […]. A widely adopted approach is to sample many lakes and establish a statistical relationship or ‘transfer function’ between diatom species composition (often by analysis of surface sediments) and a lake water variable such as temperature, pH, phosphorus, or dissolved organic carbon. This quantitative species–environment relationship can then be applied to the fossilized diatom species assemblage in each stratum of a sediment core from a lake in the same region, and in this way the physical and chemical fluctuations that the lake has experienced in the past can be reconstructed or ‘hindcast’ year-by-year. Other fossil indicators of past environmental change include algal pigments, DNA of algae and bacteria including toxic bloom species, and the remains of aquatic animals such as ostracods, cladocerans, and larval insects.”
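
A transfer function of the kind described can be sketched compactly with the classic weighted-averaging approach (taxon optima estimated from a modern calibration set, then applied to a fossil assemblage); the toy numbers below are invented purely for illustration, and a real analysis would use far more lakes and taxa plus a deshrinking correction:

```python
import numpy as np

# Toy weighted-averaging transfer function (a common approach in palaeolimnology).
# Rows: 4 modern calibration lakes of known pH; columns: relative abundances of 3 diatom taxa.
abundance = np.array([[0.7, 0.2, 0.1],
                      [0.4, 0.4, 0.2],
                      [0.1, 0.5, 0.4],
                      [0.0, 0.3, 0.7]])
lake_ph = np.array([5.5, 6.5, 7.2, 8.0])

# Regression step: each taxon's pH optimum is the abundance-weighted mean of lake pH.
optima = (abundance * lake_ph[:, None]).sum(axis=0) / abundance.sum(axis=0)

# Calibration step: 'hindcast' the pH for a fossil assemblage from one sediment
# stratum as the abundance-weighted mean of the taxon optima.
fossil_assemblage = np.array([0.5, 0.3, 0.2])
reconstructed_ph = (fossil_assemblage * optima).sum() / fossil_assemblage.sum()

print(optima.round(2))             # per-taxon pH optima
print(round(reconstructed_ph, 2))  # inferred lake pH for that stratum
```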

“In lake and ocean studies, the penetration of sunlight into the water can be […] precisely measured with an underwater light meter (submersible radiometer), and such measurements always show that the decline with depth follows a sharp curve rather than a straight line […]. This is because the fate of sunlight streaming downwards in water is dictated by the probability of the photons being absorbed or deflected out of the light path; for example, a 50 per cent probability of photons being lost from the light beam by these processes per metre depth in a lake would result in sunlight values dropping from 100 per cent at the surface to 50 per cent at 1m, 25 per cent at 2m, 12.5 per cent at 3m, and so on. The resulting exponential curve means that for all but the clearest of lakes, there is only enough solar energy for plants, including photosynthetic cells in the plankton (phytoplankton), in the upper part of the water column. […] The depth limit for underwater photosynthesis or primary production is known as the ‘compensation depth‘. This is the depth at which carbon fixed by photosynthesis exactly balances the carbon lost by cellular respiration, so the overall production of new biomass (net primary production) is zero. This depth often corresponds to an underwater light level of 1 per cent of the sunlight just beneath the water surface […] The production of biomass by photosynthesis takes place at all depths above this level, and this zone is referred to as the ‘photic’ zone. […] biological processes in [the] ‘aphotic zone’ are mostly limited to feeding and decomposition. A Secchi disk measurement can be used as a rough guide to the extent of the photic zone: in general, the 1 per cent light level is about twice the Secchi depth.”
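
The exponential decline and the 1 per cent compensation level translate directly into I(z) = I(0)·exp(−Kd·z); the sketch below reproduces the 100/50/25/12.5 per cent example from the text and the ‘photic zone is roughly twice the Secchi depth’ rule of thumb:

```python
import math

# Exponential decline of sunlight with depth: I(z) = I(0) * exp(-Kd * z),
# where Kd is the attenuation coefficient (per metre).
def light_fraction(depth_m, kd_per_m):
    return math.exp(-kd_per_m * depth_m)

# The 50%-per-metre example in the text corresponds to Kd = ln(2) per metre:
kd = math.log(2)
print([round(100 * light_fraction(z, kd), 1) for z in (0, 1, 2, 3)])  # [100.0, 50.0, 25.0, 12.5]

# Photic zone depth (the 1% light level) = ln(100) / Kd:
photic_depth_m = math.log(100) / kd
print(round(photic_depth_m, 1))      # ~6.6 m for this Kd
print(round(photic_depth_m / 2, 1))  # rough Secchi depth, using the '1% level ~ 2 x Secchi' rule
```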

“[W]ater colour is now used in […] many powerful ways to track changes in water quality and other properties of lakes, rivers, estuaries, and the ocean. […] Lakes have different colours, hues, and brightness levels as a result of the materials that are dissolved and suspended within them. The purest of lakes are deep blue because the water molecules themselves absorb light in the green and, to a greater extent, red end of the spectrum; they scatter the remaining blue photons in all directions, mostly downwards but also back towards our eyes. […] Algae in the water typically cause it to be green and turbid because their suspended cells and colonies contain chlorophyll and other light-capturing molecules that absorb strongly in the blue and red wavebands, but not green. However there are some notable exceptions. Noxious algal blooms dominated by cyanobacteria are blue-green (cyan) in colour caused by their blue-coloured protein phycocyanin, in addition to chlorophyll.”

“[A]t the largest dimension, at the scale of the entire lake, there has to be a net flow from the inflowing rivers to the outflow, and […] from this landscape perspective, lakes might be thought of as enlarged rivers. Of course, this riverine flow is constantly disrupted by wind-induced movements of the water. When the wind blows across the surface, it drags the surface water with it to generate a downwind flow, and this has to be balanced by a return movement of water at depth. […] In large lakes, the rotation of the Earth has plenty of time to exert its weak effect as the water moves from one side of the lake to the other. As a result, the surface water no longer flows in a straight line, but rather is directed into two or more circular patterns or gyres that can move nearshore water masses rapidly into the centre of the lake and vice versa. Gyres can therefore be of great consequence […] Unrelated to the Coriolis Effect, the interaction between wind-induced currents and the shoreline can also cause water to flow in circular, individual gyres, even in smaller lakes. […] At a much smaller scale, the blowing of wind across a lake can give rise to downward spiral motions in the water, called ‘Langmuir cells’. […] These circulation features are commonly observed in lakes, where the spirals progressing in the general direction of the wind concentrate foam (on days of white-cap waves) or glossy, oily materials (on less windy days) into regularly spaced lines that are parallel to the direction of the wind. […] Density currents must also be included in this brief discussion of water movement […] Cold river water entering a warm lake will be denser than its surroundings and therefore sinks to the bottom, where it may continue to flow for considerable distances. […] Density currents contribute greatly to inshore-offshore exchanges of water, with potential effects on primary productivity, deep-water oxygenation, and the dispersion of pollutants.”


Drainage basin.
Lake Geneva. Lake Malawi. Lake Tanganyika. Lake Victoria. Lake Biwa. Lake Titicaca.
English Lake District.
Proglacial lake. Lake Agassiz. Lake Ojibway.
Lake Taupo.
Manicouagan Reservoir.
Subglacial lake.
Thermokarst (-lake).
Bathymetry. Bathymetric chart. Hypsographic curve.
Várzea forest.
Lake Chad.
Colored dissolved organic matter.
H2O Temperature-density relationship. Thermocline. Epilimnion. Hypolimnion. Monomictic lake. Dimictic lake. Lake stratification.
Capillary wave. Gravity wave. Seiche. Kelvin wave. Poincaré wave.
Benthic boundary layer.
Kelvin–Helmholtz instability.

January 22, 2018 Posted by | Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics | Leave a comment

Magnus Carlsen playing bullet on Lichess

This guy’s ‘pretty good’. Here’s an unrelated video of Svidler and Carlsen analyzing their game at Wijk aan Zee:

(There should be more videos like this! This stuff’s awesome!)

The first round of the Pro Chess League was played earlier this week. This is a pretty good time to be alive if you like chess.

January 20, 2018 Posted by | Chess | 1 Comment


The great majority of the words included below are words I encountered while reading Gene Wolfe’s The Shadow of the Torturer. The rest are words I encountered while reading The Oxford Handbook of Endocrinology and Diabetes as well as various ‘A Very Short Introduction to…’-books.

Coloboma. Paresis. Exstrophy. Transhumance. Platybasia. Introitus. Ichthyology. Atresia. Nival. Dormer. Tussock. Mullion. Tholus. Delectation. Carnelian. Camisa. Soubrette. Cacogenic. Anacrisis. Sedge.

Barbican. Gallipot. Stele. Badelaire. Chalcedony. Helve. Armiger. Caracara. Saros. Blazon. Presentment. Refectory. Citrine. Eidolon. Obverse. Glaive. Inutile. Hypostase. Leman. Pursuivant.

Cabochon. Palfrenier. Limpid. Burse. Thurible. Anacreontic. Pardine. Nigrescent. Chrism. Pageantry. Capybara. Tinsel. Rebec. Shewbread. Excruciation. Cataphract. Sateen. Dhow. Rheostat. Caique.

Baldric. Paterissa. Bartizan. Peltast. Dray. Lochage. Miter. Discommode. Lambrequin. Dross. Proscenium. Jelab. Cymar/simar. Vicuna. Monomachy. Champian. Dulcimer. Lamia. Nidorous. Mensal.

January 19, 2018 Posted by | Books, Language | Leave a comment

Endocrinology (part 3 – adrenal glands)

Some observations from chapter 3 below.

“The normal adrenal gland weighs 4-5g. The cortex represents 90% of the normal gland and surrounds the medulla. […] Glucocorticoid (cortisol […]) production occurs from the zona fasciculata, and adrenal androgens arise from the zona reticularis. Both of these are under the control of ACTH [see also my previous post about the book – US], which regulates both steroid synthesis and also adrenocortical growth. […] Mineralocorticoid (aldosterone […]) synthesis occurs in zona glomerulosa, predominantly under the control of the renin-angiotensin system […], although ACTH also contributes to its regulation. […] The adrenal gland […] also produces sex steroids in the form of dehydroepiandrostenedione (DHEA) and androstenedione. The synthetic pathway is under the control of ACTH. Urinary steroid profiling provides quantitative information on the biosynthetic and catabolic pathways. […] CT is the most widely used modality for imaging the adrenal glands. […] MRI can also reliably detect adrenal masses >5-10mm in diameter and, in some circumstances, provides additional information to CT […] PET can be useful in locating tumours and metastases. […] Adrenal vein sampling (AVS) […] can be useful to lateralize an adenoma or to differentiate an adenoma from bilateral hyperplasia. […] AVS is of particular value in lateralizing small aldosterone-producing adenomas that cannot easily be visualized on CT or MRI. […] The procedure should only be undertaken in patients in whom surgery is feasible and desired […] [and] should be carried out in specialist centres only; centres with <20 procedures per year have been shown to have poor success rates”.

“The majority of cases of mineralocorticoid excess are due to excess aldosterone production, […] typically associated with hypertension and hypokalemia. *Primary hyperaldosteronism is a disorder of autonomous aldosterone hypersecretion with suppressed renin levels. *Secondary hyperaldosteronism occurs when aldosterone hypersecretion occurs 2° [secondary, US] to elevated circulating renin levels. This is typical of heart failure, cirrhosis, or nephrotic syndrome but can also be due to renal artery stenosis and, occasionally, a very rare renin-producing tumour (reninoma). […] Primary hyperaldosteronism is present in around 10% of hypertensive patients. It is the most prevalent form of secondary hypertension. […] Aldosterone causes renal sodium retention and potassium loss. This results in expansion of body sodium content, leading to suppression of renal renin synthesis. The direct action of aldosterone on the distal nephron causes sodium retention and loss of hydrogen and potassium ions, resulting in a hypokalaemic alkalosis, although serum potassium […] may be normal in up to 50% of cases. Aldosterone has pathophysiological effects on a range of other tissues, causing cardiac fibrosis, vascular endothelial dysfunction, and nephrosclerosis. […] hypertension […] is often resistant to conventional therapy. […] Hypokalaemia is usually asymptomatic. […] Occasionally, the clinical syndrome of hyperaldosteronism is not associated with excess aldosterone. […] These conditions are rare.”

“Bilateral adrenal hyperplasia [make up] 60% [of cases of primary hyperaldosteronism]. […] Conn’s syndrome (aldosterone-producing adrenal adenoma) [make up] 35%. […] The pathophysiology of bilateral adrenal hyperplasia is not understood, and it is possible that it represents an extreme end of the spectrum of low renin essential hypertension. […] Aldosterone-producing carcinoma[s] [are] [r]are and usually associated with excessive secretion of other corticosteroids (cortisol, androgen, oestrogen). […] Indications [for screening include:] *Patients resistant to conventional antihypertensive medication (i.e. not controlled on three agents). *Hypertension associated with hypokalaemia […] *Hypertension developing before age of 40 years. […] Confirmation of autonomous aldosterone production is made by demonstrating failure to suppress aldosterone in face of sodium/volume loading. […] A number of tests have been described that are said to differentiate between the various subtypes of 1° [primary, US] aldosteronism […]. However, none of these are sufficiently specific to influence management decisions”.

“Laparoscopic adrenalectomy is the treatment of choice for aldosterone-secreting adenomas […] and laparoscopic adrenalectomy […] has become the procedure of choice for removal of most adrenal tumours. *Hypertension is cured in about 70%. *If it persists […], it is more amenable to medical treatment. *Overall, 50% become normotensive in 1 month and 70% within 1 year. […] Medical therapy remains an option for patients with bilateral disease and those with a solitary adrenal adenoma who are unlikely to be cured by surgery, who are unfit for operation, or who express a preference for medical management. *The mineralocorticoid receptor antagonist spironolactone […] has been used successfully for many years to treat hypertension and hypokalaemia associated with bilateral adrenal hyperplasia […] Side effects are common – particularly gynaecomastia and impotence in ♂, menstrual irregularities in ♀, and GI effects. […] Eplerenone […] is a mineralocorticoid receptor antagonist without antiandrogen effects and hence greater selectivity and less side effects than spironolactone. *Alternative drugs include the potassium-sparing diuretics amiloride and triamterene.”

“Cushing’s syndrome results from chronic excess cortisol [see also my second post in this series] […] The causes may be classified as ACTH-dependent and ACTH-independent. […] ACTH-independent Cushing’s syndrome […] is due to adrenal tumours (benign and malignant), and is responsible for 10-15% of cases of Cushing’s syndrome. […] Benign adrenocortical adenomas (ACA) are usually encapsulated and <4cm in diameter. They are usually associated with pure glucocorticoid excess. *Adrenocortical carcinomas (ACC) are usually >6cm in diameter, […] and are not infrequently associated with local invasion and metastases at the time of diagnosis. Adrenal carcinomas are characteristically associated with the excess secretion of several hormones; most frequently found is the combination of cortisol and androgen (precursors) […] ACTH-dependent Cushing’s results in bilateral adrenal hyperplasia, thus one has to firmly differentiate between ACTH-dependent and independent causes of Cushing’s before assuming bilateral adrenal hyperplasia as the primary cause of disease. […] It is important to note that, in patients with adrenal carcinoma, there may also be features related to excessive androgen production in ♀ and also a relatively more rapid time course of development of the syndrome. […] Patients with ACTH-independent Cushing’s syndrome do not suppress cortisol […] on high-dose dexamethasone testing and fail to show a rise in cortisol and ACTH following administration of CRH. […] ACTH-independent causes are adrenal in origin, and the mainstay of further investigation is adrenal imaging by CT”.

“Adrenal adenomas, which are successfully treated with surgery, have a good prognosis, and recurrence is unlikely. […] Bilateral adrenalectomy [in the context of bilateral adrenal hyperplasia] is curative. Lifelong glucocorticoid and mineralocorticoid treatment is [however] required. […] The prognosis for adrenal carcinoma is very poor despite surgery. Reports suggest a 5-year survival of 22% and median survival time of 14 months […] Treatment of adrenocortical carcinoma (ACC) should be carried out in a specialist centre, with expert surgeons, oncologists, and endocrinologists with extensive experience in treating ACC. This improves survival.”

“Adrenal insufficiency [AI, US] is defined by the lack of cortisol, i.e. glucocorticoid deficiency, and may be due to destruction of the adrenal cortex (1°, Addison’s disease and congenital adrenal hyperplasia (CAH) […] or due to disordered pituitary and hypothalamic function (2°). […] *Permanent adrenal insufficiency is found in 5 in 10,000 population. *The most frequent cause is hypothalamic-pituitary damage, which is the cause of AI in 60% of affected patients. *The remaining 40% of cases are due to primary failure of the adrenal to synthesize cortisol, almost equal prevalence of Addison’s disease (mostly of autoimmune origin, prevalence 0.9-1.4 in 10,000) and congenital adrenal hyperplasia (0.7-1.0 in 10,000). *2° adrenal insufficiency due to suppression of pituitary-hypothalamic function by exogenously administered, supraphysiological glucocorticoid doses for treatment of, for example, COPD or rheumatoid arthritis, is much more common (50-200 in 10,000 population). However, adrenal function in these patients can recover”.

“[In primary AI] [a]drenal gland destruction or dysfunction occurs due to a disease process which usually involves all three zones of the adrenal cortex, resulting in inadequate glucocorticoid, mineralocorticoid, and adrenal androgen precursor secretion. The manifestations of insufficiency do not usually appear until at least 90% of the gland has been destroyed and are usually gradual in onset […] Acute adrenal insufficiency may occur in the context of acute septicaemia […] Mineralocorticoid deficiency leads to reduced sodium retention and hyponatraemia and hypotension […] Androgen deficiency presents in ♀ with reduced axillary and pubic hair and reduced libido. (Testicular production of androgens is more important in ♂). [In secondary AI] [i]nadequate ACTH results in deficient cortisol production (and ↓ androgens in ♀). […] Mineralocorticoid secretion remains normal […] The onset is usually gradual, with partial ACTH deficiency resulting in reduced response to stress. […] Lack of stimulation of skin MC1R due to ACTH deficiency results in pale skin appearance. […] [In 1° adrenal insufficiency] hyponatraemia is present in 90% and hyperkalaemia in 65%. […] Undetectable serum cortisol is diagnostic […], but the basal cortisol is often in the normal range. A cortisol >550nmol/L precludes the diagnosis. At times of acute stress, an inappropriately low cortisol is very suggestive of the diagnosis.”

“Autoimmune adrenalitis[:] Clinical features[:] *Anorexia and weight loss (>90%). *Tiredness. *Weakness – generalized, no particular muscle groups. […] Dizziness and postural hypotension. *GI symptoms – nausea and vomiting, abdominal pain, diarrhea. *Arthralgia and myalgia. […] *Mediated by humoral and cell-mediated immune mechanisms. Autoimmune insufficiency associated with polyglandular autoimmune syndrome is more common in ♀ (70%). *Adrenal cortex antibodies are present in the majority of patients at diagnosis, and […] they are still found in approximately 70% of patients 10 years later. Up to 20% patients/year with [positive] antibodies develop adrenal insufficiency. […] *Antiadrenal antibodies are found in <2% of patients with other autoimmune endocrine disease (Hashimoto’s thyroiditis, diabetes mellitus, autoimmune hypothyroidism, hypoparathyroidism, pernicious anemia). […] antibodies to other endocrine glands are commonly found in patients with autoimmune adrenal insufficiency […] However, the presence of antibodies does not predict subsequent manifestation of organ-specific autoimmunity. […] Patients with type 1 diabetes mellitus and autoimmune thyroid disease only rarely develop autoimmune adrenal insufficiency. Approximately 60% of patients with Addison’s disease have other autoimmune or endocrine disorders. […] The adrenals are small and atrophic in chronic autoimmune adrenalitis.”

“Autoimmune polyglandular syndrome (APS) type 1[:] *Also known as autoimmune polyendocrinopathy, candidiasis, and ectodermal dystrophy (APECED). […] [C]hildhood onset. *Chronic mucocutaneous candidiasis. *Hypoparathyroidism (90%), 1° adrenal insufficiency (60%). *1° gonadal failure (41%) – usually after Addison’s diagnosis. *1° hypothyroidism. *Rarely hypopituitarism, diabetes insipidus, type 1 diabetes mellitus. […] APS type 2[:] *Adult onset. *Adrenal insufficiency (100%). 1° autoimmune thyroid disease (70%) […] Type 1 diabetes mellitus (5-20%) – often before Addison’s diagnosis. *1° gonadal failure in affected women (5-20%). […] Schmidt’s syndrome: *Addison’s disease, and *Autoimmune hypothyroidism. *Carpenter syndrome: *Addison’s disease, and *Autoimmune hypothyroidism, and/or *Type 1 diabetes mellitus.”

“An adrenal incidentaloma is an adrenal mass that is discovered incidentally upon imaging […] carried out for reasons other than a suspected adrenal pathology. […] *Autopsy studies suggest a prevalence of adrenal masses of 1-6% in the general population. *Imaging studies suggest that adrenal masses are present in 2-3% of the general population. Prevalence increases with ageing, and 8-10% of 70-year-olds harbour an adrenal mass. […] It is important to determine whether the incidentally discovered adrenal mass is: *Malignant. *Functioning and associated with excess hormonal secretion.”

January 17, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Immunology, Medicine, Nephrology, Pharmacology | Leave a comment

Rivers (II)

Some more observations from the book and related links below.

“By almost every measure, the Amazon is the greatest of all the large rivers. Encompassing more than 7 million square kilometres, its drainage basin is the largest in the world and makes up 5% of the global land surface. The river accounts for nearly one-fifth of all the river water discharged into the oceans. The flow is so great that water from the Amazon can still be identified 125 miles out in the Atlantic […] The Amazon has some 1,100 tributaries, and 7 of these are more than 1,600 kilometres long. […] In the lowlands, most Amazonian rivers have extensive floodplains studded with thousands of shallow lakes. Up to one-quarter of the entire Amazon Basin is periodically flooded, and these lakes become progressively connected with each other as the water level rises.”

“To hydrologists, the term ‘flood’ refers to a river’s annual peak discharge period, whether the water inundates the surrounding landscape or not. In more common parlance, however, a flood is synonymous with the river overflowing its banks […] Rivers flood in the natural course of events. This often occurs on the floodplain, as the name implies, but flooding can affect almost all of the length of the river. Extreme weather, particularly heavy or protracted rainfall, is the most frequent cause of flooding. The melting of snow and ice is another common cause. […] River floods are one of the most common natural hazards affecting human society, frequently causing social disruption, material damage, and loss of life. […] Most floods have a seasonal element in their occurrence […] It is a general rule that the magnitude of a flood is inversely related to its frequency […] Many of the less predictable causes of flooding occur after a valley has been blocked by a natural dam as a result of a landslide, glacier, or lava flow. Natural dams may cause upstream flooding as the blocked river forms a lake and downstream flooding as a result of failure of the dam.”
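
The inverse relationship between flood magnitude and frequency mentioned above is usually expressed via ‘return periods’. As a rough illustration (not from the book), the sketch below applies the standard Weibull plotting-position formula to a made-up series of annual peak discharges:

```python
# A minimal sketch (not from the book) of how hydrologists quantify the
# inverse relationship between flood magnitude and frequency, using the
# standard Weibull plotting-position estimate of return period.
# The discharge values below are invented for illustration.

annual_peak_discharge = [820, 1450, 960, 2100, 1100, 760, 1800, 1300, 950, 1650]  # m^3/s

n = len(annual_peak_discharge)
ranked = sorted(annual_peak_discharge, reverse=True)  # rank 1 = largest flood on record

for rank, q in enumerate(ranked, start=1):
    return_period = (n + 1) / rank          # average interval (years) between floods >= q
    exceedance_prob = 1 / return_period     # chance of exceedance in any given year
    print(f"Q = {q:5d} m^3/s  T = {return_period:4.1f} yr  P = {exceedance_prob:.2f}")
```

The larger the flood, the longer its estimated return period and the smaller its annual exceedance probability, which is exactly the magnitude–frequency rule quoted above.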

“The Tigris-Euphrates, Nile, and Indus are all large, exotic river systems, but in other respects they are quite different. The Nile has a relatively gentle gradient in Egypt and a channel that has experienced only small changes over the last few thousand years, by meander cut-off and a minor shift eastwards. The river usually flooded in a regular and predictable way. The stability and long continuity of the Egyptian civilization may be a reflection of its river’s relative stability. The steeper channel of the Indus, by contrast, has experienced major avulsions over great distances on the lower Indus Plain and some very large floods caused by the failure of glacier ice dams in the Himalayan mountains. Likely explanations for the abandonment of many Harappan cities […] take account of damage caused by major floods and/or the disruption caused by channel avulsion leading to a loss of water supply. Channel avulsion was also a problem for the Sumerian civilization on the alluvial plain called Mesopotamia […] known for the rise and fall of its numerous city states. Most of these cities were situated along the Euphrates River, probably because it was more easily controlled for irrigation purposes than the Tigris, which flowed faster and carried much more water. However, the Euphrates was an anastomosing river with multiple channels that diverge and rejoin. Over time, individual branch channels ceased to flow as others formed, and settlements located on these channels inevitably declined and were abandoned as their water supply ran dry, while others expanded as their channels carried greater amounts of water.”

“During the colonization of the Americas in the mid-18th century and the imperial expansion into Africa and Asia in the late 19th century, rivers were commonly used as boundaries because they were the first, and frequently the only, features mapped by European explorers. The diplomats in Europe who negotiated the allocation of colonial territories claimed by rival powers knew little of the places they were carving up. Often, their limited knowledge was based solely on maps that showed few details, rivers being the only distinct physical features marked. Today, many international river boundaries remain as legacies of those historical decisions based on poor geographical knowledge because states have been reluctant to alter their territorial boundaries from original delimitation agreements. […] no less than three-quarters of the world’s international boundaries follow rivers for at least part of their course. […] approximately 60% of the world’s fresh water is drawn from rivers shared by more than one country.”

“The sediments carried in rivers, laid down over many years, represent a record of the changes that have occurred in the drainage basin through the ages. Analysis of these sediments is one way in which physical geographers can interpret the historical development of landscapes. They can study the physical and chemical characteristics of the sediments themselves and/or the biological remains they contain, such as pollen or spores. […] The simple rate at which material is deposited by a river can be a good reflection of how conditions have changed in the drainage basin. […] Pollen from surrounding plants is often found in abundance in fluvial sediments, and the analysis of pollen can yield a great deal of information about past conditions in an area. […] Very long sediment cores taken from lakes and swamps enable us to reconstruct changes in vegetation over very long time periods, in some cases over a million years […] Because climate is a strong determinant of vegetation, pollen analysis has also proved to be an important method for tracing changes in past climates.”

“The energy in flowing and falling water has been harnessed to perform work by turning water-wheels for more than 2,000 years. The moving water turns a large wheel and a shaft connected to the wheel axle transmits the power from the water through a system of gears and cogs to work machinery, such as a millstone to grind corn. […] The early medieval watermill was able to do the work of between 30 and 60 people, and by the end of the 10th century in Europe, waterwheels were commonly used in a wide range of industries, including powering forge hammers, oil and silk mills, sugar-cane crushers, ore-crushing mills, breaking up bark in tanning mills, pounding leather, and grinding stones. Nonetheless, most were still used for grinding grains for preparation into various types of food and drink. The Domesday Book, a survey prepared in England in AD 1086, lists 6,082 watermills, although this is probably a conservative estimate because many mills were not recorded in the far north of the country. By 1300, this number had risen to exceed 10,000. […] Medieval watermills typically powered their wheels by using a dam or weir to concentrate the falling water and pond a reserve supply. These modifications to rivers became increasingly common all over Europe, and by the end of the Middle Ages, in the mid-15th century, watermills were in use on a huge number of rivers and streams. The importance of water power continued into the Industrial Revolution […]. The early textile factories were built to produce cloth using machines driven by waterwheels, so they were often called mills. […] [Today,] about one-third of all countries rely on hydropower for more than half their electricity. Globally, hydropower provides about 20% of the world’s total electricity supply.”

“Deliberate manipulation of river channels through engineering works, including dam construction, diversion, channelization, and culverting, […] has a long history. […] In Europe today, almost 80% of the total discharge of the continent’s major rivers is affected by measures designed to regulate flow, whether for drinking water supply, hydroelectric power generation, flood control, or any other reason. The proportion in individual countries is higher still. About 90% of rivers in the UK are regulated as a result of these activities, while in the Netherlands this percentage is close to 100. By contrast, some of the largest rivers on other continents, including the Amazon and the Congo, are hardly manipulated at all. […] Direct and intentional modifications to rivers are complemented by the impacts of land use and land use changes which frequently result in the alteration of rivers as an unintended side effect. Deforestation, afforestation, land drainage, agriculture, and the use of fire have all had significant impacts, with perhaps the most extreme effects produced by construction activity and urbanization. […] The major methods employed in river regulation are the construction of large dams […], the building of run-of-river impoundments such as weirs and locks, and by channelization, a term that covers a range of river engineering works including widening, deepening, straightening, and the stabilization of banks. […] Many aspects of a dynamic river channel and its associated ecosystems are mutually adjusting, so a human activity in a landscape that affects the supply of water or sediment is likely to set off a complex cascade of other alterations.”

“The methods of storage (in reservoirs) and distribution (by canal) have not changed fundamentally since the earliest river irrigation schemes, with the exception of some contemporary projects’ use of pumps to distribute water over greater distances. Nevertheless, many irrigation canals still harness the force of gravity. Half the world’s large dams (defined as being 15 metres or higher) were built exclusively or primarily for irrigation, and about one-third of the world’s irrigated cropland relies on reservoir water. In several countries, including such populous nations as India and China, more than 50% of arable land is irrigated by river water supplied from dams. […] Sadly, many irrigation schemes are not well managed and a number of environmental problems are frequently experienced as a result, both on-site and off-site. In many large networks of irrigation canals, less than half of the water diverted from a river or reservoir actually benefits crops. A lot of water seeps away through unlined canals or evaporates before reaching the fields. Some also runs off the fields or infiltrates through the soil, unused by plants, because farmers apply too much water or at the wrong time. Much of this water seeps back into nearby streams or joins underground aquifers, so can be used again, but the quality of water may deteriorate if it picks up salts, fertilizers, or pesticides. Excessive applications of irrigation water often result in rising water tables beneath fields, causing salinization and waterlogging. These processes reduce crop yields on irrigation schemes all over the world.”

“[Deforestation can contribute] to the degradation of aquatic habitats in numerous ways. The loss of trees along river banks can result in changes in the species found in the river because fewer trees means a decline in plant matter and insects falling from them, items eaten by some fish. Fewer trees on river banks also results in less shade. More sunlight reaching the river results in warmer water and the enhanced growth of algae. A change in species can occur as fish that feed on falling food are edged out by those able to feed on algae. Deforestation also typically results in more runoff and more soil erosion. This sediment may cover spawning grounds, leading to lower reproduction rates. […] Grazing and trampling by livestock reduces vegetation cover and causes the compaction of soil, which reduces its infiltration capacity. As rainwater passes over or through the soil in areas of intensive agriculture, it picks up residues from pesticides and fertilizers and transports them to rivers. In this way, agriculture has become a leading source of river pollution in certain parts of the world. Concentrations of nitrates and phosphates, derived from fertilizers, have risen notably in many rivers in Europe and North America since the 1950s and have led to a range of […] problems encompassed under the term ‘eutrophication’ – the raising of biological productivity caused by nutrient enrichment. […] In slow-moving rivers […] the growth of algae reduces light penetration and depletes the oxygen in the water, sometimes causing fish kills.”

“One of the most profound ways in which people alter rivers is by damming them. Obstructing a river and controlling its flow in this way brings about a raft of changes. A dam traps sediments and nutrients, alters the river’s temperature and chemistry, and affects the processes of erosion and deposition by which the river sculpts the landscape. Dams create more uniform flow in rivers, usually by reducing peak flows and increasing minimum flows. Since the natural variation in flow is important for river ecosystems and their biodiversity, when dams even out flows the result is commonly fewer fish of fewer species. […] the past 50 years or so has seen a marked escalation in the rate and scale of construction of dams all over the world […]. At the beginning of the 21st century, there were about 800,000 dams worldwide […] In some large river systems, the capacity of dams is sufficient to hold more than the entire annual discharge of the river. […] Globally, the world’s major reservoirs are thought to control about 15% of the runoff from the land. The volume of water trapped worldwide in reservoirs of all sizes is no less than five times the total global annual river flow […] Downstream of a reservoir, the hydrological regime of a river is modified. Discharge, velocity, water quality, and thermal characteristics are all affected, leading to changes in the channel and its landscape, plants, and animals, both on the river itself and in deltas, estuaries, and offshore. By slowing the flow of river water, a dam acts as a trap for sediment and hence reduces loads in the river downstream. As a result, the flow downstream of the dam is highly erosive. A relative lack of silt arriving at a river’s delta can result in more coastal erosion and the intrusion of seawater that brings salt into delta ecosystems. […] The dam-barrier effect on migratory fish and their access to spawning grounds has been recognized in Europe since medieval times.”

“One of the most important effects cities have on rivers is the way in which urbanization affects flood runoff. Large areas of cities are typically impermeable, being covered by concrete, stone, tarmac, and bitumen. This tends to increase the amount of runoff produced in urban areas, an effect exacerbated by networks of storm drains and sewers. This water carries relatively little sediment (again, because soil surfaces have been covered by impermeable materials), so when it reaches a river channel it typically causes erosion and widening. Larger and more frequent floods are another outcome of the increase in runoff generated by urban areas. […] It […] seems very likely that efforts to manage the flood hazard on the Mississippi have contributed to an increased risk of damage from tropical storms on the Gulf of Mexico coast. The levées built along the river have contributed to the loss of coastal wetlands, starving them of sediment and fresh water, thereby reducing their dampening effect on storm surge levels. This probably enhanced the damage from Hurricane Katrina which struck the city of New Orleans in 2005.”


Onyx River.
Yangtze. Yangtze floods.
Missoula floods.
Murray River.
Southeastern Anatolia Project.
Water conflict.
Fulling mill.
Maritime transport.
Lock (water navigation).
Yellow River.
Aswan High Dam. Warragamba Dam. Three Gorges Dam.
River restoration.

January 16, 2018 Posted by | Biology, Books, Ecology, Engineering, Geography, Geology, History | Leave a comment

Endocrinology (part 2 – pituitary)

Below I have added some observations from the second chapter of the book, which covers the pituitary gland.

“The pituitary gland is centrally located at the base of the brain in the sella turcica within the sphenoid bone. It is attached to the hypothalamus by the pituitary stalk and a fine vascular network. […] The pituitary measures around 13mm transversely, 9mm anteroposteriorly, and 6mm vertically and weighs approximately 100mg. It increases during pregnancy to almost twice its normal size, and it decreases in the elderly. *Magnetic resonance imaging (MRI) currently provides the optimal imaging of the pituitary gland. *Computed tomography (CT) scans may still be useful in demonstrating calcification in tumours […] and hyperostosis in association with meningiomas or evidence of bone destruction. […] T1-weighted images demonstrate cerebrospinal fluid (CSF) as dark grey and brain as much whiter. This imaging is useful for demonstrating anatomy clearly. […] On T1-weighted images, pituitary adenomas are of lower signal intensity than the remainder of the normal gland. […] The presence of microadenomas may be difficult to demonstrate.”

“Hypopituitarism refers to either partial or complete deficiency of anterior and/or posterior pituitary hormones and may be due to [primary] pituitary disease or to hypothalamic pathology which interferes with the hypothalamic control of the pituitary. Causes: *Pituitary tumours. *Parapituitary tumours […] *Radiotherapy […] *Pituitary infarction (apoplexy), Sheehan’s syndrome. *Infiltration of the pituitary gland […] *infection […] *Trauma […] *Subarachnoid haemorrhage. *Isolated hypothalamic-releasing hormone deficiency, e.g. Kallmann’s syndrome […] *Genetic causes [Let’s stop here: Point is, lots of things can cause pituitary problems…] […] The clinical features depend on the type and degree of hormonal deficits, and the rate of its development, in addition to whether there is intercurrent illness. In the majority of cases, the development of hypopituitarism follows a characteristic order, with secretion of GH [growth hormone, US] and then gonadotrophins being affected first, followed by TSH [Thyroid-Stimulating Hormone, US] and ACTH [Adrenocorticotropic Hormone, US] secretion at a later stage. PRL [prolactin, US] deficiency is rare, except in Sheehan’s syndrome associated with failure of lactation. ADH [antidiuretic hormone, US] deficiency is virtually unheard of with pituitary adenomas but may be seen rarely with infiltrative disorders and trauma. The majority of the clinical features are similar to those occurring when there is target gland insufficiency. […] NB Houssay phenomenon. Amelioration of diabetes mellitus in patients with hypopituitarism due to reduction in counter-regulatory hormones. […] The aims of investigation of hypopituitarism are to biochemically assess the extent of pituitary hormone deficiency and also to elucidate the cause. […] Treatment involves adequate and appropriate hormone replacement […] and management of the underlying cause.”

“Apoplexy refers to infarction of the pituitary gland due to either haemorrhage or ischaemia. It occurs most commonly in patients with pituitary adenomas, usually macroadenomas […] It is a medical emergency, and rapid hydrocortisone replacement can be lifesaving. It may present with […] sudden onset headache, vomiting, meningism, visual disturbance, and cranial nerve palsy.”

“Anterior pituitary hormone replacement therapy is usually performed by replacing the target hormone rather than the pituitary or hypothalamic hormone that is actually deficient. The exceptions to this are GH replacement […] and when fertility is desired […] [In the context of thyroid hormone replacement:] In contrast to replacement in [primary] hypothyroidism, the measurement of TSH cannot be used to assess adequacy of replacement in TSH deficiency due to hypothalamo-pituitary disease. Therefore, monitoring of treatment in order to avoid under- and over-replacement should be via both clinical assessment and by measuring free thyroid hormone concentrations […] [In the context of sex hormone replacement:] Oestrogen/testosterone administration is the usual method of replacement, but gonadotrophin therapy is required if fertility is desired […] Patients with ACTH deficiency usually need glucocorticoid replacement only and do not require mineralocorticoids, in contrast to patients with Addison’s disease. […] Monitoring of replacement [is] important to avoid over-replacement, which is associated with ↑ BP, elevated glucose and insulin, and reduced bone mineral density (BMD). Under-replacement leads to the non-specific symptoms, as seen in Addison’s disease […] Conventional replacement […] may overtreat patients with partial ACTH deficiency.”

“There is now a considerable amount of evidence that there are significant and specific consequences of GH deficiency (GHD) in adults and that many of these features improve with GH replacement therapy. […] It is important to differentiate between adult and childhood onset GHD. […] the commonest cause in childhood is an isolated variable deficiency of GH-releasing hormone (GHRH) which may resolve in adult life […] It is, therefore, important to retest patients with childhood onset GHD when linear growth is completed (50% recovery of this group). Adult onset GHD usually occurs [secondary] to a structural pituitary or parapituitary condition or due to the effects of surgical treatment or radiotherapy. Prevalence[:] *Adult onset GHD 1/10,000 *Adult GHD due to adult and childhood onset GHD 3/10,000. Benefits of GH replacement[:] *Improved QoL and psychological well-being. *Improved exercise capacity. *↑ lean body mass and reduced fat mass. *Prolonged GH replacement therapy (>12-24 months) has been shown to increase BMD, which would be expected to reduce fracture rate. *There are, as yet, no outcome studies in terms of cardiovascular mortality. However, GH replacement does lead to a reduction (~15%) in cholesterol. GH replacement also leads to improved ventricular function and ↑ left ventricular mass. […] All patients with GHD should be considered for GH replacement therapy. […] adverse effects experienced with GH replacement usually resolve with dose reduction […] GH treatment may be associated with impairment of insulin sensitivity, and therefore markers of glycaemia should be monitored. […] Contraindications to GH replacement[:] *Active malignancy. *Benign intracranial hypertension. *Pre-proliferative/proliferative retinopathy in diabetes mellitus.”

“*Pituitary adenomas are the most common pituitary disease in adults and constitute 10-15% of primary brain tumours. […] *The incidence of clinically apparent pituitary disease is 1 in 10,000. *Pituitary carcinoma is very rare (<0.1% of all tumours) and is most commonly ACTH- or prolactin-secreting. […] *Microadenoma <1cm. *Macroadenoma >1cm. [In terms of the functional status of tumours, the break-down is as follows:] *Prolactinoma 35-40%. *Non-functioning 30-35%. *Growth hormone (acromegaly) 10-15%. *ACTH adenoma (Cushing’s disease) 5-10%. *TSH adenoma <5%. […] Pituitary disease is associated with an increased mortality, predominantly due to vascular disease. This may be due to oversecretion of GH or ACTH, hormone deficiencies or excessive replacement (e.g. of hydrocortisone).”

“*Prolactinomas are the commonest functioning pituitary tumour. […] Malignant prolactinomas are very rare […] [Clinical features of hyperprolactinaemia:] *Galactorrhoea (up to 90%♀, <10% ♂). *Disturbed gonadal function [menstrual disturbance, infertility, reduced libido, ED in ♂] […] Hyperprolactinaemia is associated with a long-term risk of ↓ BMD. […] Hypothyroidism and chronic renal failure are causes of hyperprolactinaemia. […] Antipsychotic agents are the most likely psychotropic agents to cause hyperprolactinaemia. […] Macroadenomas are space-occupying tumours, often associated with bony erosion and/or cavernous sinus invasion. […] *Invasion of the cavernous sinus may lead to cranial nerve palsies. *Occasionally, very invasive tumours may erode bone and present with a CSF leak or [secondary] meningitis. […] Although microprolactinomas may expand in size without treatment, the vast majority do not. […] Macroprolactinomas, however, will continue to expand and lead to pressure effects. Definitive treatment of the tumour is, therefore, necessary.”

“Dopamine agonist treatment […] leads to suppression of PRL in most patients [with prolactinoma], with [secondary] effects of normalization of gonadal function and termination of galactorrhoea. Tumour shrinkage occurs at a variable rate (from 24h to 6-12 months) and extent and must be carefully monitored. Continued shrinkage may occur for years. Slow chiasmal decompression will correct visual field defects in the majority of patients, and immediate surgical decompression is not necessary. […] Cabergoline is more effective in normalization of PRL in microprolactinoma […], with fewer side effects than bromocriptine. […] Tumour enlargement following initial shrinkage on treatment is usually due to non-compliance. […] Since the introduction of dopamine agonist treatment, transsphenoidal surgery is indicated only for patients who are resistant to, or intolerant of, dopamine agonist treatment. The cure rate for macroprolactinomas treated with surgery is poor (30%), and, therefore, drug treatment is first-line in tumours of all size. […] Standard pituitary irradiation leads to slow reduction (over years) of PRL in the majority of patients. […] Radiotherapy is not indicated in the management of patients with microprolactinomas. It is useful in the treatment of macroprolactinomas once the tumour has been shrunken away from the chiasm, only if the tumour is resistant.”

“Acromegaly is the clinical condition resulting from prolonged excessive GH and hence IGF-1 secretion in adults. GH secretion is characterized by blunting of pulsatile secretion and failure of GH to become undetectable during the 24h day, unlike normal controls. […] *Prevalence 40-86 cases/million population. Annual incidence of new cases in the UK is 4/million population. *Onset is insidious, and there is, therefore, often a considerable delay between onset of clinical features and diagnosis. Most cases are diagnosed at 40-60 years. […] Pituitary gigantism [is] [t]he clinical syndrome resulting from excess GH secretion in children prior to fusion of the epiphyses. […] growth velocity without premature pubertal manifestations should arouse suspicion of pituitary gigantism. […] Causes of acromegaly[:] *Pituitary adenoma (>99% of cases). Macroadenomas 60-80%, microadenomas 20-40%. […] The clinical features arise from the effects of excess GH/IGF-1, excess PRL in some (as there is co-secretion of PRL in a minority (30%) of tumours […] and the tumour mass. [Signs and symptoms:] *Sweating – >80% of patients. *Headaches […] *Tiredness and lethargy. *Joint pains. *Change in ring or shoe size. *Facial appearance. Coarse features […] enlarged nose […] prognathism […] interdental separation. […] Enlargement of hands and feet […] [Complications:] *Hypertension (40%). *Insulin resistance and impaired glucose tolerance (40%)/diabetes mellitus (20%). *Obstructive sleep apnoea – due to soft tissue swelling […] Ischaemic heart disease and cerebrovascular disease.”

“Management of acromegaly[:] The management strategy depends on the individual patient and also on the tumour size. Lowering of GH is essential in all situations […] Transsphenoidal surgery […] is usually the first line for treatment in most centres. *Reported cure rates vary: 40-91% for microadenomas and 10-48% for macroadenomas, depending on surgical expertise. […] Using the definition of post-operative cure as mean GH <2.5 micrograms/L, the reported recurrence rate is low (6% at 5 years). Radiotherapy […] is usually reserved for patients following unsuccessful transsphenoidal surgery, only occasionally is it used as [primary] therapy. […] normalization of mean GH may take several years and, during this time, adjunctive medical treatment (usually with somatostatin analogues) is required. […] Radiotherapy can induce GH deficiency which may need GH therapy. […] Somatostatin analogues lead to suppression of GH secretion in 20-60% of patients with acromegaly. […] some patients are partial responders, and although somatostatin analogues will lead to lowering of mean GH, they do not suppress to normal despite dose escalation. These drugs may be used as [primary] therapy where the tumour does not cause mass effects or in patients who have received surgery and/or radiotherapy who have elevated mean GH. […] Dopamine agonists […] lead to lowering of GH levels but, very rarely, lead to normalization of GH or IGF-1 (<30%). They may be helpful, particularly if there is coexistent secretion of PRL, and, in these cases, there may be significant tumour shrinkage. […] GH receptor antagonists [are] [i]ndicated for somatostatin non-responders.”

“Cushing’s syndrome is an illness resulting from excess cortisol secretion, which has a high mortality if left untreated. There are several causes of hypercortisolaemia which must be differentiated, and the commonest cause is iatrogenic (oral, inhaled, or topical steroids). […] ACTH-dependent Cushing’s must be differentiated from ACTH-independent disease (usually due to an adrenal adenoma, or, rarely, carcinoma […]). Once a diagnosis of ACTH-dependent disease has been established, it is important to differentiate between pituitary-dependent (Cushing’s disease) and ectopic secretion. […] [Cushing’s disease is rare;] annual incidence approximately 2/million. The vast majority of Cushing’s syndrome is due to a pituitary ACTH-secreting corticotroph microadenoma. […] The features of Cushing’s syndrome are progressive and may be present for several years prior to diagnosis. […] *Facial appearance – round plethoric complexion, acne and hirsutism, thinning of scalp hair. *Weight gain – truncal obesity, buffalo hump […] *Skin – thin and fragile […] easy bruising […] *Proximal muscle weakness. *Mood disturbance – labile, depression, insomnia, psychosis. *Menstrual disturbance. *Low libido and impotence. […] Associated features [include:] *Hypertension (>50%) due to mineralocorticoid effects of cortisol […] *Impaired glucose tolerance/diabetes mellitus (30%). *Osteopenia and osteoporosis […] *Vascular disease […] *Susceptibility to infections. […] Cushing’s is associated with a hypercoagulable state, with increased cardiovascular thrombotic risks. […] Hypercortisolism suppresses the thyroidal, gonadal, and GH axes, leading to lowered levels of TSH and thyroid hormones as well as reduced gonadotrophins, gonadal steroids, and GH.”

“Treatment of Cushing’s disease[:] Transsphenoidal surgery [is] the first-line option in most cases. […] Pituitary radiotherapy [is] usually administered as second-line treatment, following unsuccessful transsphenoidal surgery. […] Medical treatment [is] indicated during the preoperative preparation of patients or while awaiting radiotherapy to be effective or if surgery or radiotherapy are contraindicated. *Inhibitors of steroidogenesis: metyrapone is usually used first-line, but ketoconazole should be used as first-line in children […] Disadvantage of these agents inhibiting steroidogenesis is the need to increase the dose to maintain control, as ACTH secretion will increase as cortisol concentrations decrease. […] Successful treatment (surgery or radiotherapy) of Cushing’s disease leads to cortisol deficiency and, therefore, glucocorticoid replacement therapy is essential. […] *Untreated [Cushing’s] disease leads to an approximately 30-50% mortality at 5 years, owing to vascular disease and susceptibility to infections. *Treated Cushing’s syndrome has a good prognosis […] *Although the physical features and severe psychological disorders associated with Cushing’s improve or resolve within weeks or months of successful treatment, more subtle mood disturbance may persist for longer. Adults may also have impaired cognitive function. […] it is likely that there is an ↑ cardiovascular risk. *Osteoporosis will usually resolve in children but may not improve significantly in older patients. […] *Hypertension has been shown to resolve in 80% and diabetes mellitus in up to 70%. *Recent data suggests that mortality even with successful treatment of Cushing’s is increased significantly.”

“The term incidentaloma refers to an incidentally detected lesion that is unassociated with hormonal hyper- or hyposecretion and has a benign natural history. The increasingly frequent detection of these lesions with technological improvements and more widespread use of sophisticated imaging has led to a management challenge – which, if any, lesions need investigation and/or treatment, and what is the optimal follow-up strategy (if required at all)? […] *Imaging studies using MRI demonstrate pituitary microadenomas in approximately 10% of normal volunteers. […] Clinically significant pituitary tumours are present in about 1 in 1,000 patients. […] Incidentally detected microadenomas are very unlikely (<10%) to increase in size whereas larger incidentally detected meso- and macroadenomas are more likely (40-50%) to enlarge. Thus, conservative management in selected patients may be appropriate for microadenomas which are incidentally detected […]. Macroadenomas should be treated, if possible.”

“Non-functioning pituitary tumours […] are unassociated with clinical syndromes of anterior pituitary hormone excess. […] Non-functioning pituitary tumours (NFA) are the commonest pituitary macroadenoma. They represent around 28% of all pituitary tumours. […] 50% enlarge, if left untreated, at 5 years. […] Tumour behaviour is variable, with some tumours behaving in a very indolent, slow-growing manner and others invading the sphenoid and cavernous sinus. […] At diagnosis, approximately 50% of patients are gonadotrophin-deficient. […] The initial definitive management in virtually every case is surgical. This removes mass effects and may lead to some recovery of pituitary function in around 10%. […] The use of post-operative radiotherapy remains controversial. […] The regrowth rate at 10 years without radiotherapy approaches 45% […] administration of post-operative radiotherapy reduces this regrowth rate to <10%. […] however, there are sequelae to radiotherapy – with a significant long-term risk of hypopituitarism and a possible risk of visual deterioration and malignancy in the field of radiation. […] Unlike the case for GH- and PRL-secreting tumours, medical therapy for NFAs is usually unhelpful […] Gonadotrophinomas […] are tumours that arise from the gonadotroph cells of the pituitary gland and produce FSH, LH, or the α subunit. […] they are usually silent and unassociated with excess detectable secretion of LH and FSH […] [they] present in the same manner as other non-functioning pituitary tumours, with mass effects and hypopituitarism […] These tumours are managed as non-functioning tumours.”

“The posterior lobe of the pituitary gland arises from the forebrain and comprises up to 25% of the normal adult pituitary gland. It produces arginine vasopressin and oxytocin. […] Oxytocin has no known role in ♂ […] In ♀, oxytocin contracts the pregnant uterus and also causes breast duct smooth muscle contraction, leading to breast milk ejection during breastfeeding. […] However, oxytocin deficiency has no known adverse effect on parturition or breastfeeding. […] Arginine vasopressin is the major determinant of renal water excretion and, therefore, fluid balance. Its main action is to reduce free water clearance. […] Many substances modulate vasopressin secretion, including the catecholamines and opioids. *The main site of action of vasopressin is in the collecting duct and the thick ascending loop of Henle […] Diabetes Insipidus (DI) […] is defined as the passage of large volumes (>3L/24h) of dilute urine (osmolality <300mOsm/kg). [It may be] [d]ue to deficiency of circulating arginine vasopressin [or] [d]ue to renal resistance to vasopressin.” […lots of other causes as well – trauma, tumours, inflammation, infection, vascular, drugs, genetic conditions…]

“Hyponatraemia […] Incidence *1-6% of hospital admissions Na<130mmol/L. *15-22% hospital admissions Na<135mmol/L. […] True clinically apparent hyponatraemia is associated with either excess water or salt deficiency. […] Features *Depend on the underlying cause and also on the rate of development of hyponatraemia. May develop once sodium reaches 115mmol/L or earlier if the fall is rapid. Level at 100mmol/L or less is life-threatening. *Features of excess water are mainly neurological because of brain injury […] They include confusion and headache, progressing to seizures and coma. […] SIADH [Syndrome of Inappropriate ADH, US] is a common cause of hyponatraemia. […] The elderly are more prone to SIADH, as they are unable to suppress ADH as efficiently […] ↑ risk of hyponatraemia with SSRIs. […] rapid overcorrection of hyponatraemia may cause central pontine myelinolysis (demyelination).”

“The hypothalamus releases hormones that act as releasing hormones at the anterior pituitary gland. […] The commonest syndrome to be associated with the hypothalamus is abnormal GnRH secretion, leading to reduced gonadotrophin secretion and hypogonadism. Common causes are stress, weight loss, and excessive exercise.”

January 14, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Ophthalmology, Pharmacology | Leave a comment

Rivers (I)

I gave the book one star on goodreads. My review on goodreads explains why. In this post I’ll disregard the weak parts of the book and only cover ‘the good stuff’. Part of the reason why I gave the book one star instead of two was that I wanted to punish the author for wasting my time with irrelevant stuff when it was clear to me that he could actually have been providing useful information instead; some parts of the book are quite good.

Some quotes and links below.

“[W]ater is continuously on the move, being recycled between the land, oceans, and atmosphere: an eternal succession known as the hydrological cycle. Rivers play a key role in the hydrological cycle, draining water from the land and moving it ultimately to the sea. Any rain or melted snow that doesn’t evaporate or seep into the earth flows downhill over the land surface under the influence of gravity. This flow is channelled by small irregularities in the topography into rivulets that merge to become gullies that feed into larger channels. The flow of rivers is augmented with water flowing through the soil and from underground stores, but a river is more than simply water flowing to the sea. A river also carries rocks and other sediments, dissolved minerals, plants, and animals, both dead and alive. In doing so, rivers transport large amounts of material and provide habitats for a great variety of wildlife. They carve valleys and deposit plains, being largely responsible for shaping the Earth’s continental landscapes. Rivers change progressively over their course from headwaters to mouth, from steep streams that are narrow and turbulent to wider, deeper, often meandering channels. From upstream to downstream, a continuum of change occurs: the volume of water flowing usually increases and coarse sediments grade into finer material. In its upper reaches, a river erodes its bed and banks, but this removal of earth, pebbles, and sometimes boulders gives way to the deposition of material in lower reaches. In tune with these variations in the physical characteristics of the river, changes can also be seen in the types of creatures and plants that make the river their home. […] Rivers interact with the sediments beneath the channel and with the air above. The water flowing in many rivers comes both directly from the air as rainfall – or another form of precipitation – and also from groundwater sources held in rocks and gravels beneath, both being flows of water through the hydrological cycle.”

“One interesting aspect of rivers is that they seem to be organized hierarchically. When viewed from an aircraft or on a map, rivers form distinct networks like the branches of a tree. Small tributary channels join together to form larger channels which in turn merge to form still larger rivers. This progressive increase in river size is often described using a numerical ordering scheme in which the smallest stream is called first order, the union of two first-order channels produces a second-order river, the union of two second-order channels produces a third-order river, and so on. Stream order only increases when two channels of the same rank merge. Very large rivers, such as the Nile and Mississippi, are tenth-order rivers; the Amazon twelfth order. Each river drains an area of land that is proportional to its size. This area is known by several different terms: drainage basin, river basin, or catchment (‘watershed’ is also used in American English, but this word means the drainage divide between two adjacent basins in British English). In the same way that a river network is made up of a hierarchy of low-order rivers nested within higher-order rivers, their drainage basins also fit together to form a nested hierarchy. In other words, smaller units are repeating elements nested within larger units. All of these units are linked by flows of water, sediment, and energy. Recognizing rivers as being made up of a series of units that are arranged hierarchically provides a potent framework in which to study the patterns and processes associated with rivers. […] processes operating at the upper levels of the hierarchy exert considerable influence over features lower down in the hierarchy, but not the other way around. […] Generally, the larger the spatial scale, the slower the processes and rates of change.”
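
The ordering scheme described in that quote is easy to express computationally (it is the rule commonly known as Strahler stream ordering). The sketch below is not from the book; it simply applies the stated rule – order increases only when two channels of the same rank merge – to a small invented network represented as nested lists:

```python
# A minimal sketch (not from the book) of the stream-ordering rule described
# above, applied to a small made-up river network represented as a tree:
# each channel lists the tributary channels that merge to form it.

def stream_order(tributaries):
    """Order of a channel, given the nested tributaries that join to form it."""
    if not tributaries:               # headwater stream with no tributaries
        return 1
    orders = [stream_order(t) for t in tributaries]
    top = max(orders)
    # order increases only when two (or more) channels of the same top rank merge
    return top + 1 if orders.count(top) >= 2 else top

# Two first-order streams merge into a second-order river, which is then joined
# by a first-order stream (order stays 2); that river later merges with another
# second-order river, producing a third-order river.
network = [[[[], []], []], [[], []]]
print(stream_order(network))  # -> 3
```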

The stuff above incidentally – and curiously – links very closely with the material covered in Holland’s book on complexity, which I finished just the day before I started reading this one. That book has a lot more stuff about things like nested hierarchies and that ‘potent framework’ mentioned above, and how to go about analyzing such things. (I found that book hard to blog – at least at first, which is why I’m right now covering this book instead; but I do hope to get to it later, it was quite interesting).

“Measuring the length of a river is more complicated than it sounds. […] Disagreements about the true source of many rivers have been a continuous feature of [the] history of exploration. […] most rivers typically have many tributaries and hence numerous sources. […] But it gets more confusing. Some rivers do not have a mouth. […] Some rivers have more than one channel. […] Yet another important part of measuring the length of a river is the scale at which it is measured. Fundamentally, the length of a river varies with the map scale because different amounts of detail are generalized at different scales.”

“Two particularly important properties of river flow are velocity and discharge – the volume of water moving past a point over some interval of time […]. A continuous record of discharge plotted against time is called a hydrograph which, depending on the time frame chosen, may give a detailed depiction of a flood event over a few days, or the discharge pattern over a year or more. […] River flow is dependent upon many different factors, including the area and shape of the drainage basin. If all else is equal, larger basins experience larger flows. A river draining a circular basin tends to have a peak in flow because water from all its tributaries arrives at more or less the same time as compared to a river draining a long, narrow basin in which water arrives from tributaries in a more staggered manner. The surface conditions in a basin are also important. Vegetation, for example, intercepts rainfall and hence slows down its movement into rivers. Climate is a particularly significant determinant of river flow. […] All the rivers with the greatest flows are almost entirely located in the humid tropics, where rainfall is abundant throughout the year. […] Rivers in the humid tropics experience relatively constant flows throughout the year, but perennial rivers in more seasonal climates exhibit marked seasonality in flow. […] Some rivers are large enough to flow through more than one climate region. Some desert rivers, for instance, are perennial because they receive most of their flow from high rainfall areas outside the desert. These are known as ‘exotic’ rivers. The Nile is an example […]. These rivers lose large amounts of water – by evaporation and infiltration into soils – while flowing through the desert, but their volumes are such that they maintain their continuity and reach the sea. By contrast, many exotic desert rivers do not flow into the sea but deliver their water to interior basins.”
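
For readers unfamiliar with the discharge concept, the standard back-of-the-envelope estimate at a gauging site is cross-sectional area of flow multiplied by mean velocity. A minimal sketch with made-up numbers (this is not from the book, just the textbook relationship):

```python
# A minimal sketch (not from the book) of the basic discharge estimate used at
# a river gauging station: discharge = cross-sectional area of flow x mean
# velocity. All numbers are invented for illustration.

width_m = 25.0          # channel width
mean_depth_m = 1.8      # mean depth of flow
mean_velocity_ms = 0.9  # mean flow velocity

cross_section_m2 = width_m * mean_depth_m
discharge_m3s = cross_section_m2 * mean_velocity_ms   # cubic metres per second

print(f"Q = {discharge_m3s:.1f} m^3/s")  # -> 40.5 m^3/s
```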

…and in rare cases, so much water is contributed to the interior basin that the basin is actually categorized as a ‘sea’. However, humans tend to mess such things up. Amu Darya and Syr Darya used to flow into the Aral Sea, until Soviet planners decided they shouldn’t do that anymore. Goodbye Aral Sea – hello Aralkum Desert!

“An important measure of the way a river system moulds its landscape is the ‘drainage density’. This is the sum of the channel length divided by the total area drained, which reflects the spacing of channels. Hence, drainage density expresses the degree to which a river dissects the landscape, effectively controlling the texture of relief. Numerous studies have shown that drainage density has a great range in different regions, depending on conditions of climate, vegetation, and geology particularly. […] Rivers shape the Earth’s continental landscapes in three main ways: by the erosion, transport, and deposition of sediments. These three processes have been used to recognize a simple three-part classification of individual rivers and river networks according to the dominant process in each of three areas: source, transfer, and depositional zones. The first zone consists of the river’s upper reaches, the area from which most of the water and sediment are derived. This is where most of the river’s erosion occurs, and this eroded material is transported through the second zone to be deposited in the third zone. These three zones are idealized because some sediment is eroded, stored, and transported in each of them, but within each zone one process is dominant.”
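
The drainage density calculation described above is just total mapped channel length divided by basin area; a minimal sketch with invented numbers (not from the book):

```python
# A minimal sketch (not from the book) of the drainage density calculation
# described above: sum of channel lengths divided by the area drained.
# The values are made up for illustration.

channel_lengths_km = [12.4, 8.1, 5.6, 3.9, 2.2]   # lengths of all mapped channels
basin_area_km2 = 38.0

drainage_density = sum(channel_lengths_km) / basin_area_km2  # km of channel per km^2
print(f"Drainage density = {drainage_density:.2f} km/km^2")   # -> 0.85
```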

“The flow of water carries […] sediment in three ways: dissolved material […] moves in solution; small particles are carried in suspension; and larger particles are transported along the stream bed by rolling, sliding, or a bouncing movement known as ‘saltation’. […] Globally, it is estimated that rivers transport around 15 billion tonnes of suspended material annually to the oceans, plus about another 4 billion tonnes of dissolved material. In its upper reaches, a river might flow across bedrock but further downstream this is much less likely. Alluvial rivers are flanked by a floodplain, the channel cut into material that the river itself has transported and deposited. The floodplain is a relatively flat area which is periodically inundated during periods of high flow […] When water spills out onto the floodplain, the velocity of flow decreases and sediment begins to settle, causing fresh deposits of alluvium on the floodplain. Certain patterns of alluvial river channels have been seen on every continent and are divided at the most basic level into straight, meandering, and braided. Straight channels are rare in nature […] The most common river channel pattern is a series of bends known as meanders […]. Meanders develop because erosion becomes concentrated on the outside of a bend and deposition on the inside. As these linked processes continue, the meander bend can become more emphasized, and a particularly sinuous meander may eventually be cut off at its narrow neck, leaving an oxbow lake as evidence of its former course. Alluvial meanders migrate, both down and across their floodplain […]. This lateral migration is an important process in the formation of floodplains. Braided rivers can be recognized by their numerous flows that split off and rejoin each other to give a braided appearance. These multiple intersecting flows are separated by small and often temporary islands of alluvium. Braided rivers typically carry abundant sediment and are found in areas with a fairly steep gradient, often near mountainous regions.”

“The meander cut-off creating an oxbow lake is one way in which a channel makes an abrupt change of course, a characteristic of some alluvial rivers that is generally referred to as ‘avulsion’. It is a natural process by which flow diverts out of an established channel into a new permanent course on the adjacent floodplain, a change in course that can present a major threat to human activities. Rapid, frequent, and often significant avulsions have typified many rivers on the Indo-Gangetic plains of South Asia. In India, the Kosi River has migrated about 100 kilometres westward in the last 200 years […] Why a river suddenly avulses is not understood completely, but earthquakes play a part on the Indo-Gangetic plains. […] Most rivers eventually flow into the sea or a lake, where they deposit sediment which builds up into a landform known as a delta. The name comes from the Greek letter delta, Δ, shaped like a triangle or fan, one of the classic shapes a delta can take. […] Material laid down at the end of a river can continue underwater far beyond the delta as a deep-sea fan.”

“The organisms found in fluvial ecosystems are commonly classified according to the methods they use to gather food and feed. ‘Shredders’ are organisms that consume small sections of leaves; ‘grazers’ and ‘scrapers’ consume algae from the surfaces of objects such as stones and large plants; ‘collectors’ feed on fine organic matter produced by the breakdown of other once-living things; and ‘predators’ eat other living creatures. The relative importance of these groups of creatures typically changes as one moves from the headwaters of a river to stretches further downstream […] small headwater streams are often shaded by overhanging vegetation which limits sunlight and photosynthesis but contributes organic matter by leaf fall. Shredders and collectors typically dominate in these stretches, but further downstream, where the river is wider and thus receives more sunlight and less leaf fall, the situation is quite different. […] There’s no doubting the numerous fundamental ways in which a river’s biology is dependent upon its physical setting, particularly in terms of climate, geology, and topography. Nevertheless, these relationships also work in reverse. The biological components of rivers also act to shape the physical environment, particularly at more local scales. Beavers provide a good illustration of the ways in which the physical structure of rivers can be changed profoundly by large mammals. […] rivers can act both as corridors for species dispersal but also as barriers to the dispersal of organisms.”


Drainage system (geomorphology).
Perennial stream.
Oxbow lake.
Channel River.
Long profile of a river.
Bengal fan.
River continuum concept.
Flood pulse concept.
Riparian zone.


January 11, 2018 Posted by | Books, Ecology, Geography, Geology | Leave a comment

A few diabetes papers of interest

i. Type 2 Diabetes in the Real World: The Elusive Nature of Glycemic Control.

“Despite U.S. Food and Drug Administration (FDA) approval of over 40 new treatment options for type 2 diabetes since 2005, the latest data from the National Health and Nutrition Examination Survey show that the proportion of patients achieving glycated hemoglobin (HbA1c) <7.0% (<53 mmol/mol) remains around 50%, with a negligible decline between the periods 2003–2006 and 2011–2014. The Healthcare Effectiveness Data and Information Set reports even more alarming rates, with only about 40% and 30% of patients achieving HbA1c <7.0% (<53 mmol/mol) in the commercially insured (HMO) and Medicaid populations, respectively, again with virtually no change over the past decade. A recent retrospective cohort study using a large U.S. claims database explored why clinical outcomes are not keeping pace with the availability of new treatment options. The study found that HbA1c reductions fell far short of those reported in randomized clinical trials (RCTs), with poor medication adherence emerging as the key driver behind the disconnect. In this Perspective, we examine the implications of these findings in conjunction with other data to highlight the discrepancy between RCT findings and the real world, all pointing toward the underrealized promise of FDA-approved therapies and the critical importance of medication adherence. While poor medication adherence is not a new issue, it has yet to be effectively addressed in clinical practice — often, we suspect, because it goes unrecognized. To support the busy health care professional, innovative approaches are sorely needed.”
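
As a side note on the units quoted above: the paper reports HbA1c both as a percentage (NGSP units) and in mmol/mol (IFCC units). The conversion below uses what I understand to be the standard NGSP/IFCC master equation; it is not taken from the paper itself:

```python
# Not from the paper - just the commonly used NGSP/IFCC master equation for
# converting HbA1c between the two units quoted above (percent and mmol/mol).

def hba1c_percent_to_mmol_per_mol(ngsp_percent):
    return (ngsp_percent - 2.15) * 10.929

print(round(hba1c_percent_to_mmol_per_mol(7.0)))  # -> 53, matching "<7.0% (<53 mmol/mol)"
```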

“To better understand the differences between usual care and clinical trial HbA1c results, multivariate regression analysis assessed the relative contributions of key biobehavioral factors, including baseline patient characteristics, drug therapy, and medication adherence (21). Significantly, the key driver was poor medication adherence, accounting for 75% of the gap […]. Adherence was defined […] as the filling of one’s diabetes prescription often enough to cover ≥80% of the time one was recommended to be taking the medication (34). By this metric, proportion of days covered (PDC) ≥80%, only 29% of patients were adherent to GLP-1 RA treatment and 37% to DPP-4 inhibitor treatment. […] These data are consistent with previous real-world studies, which have demonstrated that poor medication adherence to both oral and injectable antidiabetes agents is very common (3537). For example, a retrospective analysis [of] adults initiating oral agents in the DPP-4 inhibitor (n = 61,399), sulfonylurea (n = 134,961), and thiazolidinedione (n = 42,012) classes found that adherence rates, as measured by PDC ≥80% at the 1-year mark after the initial prescription, were below 50% for all three classes, at 47.3%, 41.2%, and 36.7%, respectively (36). Rates dropped even lower at the 2-year follow-up (36)”
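
To make the PDC metric quoted above more concrete, here is a minimal sketch (not from the paper) of how a proportion-of-days-covered calculation is typically done from pharmacy fill records; the fill dates and days' supply below are invented for illustration:

```python
# A minimal sketch (not from the paper) of a proportion-of-days-covered (PDC)
# adherence calculation from pharmacy fill records. Fill dates and days of
# supply are made up for illustration.

from datetime import date, timedelta

fills = [                      # (fill date, days of supply dispensed)
    (date(2017, 1, 1), 30),
    (date(2017, 2, 10), 30),
    (date(2017, 4, 1), 30),
    (date(2017, 5, 1), 30),
]
observation_start = date(2017, 1, 1)
observation_end = date(2017, 6, 30)

covered_days = set()
for fill_date, supply in fills:
    for offset in range(supply):
        day = fill_date + timedelta(days=offset)
        if observation_start <= day <= observation_end:
            covered_days.add(day)   # overlapping fills only count a day once

total_days = (observation_end - observation_start).days + 1
pdc = len(covered_days) / total_days
print(f"PDC = {pdc:.2f}; adherent (PDC >= 0.80): {pdc >= 0.80}")  # PDC = 0.66; not adherent
```

Note how a patient can miss a full refill cycle (as in the gap between the February and April fills above) and fall well short of the 80% threshold even while filling prescriptions fairly regularly.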

“Our current ability to assess adherence and persistence is based primarily on review of pharmacy records, which may underestimate the extent of the problem. For example, using the definition of adherence of the Centers for Medicare & Medicaid Services — PDC ≥80% — a patient could miss up to 20% of days covered and still be considered adherent. In retrospective studies of persistence, the permissible gap after the last expected refill date often extends up to 90 days (39,40). Thus, a patient may have a gap of up to 90 days and still be considered persistent.

Additionally, one must also consider the issue of primary nonadherence; adherence and persistence studies typically only include patients who have completed a first refill. A recent study of e-prescription data among 75,589 insured patients found that nearly one-third of new e-prescriptions for diabetes medications were never filled (41). Finally, none of these measures take into account if the patient is actually ingesting or injecting the medication after acquiring his or her refills.”

“Acknowledging and addressing the problem of poor medication adherence is pivotal because of the well-documented dire consequences: a greater likelihood of long-term complications, more frequent hospitalizations, higher health care costs, and elevated mortality rates (4245). In patients younger than 65, hospitalization risk in one study (n = 137,277) was found to be 30% at the lowest level of adherence to antidiabetes medications (1–19%) versus 13% at the highest adherence quintile (80–100%) […]. In patients over 65, a separate study (n = 123,235) found that all-cause hospitalization risk was 37.4% in adherent cohorts (PDC ≥80%) versus 56.2% in poorly adherent cohorts (PDC <20%) (45). […] Furthermore, for every 1,000 patients who increased adherence to their antidiabetes medications by just 1%, the total medical cost savings was estimated to be $65,464 over 3 years (45). […] “for reasons that are still unclear, the N.A. [North American] patient groups tend to have lower compliance and adherence compared to global rates during large cardiovascular studies” (46,47).”

“There are many potential contributors to poor medication adherence, including depressive affect, negative treatment perceptions, lack of patient-physician trust, complexity of the medication regimen, tolerability, and cost (48). […] A recent review of interventions addressing problematic medication adherence in type 2 diabetes found that few strategies have been shown consistently to have a marked positive impact, particularly with respect to HbA1c lowering, and no single intervention was identified that could be applied successfully to all patients with type 2 diabetes (53). Additional evidence indicates that improvements resulting from the few effective interventions, such as pharmacy-based counseling or nurse-managed home telemonitoring, often wane once the programs end (54,55). We suspect that the efficacy of behavioral interventions to address medication adherence will continue to be limited until there are more focused efforts to address three common and often unappreciated patient obstacles. First, taking diabetes medications is a burdensome and often difficult activity for many of our patients. Rather than just encouraging patients to do a better job of tolerating this burden, more work is needed to make the process easier and more convenient. […] Second, poor medication adherence often represents underlying attitudinal problems that may not be a strictly behavioral issue. Specifically, negative beliefs about prescribed medications are pervasive among patients, and behavioral interventions cannot be effective unless these beliefs are addressed directly (35). […] Third, the issue of access to medications remains a primary concern. A study by Kurlander et al. (51) found that patients selectively forgo medications because of cost; however, noncost factors, such as beliefs, satisfaction with medication-related information, and depression, are also influential.”

ii. Diabetes Research and Care Through the Ages. An overview article which might be of interest especially to people who aren’t very familiar with the history of diabetes research and -treatment (a topic which is also very nicely covered in Tattersall’s book). Although it is primarily a historical review, it also includes many observations about e.g. current (and future?) practice. Some random quotes:

“Arnoldo Cantani established a new strict level of treatment (9). He isolated his patients “under lock and key, and allowed them absolutely no food but lean meat and various fats. In the less severe cases, eggs, liver, and shell-fish were permitted. For drink the patients received water, plain or carbonated, and dilute alcohol for those accustomed to liquors, the total fluid intake being limited to one and one-half to two and one-half liters per day” (6).

Bernhard Naunyn encouraged a strict carbohydrate-free diet (6,10). He locked patients in their rooms for 5 months when necessary for “sugar-freedom” (6).” […let’s just say that treatment options have changed slightly over time – US]

“The characteristics of insulin preparations include the purity of the preparation, the concentration of insulin, the species of origin, and the time course of action (onset, peak, duration) (25). From the 1930s to the early 1950s, one of the major efforts made was to develop an insulin with extended action […]. Most preparations contained 40 (U-40) or 80 (U-80) units of insulin per mL, with U-10 and U-20 eliminated in the early 1940s. U-100 was introduced in 1973 and was meant to be a standard concentration, although U-500 had been available since the early 1950s for special circumstances. Preparations were either of mixed beef and pork origin, pure beef, or pure pork. There were progressive improvements in the purity of preparations as chemical techniques improved. Prior to 1972, conventional preparations contained 8% noninsulin proteins. […] In the early 1980s, “human” insulins were introduced (26). These were made either by recombinant DNA technology in bacteria (Escherichia coli) or yeast (Saccharomyces cerevisiae) or by enzymatic conversion of pork insulin to human insulin, since pork differed by only one amino acid from human insulin. The powerful nature of recombinant DNA technology also led to the development of insulin analogs designed for specific effects. These include rapid-acting insulin analogs and basal insulin analogs.”

“Until 1996, the only oral medications available were biguanides and sulfonylureas. Since that time, there has been an explosion of new classes of oral and parenteral preparations. […] The management of type 2 diabetes (T2D) has undergone rapid change with the introduction of several new classes of glucose-lowering therapies. […] the treatment guidelines are generally clear in the context of using metformin as the first oral medication for T2D and present a menu approach with respect to the second and third glucose-lowering medication (3032). In order to facilitate this decision, the guidelines list the characteristics of each medication including side effects and cost, and the health care provider is expected to make a choice that would be most suited for patient comorbidities and health care circumstances. This can be confusing and contributes to the clinical inertia characteristic of the usual management of T2D (33).”

“Perhaps the most frustrating barrier to optimizing diabetes management is the frequent occurrence of clinical inertia (whenever the health care provider does not initiate or intensify therapy appropriately and in a timely fashion when therapeutic goals are not reached). More broadly, the failure to advance therapy in an appropriate manner can be traced to physician behaviors, patient factors, or elements of the health care system. […] Despite clear evidence from multiple studies, health care providers fail to fully appreciate that T2D is a progressive disease. T2D is associated with ongoing β-cell failure and, as a consequence, we can safely predict that for the majority of patients, glycemic control will deteriorate with time despite metformin therapy (35). Continued observation and reinforcement of the current therapeutic regimen is not likely to be effective. As an example of real-life clinical inertia for patients with T2D on monotherapy metformin and an HbA1c of 7 to <8%, it took on the average 19 months before additional glucose-lowering therapy was introduced (36). The fear of hypoglycemia and weight gain are appropriate concerns for both patient and physician, but with newer therapies these undesirable effects are significantly diminished. In addition, health care providers must appreciate that achieving early and sustained glycemic control has been demonstrated to have long-term benefits […]. Clinicians have been schooled in the notion of a stepwise approach to therapy and are reluctant to initiate combination therapy early in the course of T2D, even if the combination intervention is formulated as a fixed-dose combination. […] monotherapy metformin failure rates with a starting HbA1c >7% are ∼20% per year (35). […] To summarize the current status of T2D at this time, it should be clearly emphasized that, first and foremost, T2D is characterized by a progressive deterioration of glycemic control. A stepwise medication introduction approach results in clinical inertia and frequently fails to meet long-term treatment goals. Early/initial combination therapies that are not associated with hypoglycemia and/or weight gain have been shown to be safe and effective. The added value of reducing CV outcomes with some of these newer medications should elevate them to a more prominent place in the treatment paradigm.”

iii. Use of Adjuvant Pharmacotherapy in Type 1 Diabetes: International Comparison of 49,996 Individuals in the Prospective Diabetes Follow-up and T1D Exchange Registries.

“The majority of those with type 1 diabetes (T1D) have suboptimal glycemic control (1–4); therefore, use of adjunctive pharmacotherapy to improve control has been of clinical interest. While noninsulin medications approved for type 2 diabetes have been reported in T1D research and clinical practice (5), little is known about their frequency of use. The T1D Exchange (T1DX) registry in the U.S. and the Prospective Diabetes Follow-up (DPV) registry in Germany and Austria are two large consortia of diabetes centers; thus, they provide a rich data set to address this question.

For the analysis, 49,996 pediatric and adult patients with diabetes duration ≥1 year and a registry update from 1 April 2015 to 1 July 2016 were included (19,298 individuals from 73 T1DX sites and 30,698 individuals from 354 DPV sites). Adjuvant medication use (metformin, glucagon-like peptide 1 [GLP-1] receptor agonists, dipeptidyl peptidase 4 [DPP-4] inhibitors, sodium–glucose cotransporter 2 [SGLT2] inhibitors, and other noninsulin diabetes medications including pramlintide) was extracted from participant medical records. […] Adjunctive agents, whose proposed benefits may include the ability to improve glycemic control, reduce insulin doses, promote weight loss, and suppress dysregulated postprandial glucagon secretion, have had little penetrance as part of the daily medical regimen of those in the registries studied. […] The use of any adjuvant medication was 5.4% in T1DX and 1.6% in DPV (P < 0.001). Metformin was the most commonly reported medication in both registries, with 3.5% in the T1DX and 1.3% in the DPV (P < 0.001). […] Use of adjuvant medication was associated with older age, higher BMI, and longer diabetes duration in both registries […] it is important to note that registry data did not capture the intent of adjuvant medications, which may have been to treat polycystic ovarian syndrome in women […here’s a relevant link, US].”
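
The ‘P < 0.001’ comparisons above boil down to tests of two proportions. Here is a rough illustration of how such a comparison might be run; the counts are back-calculated from the rounded percentages and registry sizes quoted above, so they are approximate and only meant to show the mechanics:

```python
from scipy.stats import chi2_contingency

# Counts back-calculated from the rounded percentages quoted above (5.4% of
# 19,298 in T1DX vs 1.6% of 30,698 in DPV), so they are approximate and only
# illustrate the kind of test behind a "P < 0.001" comparison.
t1dx_users = round(0.054 * 19298)   # ~1042
t1dx_nonusers = 19298 - t1dx_users
dpv_users = round(0.016 * 30698)    # ~491
dpv_nonusers = 30698 - dpv_users

table = [[t1dx_users, t1dx_nonusers], [dpv_users, dpv_nonusers]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p is far below 0.001
```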

iv. Prevalence of and Risk Factors for Diabetic Peripheral Neuropathy in Youth With Type 1 and Type 2 Diabetes: SEARCH for Diabetes in Youth Study. I recently covered a closely related paper here (paper # 2), but the two papers cover different data sets, so I decided it would be worth including this one in this post anyway. Some quotes:

“We previously reported results from a small pilot study comparing the prevalence of DPN in a subset of youth enrolled in the SEARCH for Diabetes in Youth (SEARCH) study and found that 8.5% of 329 youth with T1D (mean ± SD age 15.7 ± 4.3 years and diabetes duration 6.2 ± 0.9 years) and 25.7% of 70 youth with T2D (age 21.6 ± 4.1 years and diabetes duration 7.6 ± 1.8 years) had evidence of DPN (9). […this is the paper I previously covered here, US] Recently, we also reported the prevalence of microvascular and macrovascular complications in youth with T1D and T2D in the entire SEARCH cohort (10).

In the current study, we examined the cross-sectional and longitudinal risk factors for DPN. The aims were 1) to estimate prevalence of DPN in youth with T1D and T2D, overall and by age and diabetes duration, and 2) to identify risk factors (cross-sectional and longitudinal) associated with the presence of DPN in a multiethnic cohort of youth with diabetes enrolled in the SEARCH study.”

“The SEARCH Cohort Study enrolled 2,777 individuals. For this analysis, we excluded participants aged <10 years (n = 134), those with no antibody measures for etiological definition of diabetes (n = 440), and those with incomplete neuropathy assessment […] (n = 213), which reduced the analysis sample size to 1,992 […] There were 1,734 youth with T1D and 258 youth with T2D who participated in the SEARCH study and had complete data for the variables of interest. […] Seven percent of the participants with T1D and 22% of those with T2D had evidence of DPN.”

“Among youth with T1D, those with DPN were older (21 vs. 18 years, P < 0.0001), had a longer duration of diabetes (8.7 vs. 7.8 years, P < 0.0001), and had higher DBP (71 vs. 69 mmHg, P = 0.02), BMI (26 vs. 24 kg/m2, P < 0.001), and LDL-c levels (101 vs. 96 mg/dL, P = 0.01); higher triglycerides (85 vs. 74 mg/dL, P = 0.005); and lower HDL-c levels (51 vs. 55 mg/dL, P = 0.01) compared to those without DPN. The prevalence of DPN was 5% among nonsmokers vs. 10% among the current and former smokers (P = 0.001). […] Among youth with T2D, those with DPN were older (23 vs. 22 years, P = 0.01), had longer duration of diabetes (8.6 vs. 7.6 years; P = 0.002), and had lower HDL-c (40 vs. 43 mg/dL, P = 0.04) compared with those without DPN. The prevalence of DPN was higher among males than among females: 30% of males had DPN compared with 18% of females (P = 0.02). The prevalence of DPN was twofold higher in current smokers (33%) compared with nonsmokers (15%) and former smokers (17%) (P = 0.01). […] [T]he prevalence of DPN was further assessed by 5-year increment of diabetes duration in individuals with T1D or T2D […]. There was an approximately twofold increase in the prevalence of DPN with an increase in duration of diabetes from 5–10 years to >10 years for both the T1D group (5–13%) (P < 0.0001) and the T2D group (19–36%) (P = 0.02). […] in an unadjusted logistic regression model, youth with T2D were four times more likely to develop DPN compared with those with T1D, and though this association was attenuated, it remained significant independent of age, sex, height, and glycemic control (OR 2.99 [1.91; 4.67], P < 0.001)”.
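
The unadjusted-versus-adjusted odds ratio distinction in that last sentence simply reflects which covariates enter the logistic regression. Below is a toy sketch of the mechanics; the data are simulated purely for illustration and do not reproduce the SEARCH results:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated toy data purely to illustrate unadjusted vs. adjusted odds ratios;
# it does not reproduce the SEARCH results quoted above.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "t2d": rng.integers(0, 2, n),          # diabetes type indicator (1 = T2D)
    "age": rng.normal(20, 3, n),           # age in years
})
logit_p = -3 + 1.1 * df["t2d"] + 0.08 * (df["age"] - 20)
df["dpn"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # simulated DPN outcome

unadjusted = smf.logit("dpn ~ t2d", data=df).fit(disp=0)
adjusted = smf.logit("dpn ~ t2d + age", data=df).fit(disp=0)

# Exponentiating the coefficient on t2d gives the odds ratio in each model.
print("unadjusted OR:", np.exp(unadjusted.params["t2d"]).round(2))
print("adjusted OR:  ", np.exp(adjusted.params["t2d"]).round(2))
```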

“The prevalence estimates for DPN found in our study for youth with T2D are similar to those in the Australian cohort (8) but lower for youth with T1D than those reported in the Danish (7) and Australian (8) cohorts. The nationwide Danish Study Group for Diabetes in Childhood reported a prevalence of 62% among 339 adolescents and youth with T1D (age 12–27 years, duration 9–25 years, and HbA1c 9.7 ± 1.7%) using the vibration perception threshold to assess DPN (7). The higher prevalence in this cohort compared with ours (62 vs. 7%) could be due to the longer duration of diabetes (9–25 vs. 5–13 years) and reliance on a single measure of neuropathy (vibration perception threshold) as opposed to our use of the MNSI, which includes vibration as well as other indicators of neuropathy. In the Australian study, Eppens et al. (8) reported abnormalities in peripheral nerve function in 27% of the 1,433 adolescents with T1D (median age 15.7 years, median diabetes duration 6.8 years, and mean HbA1c 8.5%) and 21% of the 68 adolescents with T2D (median age 15.3 years, median diabetes duration 1.3 years, and mean HbA1c 7.3%) based on thermal and vibration perception threshold. These data are thus reminiscent of the persistent inconsistencies in the definition of DPN, which are reflected in the wide range of prevalence estimates being reported.”

“The alarming rise in rates of DPN for every 5-year increase in duration, coupled with poor glycemic control and dyslipidemia, in this cohort reinforces the need for clinicians rendering care to youth with diabetes to be vigilant in screening for DPN and identifying any risk factors that could potentially be modified to alter the course of the disease (28–30). The modifiable risk factors that could be targeted in this young population include better glycemic control, treatment of dyslipidemia, and smoking cessation (29,30) […]. The sharp increase in rates of DPN over time is a reminder that DPN is one of the complications of diabetes that must be a part of the routine annual screening for youth with diabetes.”

v. Diabetes and Hypertension: A Position Statement by the American Diabetes Association.

“Hypertension is common among patients with diabetes, with the prevalence depending on type and duration of diabetes, age, sex, race/ethnicity, BMI, history of glycemic control, and the presence of kidney disease, among other factors (1–3). Furthermore, hypertension is a strong risk factor for atherosclerotic cardiovascular disease (ASCVD), heart failure, and microvascular complications. ASCVD — defined as acute coronary syndrome, myocardial infarction (MI), angina, coronary or other arterial revascularization, stroke, transient ischemic attack, or peripheral arterial disease presumed to be of atherosclerotic origin — is the leading cause of morbidity and mortality for individuals with diabetes and is the largest contributor to the direct and indirect costs of diabetes. Numerous studies have shown that antihypertensive therapy reduces ASCVD events, heart failure, and microvascular complications in people with diabetes (4–8). Large benefits are seen when multiple risk factors are addressed simultaneously (9). There is evidence that ASCVD morbidity and mortality have decreased for people with diabetes since 1990 (10,11) likely due in large part to improvements in blood pressure control (12–14). This Position Statement is intended to update the assessment and treatment of hypertension among people with diabetes, including advances in care since the American Diabetes Association (ADA) last published a Position Statement on this topic in 2003 (3).”

“Hypertension is defined as a sustained blood pressure ≥140/90 mmHg. This definition is based on unambiguous data that levels above this threshold are strongly associated with ASCVD, death, disability, and microvascular complications (1,2,24–27) and that antihypertensive treatment in populations with baseline blood pressure above this range reduces the risk of ASCVD events (4–6,28,29). The “sustained” aspect of the hypertension definition is important, as blood pressure has considerable normal variation. The criteria for diagnosing hypertension should be differentiated from blood pressure treatment targets.

Hypertension diagnosis and management can be complicated by two common conditions: masked hypertension and white-coat hypertension. Masked hypertension is defined as a normal blood pressure in the clinic or office (<140/90 mmHg) but an elevated home blood pressure of ≥135/85 mmHg (30); the lower home blood pressure threshold is based on outcome studies (31) demonstrating that lower home blood pressures correspond to higher office-based measurements. White-coat hypertension is elevated office blood pressure (≥140/90 mmHg) and normal (untreated) home blood pressure (<135/85 mmHg) (32). Identifying these conditions with home blood pressure monitoring can help prevent overtreatment of people with white-coat hypertension who are not at elevated risk of ASCVD and, in the case of masked hypertension, allow proper use of medications to reduce side effects during periods of normal pressure (33,34).”
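
The masked/white-coat distinction comes down to crossing the two pairs of thresholds quoted above (office ≥140/90 mmHg, home ≥135/85 mmHg). A simplified sketch; real diagnosis of course requires sustained readings rather than a single pair of measurements:

```python
def classify_bp(office_sbp, office_dbp, home_sbp, home_dbp):
    """Classify blood pressure status using the thresholds quoted above:
    office hypertension at >=140/90 mmHg, home hypertension at >=135/85 mmHg.
    A simplified sketch; real assessment requires repeated, sustained readings.
    """
    office_high = office_sbp >= 140 or office_dbp >= 90
    home_high = home_sbp >= 135 or home_dbp >= 85
    if office_high and home_high:
        return "sustained hypertension"
    if office_high and not home_high:
        return "white-coat hypertension"
    if not office_high and home_high:
        return "masked hypertension"
    return "normotensive"

print(classify_bp(150, 92, 128, 80))  # white-coat hypertension
print(classify_bp(132, 84, 138, 88))  # masked hypertension
```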

“Diabetic autonomic neuropathy or volume depletion can cause orthostatic hypotension (35), which may be further exacerbated by antihypertensive medications. The definition of orthostatic hypotension is a decrease in systolic blood pressure of 20 mmHg or a decrease in diastolic blood pressure of 10 mmHg within 3 min of standing when compared with blood pressure from the sitting or supine position (36). Orthostatic hypotension is common in people with type 2 diabetes and hypertension and is associated with an increased risk of mortality and heart failure (37).

It is important to assess for symptoms of orthostatic hypotension to individualize blood pressure goals, select the most appropriate antihypertensive agents, and minimize adverse effects of antihypertensive therapy.”
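
The orthostatic hypotension definition is likewise a simple rule. A sketch, assuming the standing readings were taken within the 3-minute window:

```python
def orthostatic_hypotension(supine_sbp, supine_dbp, standing_sbp, standing_dbp):
    """Check the definition quoted above: a fall of >=20 mmHg systolic or
    >=10 mmHg diastolic within 3 minutes of standing (the standing values
    passed in are assumed to have been taken within that window)."""
    return (supine_sbp - standing_sbp) >= 20 or (supine_dbp - standing_dbp) >= 10

print(orthostatic_hypotension(138, 82, 115, 76))  # True (systolic fall of 23 mmHg)
print(orthostatic_hypotension(138, 82, 128, 78))  # False
```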

“Taken together, […] meta-analyses consistently show that treating patients with baseline blood pressure ≥140 mmHg to targets <140 mmHg is beneficial, while more intensive targets may offer additional though probably less robust benefits. […] Overall, compared with people without diabetes, the relative benefits of antihypertensive treatment are similar, and absolute benefits may be greater (5,8,40). […] Multiple-drug therapy is often required to achieve blood pressure targets, particularly in the setting of diabetic kidney disease. However, the use of both ACE inhibitors and ARBs in combination is not recommended given the lack of added ASCVD benefit and increased rate of adverse events — namely, hyperkalemia, syncope, and acute kidney injury (71–73). Titration of and/or addition of further blood pressure medications should be made in a timely fashion to overcome clinical inertia in achieving blood pressure targets. […] there is an absence of high-quality data available to guide blood pressure targets in type 1 diabetes. […] Of note, diastolic blood pressure, as opposed to systolic blood pressure, is a key variable predicting cardiovascular outcomes in people under age 50 years without diabetes and may be prioritized in younger adults (46,47). Though convincing data are lacking, younger adults with type 1 diabetes might more easily achieve intensive blood pressure levels and may derive substantial long-term benefit from tight blood pressure control.”

“Lifestyle management is an important component of hypertension treatment because it lowers blood pressure, enhances the effectiveness of some antihypertensive medications, promotes other aspects of metabolic and vascular health, and generally leads to few adverse effects. […] Lifestyle therapy consists of reducing excess body weight through caloric restriction, restricting sodium intake (<2,300 mg/day), increasing consumption of fruits and vegetables […] and low-fat dairy products […], avoiding excessive alcohol consumption […] (53), smoking cessation, reducing sedentary time (54), and increasing physical activity levels (55). These lifestyle strategies may also positively affect glycemic and lipid control and should be encouraged in those with even mildly elevated blood pressure.”

“Initial treatment for hypertension should include drug classes demonstrated to reduce cardiovascular events in patients with diabetes: ACE inhibitors (65,66), angiotensin receptor blockers (ARBs) (65,66), thiazide-like diuretics (67), or dihydropyridine CCBs (68). For patients with albuminuria (urine albumin-to-creatinine ratio [UACR] ≥30 mg/g creatinine), initial treatment should include an ACE inhibitor or ARB in order to reduce the risk of progressive kidney disease […]. In the absence of albuminuria, risk of progressive kidney disease is low, and ACE inhibitors and ARBs have not been found to afford superior cardioprotection when compared with other antihypertensive agents (69). β-Blockers may be used for the treatment of coronary disease or heart failure but have not been shown to reduce mortality as blood pressure–lowering agents in the absence of these conditions (5,70).”
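
The class-selection logic in that paragraph can be summarized as a small decision rule keyed on the urine albumin-to-creatinine ratio. The sketch below is mine and obviously ignores the many patient-specific factors (comorbidities, cost, tolerability, and so on) that actually drive prescribing:

```python
def initial_antihypertensive_options(uacr_mg_per_g):
    """Sketch of the class-selection rule quoted above: with albuminuria
    (UACR >= 30 mg/g creatinine) start with an ACE inhibitor or ARB; without
    it, any of the four first-line classes is reasonable. This ignores the
    many patient-specific factors a prescriber would actually weigh."""
    if uacr_mg_per_g >= 30:
        return ["ACE inhibitor", "ARB"]
    return ["ACE inhibitor", "ARB", "thiazide-like diuretic", "dihydropyridine CCB"]

print(initial_antihypertensive_options(45))   # albuminuria -> ACE inhibitor or ARB first
print(initial_antihypertensive_options(12))   # no albuminuria -> any first-line class
```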

vi. High Illicit Drug Abuse and Suicide in Organ Donors With Type 1 Diabetes.

“Organ donors with type 1 diabetes represent a unique population for research. Through a combination of immunological, metabolic, and physiological analyses, researchers utilizing such tissues seek to understand the etiopathogenic events that result in this disorder. The Network for Pancreatic Organ Donors with Diabetes (nPOD) program collects, processes, and distributes pancreata and disease-relevant tissues to investigators throughout the world for this purpose (1). Information is also available, through medical records of organ donors, related to causes of death and psychological factors, including drug use and suicide, that impact life with type 1 diabetes.

We reviewed the terminal hospitalization records for the first 100 organ donors with type 1 diabetes in the nPOD database, noting cause, circumstance, and mechanism of death; laboratory results; and history of illicit drug use. Donors were 45% female and 79% Caucasian. Mean age at time of death was 28 years (range 4–61) with mean disease duration of 16 years (range 0.25–52).”

“Documented suicide was found in 8% of the donors, with an average age at death of 21 years and average diabetes duration of 9 years. […] Similarly, a type 1 diabetes registry from the U.K. found that 6% of subjects’ deaths were attributed to suicide (2). […] Additionally, we observed a high rate of illicit substance abuse: 32% of donors reported or tested positive for illegal substances (excluding marijuana), and multidrug use was common. Cocaine was the most frequently abused substance. Alcohol use was reported in 35% of subjects, with marijuana use in 27%. By comparison, 16% of deaths in the U.K. study were deemed related to drug misuse (2).”

“We fully recognize the implicit biases of an organ donor–based population, which may not be […’may not be’ – well, I guess that’s one way to put it! – US] directly comparable to the general population. Nevertheless, the high rate of suicide and drug use should continue to spur our energy and resources toward caring for the emotional and psychological needs of those living with type 1 diabetes. The burden of type 1 diabetes extends far beyond checking blood glucose and administering insulin.”

January 10, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Studies

Depression (II)

I have added some more quotes from the last half of the book as well as some more links to relevant topics below.

“The early drugs used in psychiatry were sedatives, as calming a patient was probably the only treatment that was feasible and available. Also, it made it easier to manage large numbers of individuals with small numbers of staff at the asylum. Morphine, hyoscine, chloral, and later bromide were all used in this way. […] Insulin coma therapy came into vogue in the 1930s following the work of Manfred Sakel […] Sakel initially proposed this treatment as a cure for schizophrenia, but its use gradually spread to mood disorders to the extent that asylums in Britain opened so-called insulin units. […] Recovery from the coma required administration of glucose, but complications were common and death rates ranged from 1–10 per cent. Insulin coma therapy was initially viewed as having tremendous benefits, but later re-examinations have highlighted that the results could also be explained by a placebo effect associated with the dramatic nature of the process or, tragically, because deprivation of glucose supplies to the brain may have reduced the person’s reactivity because it had induced permanent damage.”

“[S]ome respected scientists and many scientific journals remain ambivalent about the empirical evidence for the benefits of psychological therapies. Part of the reticence appears to result from the lack of very large-scale clinical trials of therapies (compared to international, multi-centre studies of medication). However, a problem for therapy research is that there is no large-scale funding from big business for therapy trials […] It is hard to implement optimum levels of quality control in research studies of therapies. A tablet can have the same ingredients and be prescribed in almost exactly the same way in different treatment centres and different countries. If a patient does not respond to this treatment, the first thing we can do is check if they receive the right medication in the correct dose for a sufficient period of time. This is much more difficult to achieve with psychotherapy and fuels concerns about how therapy is delivered and potential biases related to researcher allegiance (i.e. clinical centres that invent a therapy show better outcomes than those that did not) and generalizability (our ability to replicate the therapy model exactly in a different place with different therapists). […] Overall, the ease of prescribing a tablet, the more traditional evidence-base for the benefits of medication, and the lack of availability of trained therapists in some regions means that therapy still plays second fiddle to medications in the majority of treatment guidelines for depression. […] The mainstay of treatments offered to individuals with depression has changed little in the last thirty to forty years. Antidepressants are the first-line intervention recommended in most clinical guidelines”.

“[W]hilst some cases of mild–moderate depression can benefit from antidepressants (e.g. chronic mild depression of several years’ duration can often respond to medication), it is repeatedly shown that the only group who consistently benefit from antidepressants are those with severe depression. The problem is that in the real world, most antidepressants are actually prescribed for less severe cases, that is, the group least likely to benefit; which is part of the reason why the argument about whether antidepressants work is not going to go away any time soon.”

“The economic argument for therapy can only be sustained if it is shown that the long-term outcome of depression (fewer relapses and better quality of life) is improved by receiving therapy instead of medication or by receiving both therapy and medication. Despite claims about how therapies such as CBT, behavioural activation, IPT, or family therapy may work, the reality is that many of the elements included in these therapies are the same as elements described in all the other effective therapies (sometimes referred to as empirically supported therapies). The shared elements include forming a positive working alliance with the depressed person, sharing the model and the plan for therapy with the patient from day one, and helping the patient engage in active problem-solving, etc. Given the degree of overlap, it is hard to make a real case for using one empirically supported therapy instead of another. Also, there are few predictors (besides symptom severity and personal preference) that consistently show who will respond to one of these therapies rather than to medication. […] One of the reasons for some scepticism about the value of therapies for treating depression is that it has proved difficult to demonstrate exactly what mediates the benefits of these interventions. […] despite the enthusiasm for mindfulness, there were fewer than twenty high-quality research trials on its use in adults with depression by the end of 2015 and most of these studies had fewer than 100 participants. […] exercise improves the symptoms of depression compared to no treatment at all, but the currently available studies on this topic are less than ideal (with many problems in the design of the study or sample of participants included in the clinical trial). […] Exercise is likely to be a better option for those individuals whose mood improves from participating in the experience, rather than someone who is so depressed that they feel further undermined by the process or feel guilty about ‘not trying hard enough’ when they attend the programme.”

“Research […] indicates that treatment is important and a study from the USA in 2005 showed that those who took the prescribed antidepressant medications had a 20 per cent lower rate of absenteeism than those who did not receive treatment for their depression. Absence from work is only one half of the depression–employment equation. In recent times, a new concept ‘presenteeism’ has been introduced to try to describe the problem of individuals who are attending their place of work but have reduced efficiency (usually because their functioning is impaired by illness). As might be imagined, presenteeism is a common issue in depression and a study in the USA in 2007 estimated that a depressed person will lose 5–8 hours of productive work every week because the symptoms they experience directly or indirectly impair their ability to complete work-related tasks. For example, depression was associated with reduced productivity (due to lack of concentration, slowed physical and mental functioning, loss of confidence), and impaired social functioning”.

“Health economists do not usually restrict their estimates of the cost of a disorder simply to the funds needed for treatment (i.e. the direct health and social care costs). A comprehensive economic assessment also takes into account the indirect costs. In depression these will include costs associated with employment issues (e.g. absenteeism and presenteeism; sickness benefits), costs incurred by the patient’s family or significant others (e.g. associated with time away from work to care for someone), and costs arising from premature death such as depression-related suicides (so-called mortality costs). […] Studies from around the world consistently demonstrate that the direct health care costs of depression are dwarfed by the indirect costs. […] Interestingly, absenteeism is usually estimated to be about one-quarter of the costs of presenteeism.”

Jakob Klaesi. António Egas Moniz. Walter Jackson Freeman II.
Electroconvulsive therapy.
Vagal nerve stimulation.
Chlorpromazine. Imipramine. Tricyclic antidepressant. MAOIs. SSRIs. John Cade. Mogens Schou. Lithium carbonate.
Psychoanalysis. CBT.
Thomas Szasz.
Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration (Kirsch et al.).
Chronobiology. Chronobiotics. Melatonin.
Eric Kandel. BDNF.
The global burden of disease (Murray & Lopez) (the author discusses some of the data included in that publication).

January 8, 2018 Posted by | Books, Health Economics, Medicine, Pharmacology, Psychiatry, Psychology

Endocrinology (part I – thyroid)

Handbooks like these are difficult to blog, but I decided to try anyway. The first 100 pages or so of the book deal with the thyroid gland. Some observations of interest below.

“Biosynthesis of thyroid hormones requires iodine as substrate. […] The thyroid is the only source of T4. The thyroid secretes 20% of circulating T3; the remainder is generated in extraglandular tissues by the conversion of T4 to T3 […] In the blood, T4 and T3 are almost entirely bound to plasma proteins. […] Only the free or unbound hormone is available to tissues. The metabolic state correlates more closely with the free than the total hormone concentration in the plasma. The relatively weak binding of T3 accounts for its more rapid onset and offset of action. […] The levels of thyroid hormone in the blood are tightly controlled by feedback mechanisms involved in the hypothalamo-pituitary-thyroid (HPT) axis”.

“Annual check of thyroid function [is recommended] in the annual review of diabetic patients.”

“The term thyrotoxicosis denotes the clinical, physiological, and biochemical findings that result when the tissues are exposed to excess thyroid hormone. It can arise in a variety of ways […] It is essential to establish a specific diagnosis […] The term hyperthyroidism should be used to denote only those conditions in which hyperfunction of the thyroid leads to thyrotoxicosis. […] [Thyrotoxicosis is] 10 x more common in ♀ than in ♂ in the UK. Prevalence is approximately 2% of the ♀ population. […] Subclinical hyperthyroidism is defined as low serum thyrotropin (TSH) concentration in patients with normal levels of T4 and T3. Subtle symptoms and signs of thyrotoxicosis may be present. […] There is epidemiological evidence that subclinical hyperthyroidism is a risk factor for the development of atrial fibrillation or osteoporosis (1). Meta-analyses suggest a 41% increase in all-cause mortality (2). […] Thyroid crisis [storm] represents a rare, but life-threatening, exacerbation of the manifestations of thyrotoxicosis. […] the condition is associated with a significant mortality (30-50%, depending on series) […]. Thyroid crisis develops in hyperthyroid patients who: *Have an acute infection. *Undergo thyroidal or non-thyroidal surgery or (rarely) radioiodine treatment.”

“[Symptoms and signs of hyperthyroidism (all forms):] *Hyperactivity, irritability, altered mood, insomnia. *Heat intolerance, sweating. […] *Fatigue, weakness. *Dyspnoea. *Weight loss with ↑ appetite (weight gain in 10% of patients). *Pruritus. […] *Thirst and polyuria. *Oligomenorrhoea or amenorrhoea, loss of libido, erectile dysfunction (50% of men may have sexual dysfunction). *Warm, moist skin. […] *Hair loss. *Muscle weakness and wasting. […] Manifestations of Graves’s disease (in addition to [those factors already mentioned include:]) *Diffuse goitre. *Ophthalmopathy […] A feeling of grittiness and discomfort in the eye. *Retrobulbar pressure or pain, eyelid lag or retraction. […] *Exophthalmos (proptosis) […] Optic neuropathy.”

“Two alternative regimens are practiced for Graves’s disease: dose titration and block and replace. […] The [primary] aim [of the dose titration regime] is to achieve a euthyroid state with relatively high drug doses and then to maintain euthyroidism with a low stable dose. […] This regimen has a lower rate of side effects than the block and replace regimen. The treatment is continued for 18 months, as this appears to represent the length of therapy which is generally optimal in producing the remission rate of up to 40% at 5 years after discontinuing therapy. *Relapses are most likely to occur within the first year […] Men have a higher recurrence rate than women. *Patients with multinodular goitres and thyrotoxicosis always relapse on cessation of antithyroid medication, and definite treatment with radioiodine or surgery is usually advised. […] Block and replace regimen *After achieving a euthyroid state on carbimazole alone, carbimazole at a dose of 40mg daily, together with T4 at a dose of 100 micrograms, can be prescribed. This is usually continued for 6 months. *The main advantages are fewer hospital visits for checks of thyroid function and shorter duration of treatment.”

“Radioiodine treatment[:] Indications: *Definite treatment of multinodular goitre or adenoma. *Relapsed Graves’s disease. […] *Radioactive iodine-131 is administered orally as a capsule or a drink. *There is no universal agreement regarding the optimal dose. […] The recommendation is to administer enough radioiodine to achieve euthyroidism, with the acceptance of a moderate rate of hypothyroidism, e.g. 15-20% at 2 years. […] In general, 50-70% of patients have restored normal thyroid function within 6-8 weeks of receiving radioiodine. […] The prevalence of hypothyroidism is about 50% at 10 years and continues to increase thereafter.”

“Thyrotoxicosis occurs in about 0.2% of pregnancies. […] *Diagnosis of thyrotoxicosis during pregnancy may be difficult or delayed. *Physiological changes of pregnancy are similar to those of hyperthyroidism. […] 5-7% of ♀ develop biochemical evidence of thyroid dysfunction after delivery. An ↑ incidence is seen in patients with type I diabetes mellitus (25%) […] One-third of affected ♀ with post-partum thyroiditis develop symptoms of hypothyroidism […] There is a suggestion of an ↑ risk of post-partum depression in those with hypothyroidism. […] *The use of iodides and radioiodine is contraindicated in pregnancy. *Surgery is rarely performed in pregnancy. It is reserved for patients not responding to ATDs [antithyroid drugs, US]. […] Hyperthyroid ♀ who want to conceive should attain euthyroidism before conception since uncontrolled hyperthyroidism is associated with an ↑ risk of congenital abnormalities (stillbirth and cranial synostosis are the most serious complications).”

“Nodular thyroid disease denotes the presence of single or multiple palpable or non-palpable nodules within the thyroid gland. […] *Clinically apparent thyroid nodules are evident in ~5% of the UK population. […] Thyroid nodules always raise the concern of cancer, but <5% are cancerous. […] clinically detectable thyroid cancer is rare. It accounts for <1% of all cancer and <0.5% of cancer deaths. […] Thyroid cancers are commonest in adults aged 40-50 and rare in children [incidence of 0.2-5 per million per year] and adolescents. […] History should concentrate on: *An enlarging thyroid mass. *A previous history of radiation […] family history of thyroid cancer. *The development of hoarseness or dysphagia. *Nodules are more likely to be malignant in patients <20 or >60 years. *Thyroid nodules are more common in ♀ but more likely to be malignant in ♂. […] Physical findings suggestive of malignancy include a firm or hard, non-tender nodule, a recent history of enlargement, fixation to adjacent tissue, and the presence of regional lymphadenopathy. […] Thyroid nodules may be described as adenomas if the follicular cell differentiation is enclosed within a capsule; adenomatous when the lesions are circumscribed but not encapsulated. *The most common benign thyroid tumours are the nodules of multinodular goitres (colloid nodules) and follicular adenomas. […] Autonomously functioning thyroid adenomas (or nodules) are benign tumours that produce thyroid hormone. Clinically, they present as a single nodule that is hyperfunctioning […], sometimes causing hyperthyroidism.”

“Inflammation of the thyroid gland often leads to a transient thyrotoxicosis followed by hypothyroidism. Overt hypothyroidism caused by autoimmunity has two main forms: Hashimoto’s (goitrous) thyroiditis and atrophic thyroiditis. […] Hashimoto’s thyroiditis [is] [c]haracterized by a painless, variable-sized goitre with rubbery consistency and an irregular surface. […] Occasionally, patients present with thyrotoxicosis in association with a thyroid gland that is unusually firm […] Atrophic thyroiditis [p]robably indicates end-stage thyroid disease. These patients do not have goitre and are antibody [positive]. […] The long-term prognosis of patients with chronic thyroiditis is good because hypothyroidism can easily be corrected with T4 and the goitre is usually not of sufficient size to cause local symptoms. […] there is an association between this condition and thyroid lymphoma (rare, but risk ↑ by a factor of 70).”

“Hypothyroidism results from a variety of abnormalities that cause insufficient secretion of thyroid hormones […] The commonest cause is autoimmune thyroid disease. Myxoedema is severe hypothyroidism [which leads to] thickening of the facial features and a doughy induration of the skin. [The clinical picture of hypothyroidism:] *Insidious, non-specific onset. *Fatigue, lethargy, constipation, cold intolerance, muscle stiffness, cramps, carpal tunnel syndrome […] *Slowing of intellectual and motor activities. *↓ appetite and weight gain. *Dry skin; hair loss. […] [The term] [s]ubclinical hypothyroidism […] is used to denote raised TSH levels in the presence of normal concentrations of free thyroid hormones. *Treatment is indicated if the biochemistry is sustained in patients with a past history of radioiodine treatment for thyrotoxicosis or [positive] thyroid antibodies as, in these situations, progression to overt hypothyroidism is almost inevitable […] There is controversy over the advantages of T4 treatment in patients with [negative] thyroid antibodies and no previous radioiodine treatment. *If treatment is not given, follow-up with annual thyroid function tests is important. *There is no generally accepted consensus of when patients should receive treatment. […] *Thyroid hormone replacement with synthetic levothyroxine remains the treatment of choice in primary hypothyroidism. […] levothyroxine has a narrow therapeutic index […] Elevated TSH despite thyroxine replacement is common, most usually due to lack of compliance.”
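
Pulling together the definitions quoted above and the subclinical hyperthyroidism definition quoted earlier, the basic TSH/free T4 patterns of primary thyroid disease can be sketched as a small lookup. The reference ranges below are illustrative assumptions (they vary by lab and assay), and the mapping ignores T3, secondary/pituitary causes, and non-thyroidal illness:

```python
def interpret_tft(tsh, free_t4,
                  tsh_range=(0.4, 4.0),         # mU/L (illustrative, lab-dependent)
                  free_t4_range=(10.0, 25.0)):  # pmol/L (illustrative, lab-dependent)
    """Very rough mapping from a TSH / free T4 pair to the patterns described
    in the quotes above (primary thyroid disease only; it ignores T3, pituitary
    causes, assay interference, and non-thyroidal illness)."""
    tsh_low, tsh_high = tsh < tsh_range[0], tsh > tsh_range[1]
    t4_low, t4_high = free_t4 < free_t4_range[0], free_t4 > free_t4_range[1]
    if tsh_low and t4_high:
        return "overt hyperthyroidism / thyrotoxicosis"
    if tsh_low and not t4_low and not t4_high:
        return "subclinical hyperthyroidism (confirm T3 is also normal)"
    if tsh_high and t4_low:
        return "overt primary hypothyroidism"
    if tsh_high and not t4_low and not t4_high:
        return "subclinical hypothyroidism"
    return "no primary thyroid pattern on these two values alone"

print(interpret_tft(0.05, 32))   # overt hyperthyroidism / thyrotoxicosis
print(interpret_tft(7.5, 14))    # subclinical hypothyroidism
```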


January 8, 2018 Posted by | Books, Cancer/oncology, Diabetes, Medicine, Ophthalmology, Pharmacology

Depression (I)

Below I have added some quotes and links related to the first half of this book.


“One of the problems encountered in any discussion of depression is that the word is used to mean different things by different people. For many members of the public, the term depression is used to describe normal sadness. In clinical practice, the term depression can be used to describe negative mood states, which are symptoms that can occur in a range of illnesses (e.g. individuals with psychosis may also report depressed mood). However, the term depression can also be used to refer to a diagnosis. When employed in this way it is meant to indicate that a cluster of symptoms have all occurred together, with the most common changes being in mood, thoughts, feelings, and behaviours. Theoretically, all these symptoms need to be present to make a diagnosis of depressive disorder.”

“The absence of any laboratory tests in psychiatry means that the diagnosis of depression relies on clinical judgement and the recognition of patterns of symptoms. There are two main problems with this. First, the diagnosis represents an attempt to impose a ‘present/absent’ or ‘yes/no’ classification on a problem that, in reality, is dimensional and varies in duration and severity. Also, many symptoms are likely to show some degree of overlap with pre-existing personality traits. Taken together, this means there is an ongoing concern about the point at which depression or depressive symptoms should be regarded as a mental disorder, that is, where to situate the dividing line on a continuum from health to normal sadness to illness. Second, for many years, there was a lack of consistent agreement on what combination of symptoms and impaired functioning would benefit from clinical intervention. This lack of consensus on the threshold for treatment, or for deciding which treatment to use, is a major source of problems to this day. […] A careful inspection of the criteria for identifying a depressive disorder demonstrates that diagnosis is mainly reliant on the cross-sectional assessment of the way the person presents at that moment in time. It is also emphasized that the current presentation should represent a change from the person’s usual state, as this step helps to begin the process of differentiating illness episodes from long-standing personality traits. Clarifying the longitudinal history of any lifetime problems can help also to establish, for example, whether the person has previously experienced mania (in which case their diagnosis will be revised to bipolar disorder), or whether they have a history of chronic depression, with persistent symptoms that may be less severe but are nevertheless very debilitating (this is usually called dysthymia). In addition, it is important to assess whether the person has another mental or physical disorder as well as these may frequently co-occur with depression. […] In the absence of diagnostic tests, the current classifications still rely on expert consensus regarding symptom profiles.”

“In summary, for a classification system to have utility it needs to be reliable and valid. If a diagnosis is reliable doctors will all make the same diagnosis when they interview patients who present with the same set of symptoms. If a diagnosis has predictive validity it means that it is possible to forecast the future course of the illness in individuals with the same diagnosis and to anticipate their likely response to different treatments. For many decades, the lack of reliability so undermined the credibility of psychiatric diagnoses that most of the revisions of the classification systems between the 1950s and 2010 focused on improving diagnostic reliability. However, insufficient attention has been given to validity and until this is improved, the criteria used for diagnosing depressive disorders will continue to be regarded as somewhat arbitrary […]. Weaknesses in the systems for the diagnosis and classification of depression are frequently raised in discussions about the existence of depression as a separate entity and concerns about the rationale for treatment. It is notable that general medicine uses a similar approach to making decisions regarding the health–illness dimension. For example, levels of blood pressure exist on a continuum. However, when an individual’s blood pressure measurement reaches a predefined level, it is reported that the person now meets the criteria specified for the diagnosis of hypertension (high blood pressure). Depending on the degree of variation from the norm or average values for their age and gender, the person will be offered different interventions. […] This approach is widely accepted as a rational approach to managing this common physical health problem, yet a similar ‘stepped care’ approach to depression is often derided.”

“There are few differences in the nature of the symptoms experienced by men and women who are depressed, but there may be gender differences in how their distress is expressed or how they react to the symptoms. For example, men may be more likely to become withdrawn rather than to seek support from or confide in other people, they may become more outwardly hostile and have a greater tendency to use alcohol to try to cope with their symptoms. It is also clear that it may be more difficult for men to accept that they have a mental health problem and they are more likely to deny it, delay seeking help, or even to refuse help. […] becoming unemployed, retirement, and loss of a partner and change of social roles can all be risk factors for depression in men. In addition, chronic physical health problems or increasing disability may also act as a precipitant. The relationship between physical illness and depression is complex. When people are depressed they may subjectively report that their general health is worse than that of other people; likewise, people who are ill or in pain may react by becoming depressed. Certain medical problems such as an under-functioning thyroid gland (hypothyroidism) may produce symptoms that are virtually indistinguishable from depression. Overall, the rate of depression in individuals with a chronic physical disease is almost three times higher than those without such problems.”

“A long-standing problem in gathering data about suicide is that many religions and cultures regard it as a sin or an illegal act. This has had several consequences. For example, coroners and other public officials often strive to avoid identifying suspicious deaths as a suicide, meaning that the actual rates of suicide may be under-reported.”

“In Beck’s [depression] model, it is proposed that an individual’s interpretations of events or experiences are encapsulated in automatic thoughts, which arise immediately following the event or even at the same time. […] Beck suggested that these automatic thoughts occur at a conscious level and can be accessible to the individual, although they may not be actively aware of them because they are not concentrating on them. The appraisals that occur in specific situations largely determine the person’s emotional and behavioural responses […] [I]n depression, the content of a person’s thinking is dominated by negative views of themselves, their world, and their future (the so-called negative cognitive triad). Beck’s theory suggests that the themes included in the automatic thoughts are generated via the activation of underlying cognitive structures, called dysfunctional beliefs (or cognitive schemata). All individuals develop a set of rules or ‘silent assumptions’ derived from early learning experiences. Whilst automatic thoughts are momentary, event-specific cognitions, the underlying beliefs operate across a variety of situations and are more permanent. Most of the underlying beliefs held by the average individual are quite adaptive and guide our attempts to act and react in a considered way. Individuals at risk of depression are hypothesized to hold beliefs that are maladaptive and can have an unhelpful influence on them. […] faulty information processing contributes to further deterioration in a person’s mood, which sets up a vicious cycle with more negative mood increasing the risk of negative interpretations of day-to-day life experiences and these negative cognitions worsening the depressed mood. Beck suggested that the underlying beliefs that render an individual vulnerable to depression may be broadly categorized into beliefs about being helpless or unlovable. […] Beliefs about ‘the self’ seem especially important in the maintenance of depression, particularly when connected with low or variable self-esteem.”

“[U]nidimensional models, such as the monoamine hypothesis or the social origins of depression model, are important building blocks for understanding depression. However, in reality there is no one cause and no single pathway to depression and […] multiple factors increase vulnerability to depression. Whether or not someone at risk of depression actually develops the disorder is partly dictated by whether they are exposed to certain types of life events, the perceived level of threat or distress associated with those events (which in turn is influenced by cognitive and emotional reactions and temperament), their ability to cope with these experiences (their resilience or adaptability under stress), and the functioning of their biological stress-sensitivity systems (including the thresholds for switching on their body’s stress responses).”

Some links:

Humorism. Marsilio Ficino. Thomas Willis. William Cullen. Philippe Pinel. Benjamin Rush. Emil Kraepelin. Karl Leonhard. Sigmund Freud.
Relation between depression and sociodemographic factors.
Bipolar disorder.
Postnatal depression. Postpartum psychosis.
Epidemiology of suicide. Durkheim’s typology of suicide.
Suicide methods.
Neuroendocrine hypothesis of depression. HPA (Hypothalamic–Pituitary–Adrenal) axis.
Cognitive behavioral therapy.
Coping responses.
Brown & Harris (1978).

January 5, 2018 Posted by | Books, Medicine, Psychiatry, Psychology

Books 2017

Here’s a goodreads overview of the books I read, with cover images of the books.

Below the comments here I’ve added a list of books I’ve read in 2017, as well as relevant links to blog posts and reviews. The letters ‘f’, ‘nf’, and ‘m’ in the parentheses indicate which type of book it was; ‘f’ refers to ‘fiction’ books, ‘nf’ to ‘non-fiction’ books, and the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.

I read 162 books in 2017. In terms of the typology mentioned above I read 108 fiction books, 46 non-fiction books, and 8 ‘miscellaneous’ books. These categories are slightly arbitrary, and the distinction between ‘miscellaneous’ and ‘fiction’ in particular was occasionally hard to draw cleanly; I have as a rule included all books which combine fiction and history in the fiction category, but I think it would be fair to say that ‘some books in the fiction category are more fictitious than others’. Many of the Flashman novels contain a lot of (true) history about the events taking place, and I’ve previously seen some people in all seriousness recommend these books to people who requested books about a specific historical topic (for example Flashman and the Dragon is a brilliant book to have a look at if you’re curious to know more about the Taiping Rebellion; and relatedly if you’re interested in naval warfare during the Napoleonic Wars, I believe you could do worse than having a go at the Aubrey-Maturin series).

I’ll probably continue to update this post for some time into 2018, as I’d like to provide a bit more coverage of some of the books than I already have. The current list, as I’m writing this post, includes 62 direct links to blog posts with coverage of non-fiction books, as well as links to 49 reviews on goodreads. It’s perhaps worth mentioning here that the links included in the list below are not the only ‘book-relevant posts’ I’ve written this year; a total count of the blog posts I’ve posted this year and categorized under the blog’s book category comes to almost 100 (94) book-related posts; a substantial proportion of the remainder of the book posts were posts in which I’d included quotes from fiction books as well as the new language/words posts in which I include new words I encounter while reading (mostly fiction-) books, in order to improve my vocabulary. Many of the non-book posts I published this year were posts covering scientific studies and lectures; I covered 26 lectures in 2017, i.e. one every fortnight on average, and 19 of the other non-lecture/non-book-related posts provided coverage of various studies. Book-related posts make up more than half of the posts I published in 2017; the total number of posts was 177, which is quite close to one post every second day on average.

I have tried throughout the year to provide at least some coverage of the great majority of the non-fiction books I read; my intention from the start was to either blog a non-fiction book, or to add a review on goodreads about the book – unless there was some compelling reason not to review or blog the book. That’s how it’s played out. The few non-fiction books I have not (…yet?) either blogged or reviewed are 3 ‘paper books’ (Boyd & Richerson, Tainter, Browning). I have talked before here on the blog about how ‘paper books’ take a lot more effort to blog than electronic books do – see e.g. the comments included in this post.

As a rule my goodreads reviews are ‘low effort’ reviews (‘minutes’) and my blog posts are ‘high effort’ (‘hours’); there are probably a couple of exceptions below where I’ve actually spent some time on a goodreads review, but that’s the exception, not the rule.

The aforementioned goodreads overview includes the observation that I read 45,406 pages in 2017. There’s always some overlap when it comes to these things (books you start reading one year and only finish the next), and some measurement error, but I don’t have any better data than what goodreads provides, so it is what it is. It corresponds to ~125 pages/day throughout the year, slightly less than last year (47,281 pages, ~130 pages/day). The average page count of the books I read was ~280 pages, and my average goodreads rating of the books I rated was 3.3.
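
The derived figures above are simple arithmetic on the goodreads totals; here is a minimal Python sketch reproducing them (the inputs are the numbers reported above, the variable names and rounding are mine):

pages_2017 = 45406   # total pages read in 2017, per goodreads
pages_2016 = 47281   # total pages read in 2016, per goodreads
books_2017 = 162     # books read in 2017

print(round(pages_2017 / 365))         # 124, i.e. roughly the ~125 pages/day quoted above
print(round(pages_2016 / 365))         # 130, matching the ~130 pages/day quoted for last year
print(round(pages_2017 / books_2017))  # 280, the average page count per book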

Some of the fiction authors I read this year include: Rex Stout (51 books), Jim Butcher (15), George MacDonald Fraser (12), Ernest Bramah (6), Connie Willis (5), and James Herriot (5).

I usually like to explore how the non-fiction books I read were categorized, as it says something about which kinds of topics I read about throughout the year. One major change is that I read a lot more physics- and chemistry-related books than in previous years. This year I posted 22 posts categorized under both ‘books’ and ‘physics’, and 18 posts categorized under both ‘books’ and ‘chemistry’; note that I also posted non-book-related posts on these topics – the total number of posts categorized under ‘physics’ this year is 30, as I e.g. also covered some lectures from the Institute for Advanced Studies. In terms of pages, a brief count told me that I read roughly 3,000 book-pages of physics (2,937) and maybe 1,700 book-pages of chemistry in 2017. The categorization here is a bit iffy (see also this), and the page count depends a bit on which books you decide to include, but it’s in that neighbourhood; do note that there’s a substantial amount of overlap between the two. On the other hand I read fewer medical textbooks than usual, even if my coverage of medical topics on the blog has not decreased substantially (it may in fact have increased, though I’m hesitant to spend time trying to clarify this); I posted a total of 64 posts in 2017 categorized under ‘medicine’, and during the second half of the year alone I covered a total of 88 papers (but only 3 textbooks…) on the topic of diabetes.

I’ve listed the books in the order they were read.

1. Brief Candles (3, f). Manning Coles.

2. Galaxies: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

3. Mirabile (2, f). Janet Kagan. Short goodreads review here.

4. Blackout (5, f). Connie Willis. Goodreads review here (note that this review is a ‘composite review’ of both Blackout and All Clear).

5. All Clear (5, f). Connie Willis.

6. The Laws of Thermodynamics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

7. A Knight of the Seven Kingdoms (3, f). George R. R. Martin. Goodreads review here.

8. The Economics of International Immigration (1, nf. Springer). Goodreads review here.

9. American Gods (2, f). Neil Gaiman. Short goodreads review here – I was not impressed.

10. The Story of the Stone (3, f). Barry Hughart. Goodreads review here.

11. Particle Physics: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

12. The Wallet of Kai Lung (4, f). Ernest Bramah. Goodreads review here.

13. Kai Lung’s Golden Hours (4, f). Ernest Bramah.

14. Kai Lung Unrolls His Mat (4, f). Ernest Bramah. Goodreads review here.

15. Anaesthesia: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

16. The Moon of Much Gladness (5, f). Ernest Bramah. Goodreads review here.

17. All Trivia – A collection of reflections & aphorisms (2, m). Logan Pearsall Smith. Short goodreads review here.

18. Rocks: A very short introduction (3, nf. Oxford University Press). Blog coverage here.

19. Kai Lung Beneath the Mulberry-Tree (4, f). Ernest Bramah.

20. Economic Analysis in Healthcare (2, nf. Wiley). Blog coverage here and here.

21. The Best of Connie Willis: Award-Winning Stories (f). Connie Willis. Goodreads review here.

22. The Winds of Marble Arch and Other Stories (f). Connie Willis. Many of the comments that applied to the book above (see my review link) also apply here (in part because a substantial number of stories are included in both books).

23. Endgame (f). Samuel Beckett. Short goodreads review here.

24. Kai Lung Raises His Voice (4, f). Ernest Bramah. Goodreads review here.

25. All Creatures Great and Small (5, m). James Herriot. Goodreads review here. I added this book to my list of favourite books on goodreads.

26. The Red House Mystery (4, f). A. A. Milne. Short goodreads review here.

27. All Things Bright and Beautiful (5, m). James Herriot. Short goodreads review here.

28. All Things Wise and Wonderful (4, m). James Herriot. Goodreads review here.

29. The Lord God Made Them All (4, m). James Herriot.

30. Every Living Thing (5, m). James Herriot. Goodreads review here.

31. The Faber Book Of Aphorisms (3, m). W. H. Auden and Louis Kronenberger. Goodreads review here.

32. Flashman (5, f). George MacDonald Fraser. Short goodreads review here.

33. Royal Flash (4, f). George MacDonald Fraser.

34. Flashman’s Lady (3, f). George MacDonald Fraser. Goodreads review here.

35. Flashman and the Mountain of Light (5, f). George MacDonald Fraser. Short goodreads review here.

36. Flash for Freedom! (3, f). George MacDonald Fraser.

37. Flashman and the Redskins (4, f). George MacDonald Fraser.

38. Biodemography of Aging: Determinants of Healthy Life Span and Longevity (5, nf. Springer). Long, takes a lot of work. I added this book to my list of favorite books on goodreads. Blog coverage here, here, here, and here.

39. Flashman at the Charge (4, f). George MacDonald Fraser.

40. Flashman in the Great Game (3, f). George MacDonald Fraser.

41. Nuclear Physics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

42. Fer-de-Lance (4, f). Rex Stout.

43. Computer Science: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

44. The League of Frightened Men (4, f). Rex Stout.

45. Not by Genes Alone: How Culture Transformed Human Evolution (5, nf. University of Chicago Press). I added this book to my list of favorite books on goodreads.

46. The Rubber Band (3, f). Rex Stout.

47. The Red Box (3, f). Rex Stout.

48. Too Many Cooks (3, f). Rex Stout.

49. Some Buried Caesar (4, f). Rex Stout.

50. Over My Dead Body (3, f). Rex Stout.

51. The Education of Man (1, m). Heinrich Pestalozzi. Short goodreads review here. I included some quotes from the book in this post.

52. Where There’s a Will (3, f). Rex Stout.

53. Black Orchids (3, f). Rex Stout. Goodreads review here.

54. Not Quite Dead Enough (5, f). Rex Stout. Goodreads review here.

55. The Silent Speaker (4, f). Rex Stout.

56. Astrophysics: A Very Short Introduction (2, nf. Oxford University Press). Goodreads review here. Blog coverage here.

57. Too Many Women (4, f). Rex Stout.

58. And Be a Villain (3, f). Rex Stout.

59. Trouble in Triplicate (2, f). Rex Stout. Goodreads review here.

60. The Antarctic: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here. Blog coverage here.

61. The Second Confession (3, f). Rex Stout.

62. Three Doors to Death (3, f). Rex Stout. Very short goodreads review here.

63. In the Best Families (4, f). Rex Stout. Goodreads review here.

64. Stars: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

65. Curtains for Three (4, f). Rex Stout. Very short goodreads review here.

66. Murder by the Book (4, f). Rex Stout.

67. Triple Jeopardy (4, f). Rex Stout. Very short goodreads review here.

68. The Personality Puzzle (1, nf. W. W. Norton & Company). Long, but poor. Blog coverage here, here, here, and here.

69. Prisoner’s Base (4, f). Rex Stout.

70. The Golden Spiders (3, f). Rex Stout.

71. Three Men Out (3, f). Rex Stout.

72. The Black Mountain (4, f). Rex Stout. Short goodreads review here.

73. Beyond Significance Testing: Statistics Reform in the Behavioral Sciences (4, nf. American Psychological Association). Blog coverage here, here, here, here, and here.

74. Before Midnight (3, f). Rex Stout.

75. How Species Interact: Altering the Standard View on Trophic Ecology (4, nf. Oxford University Press). Blog coverage here.

76. Three Witnesses (4, f). Rex Stout.

77. Might As Well Be Dead (4, f). Rex Stout.

78. Gravity: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

79. Three for the Chair (3, f). Rex Stout.

80. If Death Ever Slept (3, f). Rex Stout.

81. And Four to Go (3, f). Rex Stout.

82. Champagne for One (4, f). Rex Stout.

83. Plot It Yourself (5, f). Rex Stout. Short goodreads review here.

84. Three At Wolfe’s Door (3, f). Rex Stout.

85. Too Many Clients (4, f). Rex Stout.

86. First Farmers: The Origins of Agricultural Societies (5, nf. Blackwell Publishing). I added this book to my list of favorite books on goodreads. Blog coverage here.

87. The Final Deduction (4, f). Rex Stout.

88. Homicide Trinity (4, f). Rex Stout.

89. Gambit (5, f). Rex Stout. Very short goodreads review here.

90. The Mother Hunt (3, f). Rex Stout.

91. Trio for Blunt Instruments (3, f). Rex Stout.

92. A Right to Die (2, f). Rex Stout.

93. Concepts and Methods in Infectious Disease Surveillance (2, nf. Wiley-Blackwell). Blog coverage here, here, here, and here.

94. The Doorbell Rang (5, f). Rex Stout.

95. Death of a Doxy (4, f). Rex Stout.

96. The Father Hunt (3, f). Rex Stout. Short goodreads review here.

97. Death of a Dude (3, f). Rex Stout.

98. Gastrointestinal Function in Diabetes Mellitus (5, nf. John Wiley & Sons). Short goodreads review here. I added this book to my list of favorite books on goodreads. This post included a few observations from the book. I also covered the book here, here, and here.

99. Please Pass the Guilt (2, f). Rex Stout.

100. Depression and Heart Disease (4, nf. John Wiley & Sons). Blog coverage here and here.

101. A Family Affair (4, f). Rex Stout.

102. The Sound of Murder (2, f). Rex Stout.

103. The Broken Vase (f). Rex Stout. (I forgot to add/rate this one on goodreads shortly after I’d read it and only noticed the omission much later – so I decided not to rate it.)

104. Flashman and the Angel of the Lord (4, f). George MacDonald Fraser.

105. The Collapse of Complex Societies (1, nf. Cambridge University Press).

106. Flashman and the Dragon (5, f). George MacDonald Fraser.

107. Magnetism: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

108. Flashman on the March (3, f). George MacDonald Fraser.

109. Flashman and the Tiger (3, f). George MacDonald Fraser. Goodreads review here.

110. Light: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

111. Double for Death (3, f). Rex Stout. Goodreads review here.

112. Red Threads (2, f). Rex Stout.

113. The Fall of Rome And the End of Civilization (5, nf. Oxford University Press). Blog coverage here.

114. Sound: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

115. The Mountain Cat Murders (2, f). Rex Stout.

116. Storm Front (4, f). Jim Butcher.

117. Fool Moon (3, f). Jim Butcher.

118. Grave Peril (4, f). Jim Butcher.

119. Summer Knight (4, f). Jim Butcher.

120. The History of Astronomy: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

121. Death Masks (4, f). Jim Butcher.

122. Blood Rites (4, f). Jim Butcher.

123. Dead Beat (4, f). Jim Butcher.

124. Proven Guilty (5, f). Jim Butcher.

125. Earth System Science: A Very Short Introduction (nf. Oxford University Press). Blog coverage here.

126. White Night (3, f). Jim Butcher.

127. Small Favor (3, f). Jim Butcher.

128. Turn Coat (4, f). Jim Butcher.

129. Physical Chemistry: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

130. Changes (3, f). Jim Butcher.

131. Ghost Story (4, f). Jim Butcher. Very short goodreads review here.

132. Cold Days (4, f). Jim Butcher.

133. Child Psychology: A Very Short Introduction (1, nf. Oxford University Press). Very short goodreads review here. Blog coverage here.

134. Skin Game (3, f). Jim Butcher.

135. Animal Farm (3, f). George Orwell. Goodreads review here.

136. Bellwether (3, f). Connie Willis.

137. Enter the Saint (1, f). Leslie Charteris.

138. Organic Chemistry: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

139. The Shadow of the Torturer (2, f). Gene Wolfe.

140. Molecules: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

141. The Claw of the Conciliator (1, f). Gene Wolfe. Goodreads review here.

142. Common Errors in Statistics (4, nf. John Wiley & Sons). Blog coverage here, here, and here.

143. Master and Commander (3, f). Patrick O’Brian.

144. Materials: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here and here.

145. Post Captain (3, f). Patrick O’Brian.

146. Isotopes: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

147. Radioactivity: A Very Short Introduction (2, nf. Oxford University Press). Short goodreads review here. Blog coverage here.

148. Current Topics in Occupational Epidemiology (4, nf. Oxford University Press). Short goodreads review here. Blog coverage here, here, and here.

149. HMS Surprise (3, f). Patrick O’Brian.

150. Never Let Me Go (5, f). Kazuo Ishiguro. I added this book to my list of favorite books on goodreads.

151. Nuclear Power: A Very Short Introduction (2, nf. Oxford University Press). Short goodreads review here. Blog coverage here and here.

152. Nuclear Weapons: A Very Short Introduction (1, nf. Oxford University Press). Goodreads review here.

153. Assassin’s Apprentice (4, f). Robin Hobb.

154. Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland (4, nf. Harper Perennial).

155. Plate Tectonics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

156. The Mauritius Command (2, f). Patrick O’Brian.

157. Royal Assassin (3, f). Robin Hobb.

158. The Periodic Table: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

159. Civil Engineering: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here and here.

160. Depression: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here and here.

161. Autism: A Very Short Introduction (nf. Oxford University Press). Goodreads review here.

162. Legends (f). Robert Silverberg (editor). Goodreads review here.

January 1, 2018 Posted by | Books, Personal

Random stuff

I have almost stopped publishing posts like these, which has resulted in the accumulation of a very large number of links and studies I figured I might like to blog at some point. This post is mainly an attempt to deal with that backlog – I won’t cover the material in much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signal the ability to provide good genes, material resources, and protect offspring (Buss and Shackelford 1997; Volk et al. 2012) because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rivals’ appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how the studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’, and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note, in the context of the self-report problem, that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people whether they have been bullied is not much better than asking people whether they bully others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already know is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”
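As an aside, the practical meaning of the Elo figures in the quotes above can be read off the standard Elo expected-score formula; here is a small sketch of the arithmetic (this is just the textbook logistic model, nothing specific to the match conditions, and the function name is mine):

def expected_score(elo_diff):
    # Expected fraction of points (win = 1, draw = 0.5, loss = 0) scored by a
    # player rated elo_diff points above the opponent, under the standard Elo model.
    return 1 / (1 + 10 ** (-elo_diff / 400))

print(expected_score(100))  # ~0.64: a 100-point gap corresponds to scoring ~64% of the points
print(expected_score(50))   # ~0.57: roughly what would remain if half of that gap were closed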

iv. Thrombolytics for Acute Ischemic Stroke – no benefit found.

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8120) has been studied in relevant, large scale, high quality trials thus far. […] There are 12 such trials 1-12. Despite the temptation to pool these data the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogenous to be validly pooled.14 Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n=10) as those concluding benefit (n=2), and 6675 subjects in trials demonstrating no benefit compared to 1445 subjects in trials concluding benefit.”

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled therefore, debating between a ‘yellow’ or ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%), and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect. 17 We do not endorse this choice because of clinical heterogeneity. However, we present the NNT’s from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients die because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
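
The ‘1 in 17’, ‘1 in 100’ and ‘1 in 20’ figures in that last quote are just reciprocals of the quoted absolute risk differences (the ‘number needed to treat/harm’); a quick sketch of the arithmetic (function name mine):

def nnt(absolute_risk_difference):
    # Number needed to treat (or harm) = 1 / absolute risk difference.
    return 1 / absolute_risk_difference

print(round(nnt(0.06)))          # ~17 treated for each patient avoiding an unfavorable outcome
print(round(nnt(0.01)))          # 100 treated for each additional death
print(round(nnt(0.05)))          # 20 treated for each additional nonfatal intracranial hemorrhage
print(round(nnt(0.01 + 0.05)))   # ~17 treated for each patient harmed (death or hemorrhage)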

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

Related: Suicidal ideation and suicide plans or attempts in adults with Asperger’s syndrome attending a specialist diagnostic clinic: a clinical cohort study.

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9·6 [95% CI 7·6–11·9], p<0·0001), people with one, two, or more medical illnesses (p<0·0001), or people with psychotic illness (p=0·019). […] Lifetime experience of depression (p=0·787), suicidal ideation (p=0·164), and suicide plans or attempts (p=0·06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and to contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example, a paper I blogged a while back found that in that particular (large, Swedish population-based cohort) study, people with ASD were more than 7 times as likely to die from suicide as the comparable controls.

Also related: Suicidal tendencies hard to spot in some people with autism.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than are most people.

vi. Early versus Delayed Invasive Intervention in Acute Coronary Syndromes.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
Hadamard product (matrices).
True polar wander.
Newton’s cradle.

viii. Determinant versus permanent (mathematics – technical).
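
In brief, for readers who don’t follow the link: the permanent is defined exactly like the determinant, only without the alternating signs. A small illustrative brute-force sketch over permutations (fine for tiny matrices only; the function name is mine):

from itertools import permutations

def det_and_perm(matrix):
    # Both quantities sum the products a[0][s[0]] * a[1][s[1]] * ... over all
    # permutations s of the column indices; the determinant weights each term
    # by the permutation's sign, the permanent weights every term by +1.
    n = len(matrix)
    det = perm = 0
    for s in permutations(range(n)):
        product = 1
        for i in range(n):
            product *= matrix[i][s[i]]
        inversions = sum(s[i] > s[j] for i in range(n) for j in range(i + 1, n))
        det += (-1) ** inversions * product
        perm += product
    return det, perm

print(det_and_perm([[1, 2], [3, 4]]))  # (-2, 10): 1*4 - 2*3 versus 1*4 + 2*3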

ix. Some years ago I wrote a few English-language posts about various statistical/demographic properties of immigrants living in Denmark, based on numbers included in a publication by Statistics Denmark; I did it by translating the observations included in that publication, which was only available in Danish. I briefly considered doing the same thing again when the 2017 data arrived, but decided against it, as I recalled that those posts took a lot of time to write and it didn’t seem worth the effort – but Danish readers might be interested in having a look at the data, if they haven’t already: here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first Russian ever to win the Russian Chess Championship 8 times. He’s currently shared-second in the World Rapid Championship after 10 rounds and is now in the top 10 on the live rating list in both classical and rapid – seems like he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.

December 28, 2017 Posted by | Astronomy, autism, Biology, Cardiology, Chess, Computer science, History, Mathematics, Medicine, Neurology, Physics, Psychiatry, Psychology, Random stuff, Statistics, Studies, Wikipedia, Zoology


It’s been a while since I posted one of these.

I know for certain that quite a few of the words included below are words I encountered while reading the Jim Butcher books Ghost Story, Cold Days, and Skin Game, and I also know that some of the more recently added ones are words I encountered while reading the Oxford Handbook of Endocrinology and Diabetes. Almost half of the words, however, came from a list I keep of words I’d like to eventually include in posts like these; that list had grown rather long and unwieldy, so I decided to include a lot of words from it in this post. I have almost no idea where I encountered most of those words (I add to the list whenever I encounter a word I particularly like or am not familiar with, regardless of the source, and I usually do not note the source).

Chemosis. Asthenia. Arcuate. Onycholysis. Nubble. Colliery. Fomite. Riparian. Guglet/goglet. Limbus. Stupe. Osier. Synostosis. Amscray. Slosh. Dowel. Swill. Tocometer. Raster. Squab.

Antiquer. Ritzy. Boutonniere. Exfiltrate. Lurch. Placard. Futz. Bleary. Scapula. Bobble. Frigorific. Skerry. Trotter. Raffinate. Truss. Despoliation. Primogeniture. Whelp. Ethmoid. Rollick.

Fireplug. Taupe. Obviate. Koi. Doughboy. Guck. Flophouse. Vane. Gast. Chastisement. Rink. Wakizashi. Culvert. Lickety-split. Whipsaw. Spall. Tine. Nadir. Periwinkle. Pitter-patter.

Sidle. Iridescent. Feint. Flamberge. Batten. Gangplank. Meander. Flunky. Futz. Thwack. Prissy. Vambrace. Tasse. Smarmy. Abut. Jounce. Wright. Ebon. Skin game. Shimmer.

December 27, 2017 Posted by | Books, Language