Econstudentlog

Rivers (II)

Some more observations from the book and related links below.

“By almost every measure, the Amazon is the greatest of all the large rivers. Encompassing more than 7 million square kilometres, its drainage basin is the largest in the world and makes up 5% of the global land surface. The river accounts for nearly one-fifth of all the river water discharged into the oceans. The flow is so great that water from the Amazon can still be identified 125 miles out in the Atlantic […] The Amazon has some 1,100 tributaries, and 7 of these are more than 1,600 kilometres long. […] In the lowlands, most Amazonian rivers have extensive floodplains studded with thousands of shallow lakes. Up to one-quarter of the entire Amazon Basin is periodically flooded, and these lakes become progressively connected with each other as the water level rises.”

“To hydrologists, the term ‘flood’ refers to a river’s annual peak discharge period, whether the water inundates the surrounding landscape or not. In more common parlance, however, a flood is synonymous with the river overflowing its banks […] Rivers flood in the natural course of events. This often occurs on the floodplain, as the name implies, but flooding can affect almost all of the length of the river. Extreme weather, particularly heavy or protracted rainfall, is the most frequent cause of flooding. The melting of snow and ice is another common cause. […] River floods are one of the most common natural hazards affecting human society, frequently causing social disruption, material damage, and loss of life. […] Most floods have a seasonal element in their occurrence […] It is a general rule that the magnitude of a flood is inversely related to its frequency […] Many of the less predictable causes of flooding occur after a valley has been blocked by a natural dam as a result of a landslide, glacier, or lava flow. Natural dams may cause upstream flooding as the blocked river forms a lake and downstream flooding as a result of failure of the dam.”
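The magnitude–frequency rule quoted above is usually put into numbers via return periods estimated from a record of annual peak flows. Below is a minimal sketch using the standard Weibull plotting-position formula, T = (n + 1)/m, with invented peak discharges rather than anything taken from the book:

```python
# Empirical flood return periods from a series of annual peak discharges.
# The peak values below are illustrative, not real gauge data.
annual_peaks = [850, 1200, 640, 2100, 930, 1750, 700, 1400, 3100, 1100]  # m^3/s

n = len(annual_peaks)
ranked = sorted(annual_peaks, reverse=True)  # largest flood gets rank 1

for rank, peak in enumerate(ranked, start=1):
    # Weibull plotting position: return period T = (n + 1) / m,
    # where m is the rank of the flood in descending order of size.
    return_period = (n + 1) / rank
    print(f"peak {peak:>5} m^3/s ~ exceeded about once every {return_period:4.1f} years")
```

The largest flood in the record comes out with the longest return period – the inverse magnitude–frequency relationship in its simplest empirical form.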

“The Tigris-Euphrates, Nile, and Indus are all large, exotic river systems, but in other respects they are quite different. The Nile has a relatively gentle gradient in Egypt and a channel that has experienced only small changes over the last few thousand years, by meander cut-off and a minor shift eastwards. The river usually flooded in a regular and predictable way. The stability and long continuity of the Egyptian civilization may be a reflection of its river’s relative stability. The steeper channel of the Indus, by contrast, has experienced major avulsions over great distances on the lower Indus Plain and some very large floods caused by the failure of glacier ice dams in the Himalayan mountains. Likely explanations for the abandonment of many Harappan cities […] take account of damage caused by major floods and/or the disruption caused by channel avulsion leading to a loss of water supply. Channel avulsion was also a problem for the Sumerian civilization on the alluvial plain called Mesopotamia […] known for the rise and fall of its numerous city states. Most of these cities were situated along the Euphrates River, probably because it was more easily controlled for irrigation purposes than the Tigris, which flowed faster and carried much more water. However, the Euphrates was an anastomosing river with multiple channels that diverge and rejoin. Over time, individual branch channels ceased to flow as others formed, and settlements located on these channels inevitably declined and were abandoned as their water supply ran dry, while others expanded as their channels carried greater amounts of water.”

“During the colonization of the Americas in the mid-18th century and the imperial expansion into Africa and Asia in the late 19th century, rivers were commonly used as boundaries because they were the first, and frequently the only, features mapped by European explorers. The diplomats in Europe who negotiated the allocation of colonial territories claimed by rival powers knew little of the places they were carving up. Often, their limited knowledge was based solely on maps that showed few details, rivers being the only distinct physical features marked. Today, many international river boundaries remain as legacies of those historical decisions based on poor geographical knowledge because states have been reluctant to alter their territorial boundaries from original delimitation agreements. […] no less than three-quarters of the world’s international boundaries follow rivers for at least part of their course. […] approximately 60% of the world’s fresh water is drawn from rivers shared by more than one country.”

“The sediments carried in rivers, laid down over many years, represent a record of the changes that have occurred in the drainage basin through the ages. Analysis of these sediments is one way in which physical geographers can interpret the historical development of landscapes. They can study the physical and chemical characteristics of the sediments themselves and/or the biological remains they contain, such as pollen or spores. […] The simple rate at which material is deposited by a river can be a good reflection of how conditions have changed in the drainage basin. […] Pollen from surrounding plants is often found in abundance in fluvial sediments, and the analysis of pollen can yield a great deal of information about past conditions in an area. […] Very long sediment cores taken from lakes and swamps enable us to reconstruct changes in vegetation over very long time periods, in some cases over a million years […] Because climate is a strong determinant of vegetation, pollen analysis has also proved to be an important method for tracing changes in past climates.”

“The energy in flowing and falling water has been harnessed to perform work by turning water-wheels for more than 2,000 years. The moving water turns a large wheel and a shaft connected to the wheel axle transmits the power from the water through a system of gears and cogs to work machinery, such as a millstone to grind corn. […] The early medieval watermill was able to do the work of between 30 and 60 people, and by the end of the 10th century in Europe, waterwheels were commonly used in a wide range of industries, including powering forge hammers, oil and silk mills, sugar-cane crushers, ore-crushing mills, breaking up bark in tanning mills, pounding leather, and grinding stones. Nonetheless, most were still used for grinding grains for preparation into various types of food and drink. The Domesday Book, a survey prepared in England in AD 1086, lists 6,082 watermills, although this is probably a conservative estimate because many mills were not recorded in the far north of the country. By 1300, this number had risen to exceed 10,000. [..] Medieval watermills typically powered their wheels by using a dam or weir to concentrate the falling water and pond a reserve supply. These modifications to rivers became increasingly common all over Europe, and by the end of the Middle Ages, in the mid-15th century, watermills were in use on a huge number of rivers and streams. The importance of water power continued into the Industrial Revolution […]. The early textile factories were built to produce cloth using machines driven by waterwheels, so they were often called mills. […] [Today,] about one-third of all countries rely on hydropower for more than half their electricity. Globally, hydropower provides about 20% of the world’s total electricity supply.”
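To get a feel for the quantities involved in water power, the usable power is conventionally estimated as efficiency × water density × g × discharge × head. A rough sketch with assumed figures (none of them taken from the book):

```python
# Back-of-the-envelope hydropower estimate: P = eta * rho * g * Q * h.
rho = 1000.0      # density of water, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
flow = 3.0        # discharge through the wheel or turbine, m^3/s (assumed)
head = 4.0        # height through which the water falls, m (assumed)
efficiency = 0.7  # fraction of the water's energy converted to useful work (assumed)

power_watts = efficiency * rho * g * flow * head
print(f"Usable power: {power_watts / 1000:.1f} kW")  # roughly 82 kW for these figures
```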

“Deliberate manipulation of river channels through engineering works, including dam construction, diversion, channelization, and culverting, […] has a long history. […] In Europe today, almost 80% of the total discharge of the continent’s major rivers is affected by measures designed to regulate flow, whether for drinking water supply, hydroelectric power generation, flood control, or any other reason. The proportion in individual countries is higher still. About 90% of rivers in the UK are regulated as a result of these activities, while in the Netherlands this percentage is close to 100. By contrast, some of the largest rivers on other continents, including the Amazon and the Congo, are hardly manipulated at all. […] Direct and intentional modifications to rivers are complemented by the impacts of land use and land use changes which frequently result in the alteration of rivers as an unintended side effect. Deforestation, afforestation, land drainage, agriculture, and the use of fire have all had significant impacts, with perhaps the most extreme effects produced by construction activity and urbanization. […] The major methods employed in river regulation are the construction of large dams […], the building of run-of-river impoundments such as weirs and locks, and by channelization, a term that covers a range of river engineering works including widening, deepening, straightening, and the stabilization of banks. […] Many aspects of a dynamic river channel and its associated ecosystems are mutually adjusting, so a human activity in a landscape that affects the supply of water or sediment is likely to set off a complex cascade of other alterations.”

“The methods of storage (in reservoirs) and distribution (by canal) have not changed fundamentally since the earliest river irrigation schemes, with the exception of some contemporary projects’ use of pumps to distribute water over greater distances. Nevertheless, many irrigation canals still harness the force of gravity. Half the world’s large dams (defined as being 15 metres or higher) were built exclusively or primarily for irrigation, and about one-third of the world’s irrigated cropland relies on reservoir water. In several countries, including such populous nations as India and China, more than 50% of arable land is irrigated by river water supplied from dams. […] Sadly, many irrigation schemes are not well managed and a number of environmental problems are frequently experienced as a result, both on-site and off-site. In many large networks of irrigation canals, less than half of the water diverted from a river or reservoir actually benefits crops. A lot of water seeps away through unlined canals or evaporates before reaching the fields. Some also runs off the fields or infiltrates through the soil, unused by plants, because farmers apply too much water or at the wrong time. Much of this water seeps back into nearby streams or joins underground aquifers, so can be used again, but the quality of water may deteriorate if it picks up salts, fertilizers, or pesticides. Excessive applications of irrigation water often result in rising water tables beneath fields, causing salinization and waterlogging. These processes reduce crop yields on irrigation schemes all over the world.”

“[Deforestation can contribute] to the degradation of aquatic habitats in numerous ways. The loss of trees along river banks can result in changes in the species found in the river because fewer trees means a decline in plant matter and insects falling from them, items eaten by some fish. Fewer trees on river banks also results in less shade. More sunlight reaching the river results in warmer water and the enhanced growth of algae. A change in species can occur as fish that feed on falling food are edged out by those able to feed on algae. Deforestation also typically results in more runoff and more soil erosion. This sediment may cover spawning grounds, leading to lower reproduction rates. […] Grazing and trampling by livestock reduces vegetation cover and causes the compaction of soil, which reduces its infiltration capacity. As rainwater passes over or through the soil in areas of intensive agriculture, it picks up residues from pesticides and fertilizers and transports them to rivers. In this way, agriculture has become a leading source of river pollution in certain parts of the world. Concentrations of nitrates and phosphates, derived from fertilizers, have risen notably in many rivers in Europe and North America since the 1950s and have led to a range of […] problems encompassed under the term ‘eutrophication’ – the raising of biological productivity caused by nutrient enrichment. […] In slow-moving rivers […] the growth of algae reduces light penetration and depletes the oxygen in the water, sometimes causing fish kills.”

“One of the most profound ways in which people alter rivers is by damming them. Obstructing a river and controlling its flow in this way brings about a raft of changes. A dam traps sediments and nutrients, alters the river’s temperature and chemistry, and affects the processes of erosion and deposition by which the river sculpts the landscape. Dams create more uniform flow in rivers, usually by reducing peak flows and increasing minimum flows. Since the natural variation in flow is important for river ecosystems and their biodiversity, when dams even out flows the result is commonly fewer fish of fewer species. […] the past 50 years or so has seen a marked escalation in the rate and scale of construction of dams all over the world […]. At the beginning of the 21st century, there were about 800,000 dams worldwide […] In some large river systems, the capacity of dams is sufficient to hold more than the entire annual discharge of the river. […] Globally, the world’s major reservoirs are thought to control about 15% of the runoff from the land. The volume of water trapped worldwide in reservoirs of all sizes is no less than five times the total global annual river flow […] Downstream of a reservoir, the hydrological regime of a river is modified. Discharge, velocity, water quality, and thermal characteristics are all affected, leading to changes in the channel and its landscape, plants, and animals, both on the river itself and in deltas, estuaries, and offshore. By slowing the flow of river water, a dam acts as a trap for sediment and hence reduces loads in the river downstream. As a result, the flow downstream of the dam is highly erosive. A relative lack of silt arriving at a river’s delta can result in more coastal erosion and the intrusion of seawater that brings salt into delta ecosystems. […] The dam-barrier effect on migratory fish and their access to spawning grounds has been recognized in Europe since medieval times.”

“One of the most important effects cities have on rivers is the way in which urbanization affects flood runoff. Large areas of cities are typically impermeable, being covered by concrete, stone, tarmac, and bitumen. This tends to increase the amount of runoff produced in urban areas, an effect exacerbated by networks of storm drains and sewers. This water carries relatively little sediment (again, because soil surfaces have been covered by impermeable materials), so when it reaches a river channel it typically causes erosion and widening. Larger and more frequent floods are another outcome of the increase in runoff generated by urban areas. […] It […] seems very likely that efforts to manage the flood hazard on the Mississippi have contributed to an increased risk of damage from tropical storms on the Gulf of Mexico coast. The levées built along the river have contributed to the loss of coastal wetlands, starving them of sediment and fresh water, thereby reducing their dampening effect on storm surge levels. This probably enhanced the damage from Hurricane Katrina which struck the city of New Orleans in 2005.”

Links:

Onyx River.
Yangtze. Yangtze floods.
Missoula floods.
Murray River.
Ganges.
Thalweg.
Southeastern Anatolia Project.
Water conflict.
Hydropower.
Fulling mill.
Maritime transport.
Danube.
Lock (water navigation).
Hydrometry.
Yellow River.
Aswan High Dam. Warragamba Dam. Three Gorges Dam.
Onchocerciasis.
River restoration.


January 16, 2018 Posted by | Biology, Books, Ecology, Engineering, Geography, Geology, History

Endocrinology (part 2 – pituitary)

Below I have added some observations from the second chapter of the book, which covers the pituitary gland.

“The pituitary gland is centrally located at the base of the brain in the sella turcica within the sphenoid bone. It is attached to the hypothalamus by the pituitary stalk and a fine vascular network. […] The pituitary measures around 13mm transversely, 9mm anteroposteriorly, and 6mm vertically and weighs approximately 100mg. It increases during pregnancy to almost twice its normal size, and it decreases in the elderly. *Magnetic resonance imaging (MRI) currently provides the optimal imaging of the pituitary gland. *Computed tomography (CT) scans may still be useful in demonstrating calcification in tumours […] and hyperostosis in association with meningiomas or evidence of bone destruction. […] T1-weighted images demonstrate cerebrospinal fluid (CSF) as dark grey and brain as much whiter. This imaging is useful for demonstrating anatomy clearly. […] On T1-weighted images, pituitary adenomas are of lower signal intensity than the remainder of the normal gland. […] The presence of microadenomas may be difficult to demonstrate.”

“Hypopituitarism refers to either partial or complete deficiency of anterior and/or posterior pituitary hormones and may be due to [primary] pituitary disease or to hypothalamic pathology which interferes with the hypothalamic control of the pituitary. Causes: *Pituitary tumours. *Parapituitary tumours […] *Radiotherapy […] *Pituitary infarction (apoplexy), Sheehan’s syndrome. *Infiltration of the pituitary gland […] *infection […] *Trauma […] *Subarachnoid haemorrhage. *Isolated hypothalamic-releasing hormone deficiency, e.g. Kallmann’s syndrome […] *Genetic causes [Let’s stop here: Point is, lots of things can cause pituitary problems…] […] The clinical features depend on the type and degree of hormonal deficits, and the rate of its development, in addition to whether there is intercurrent illness. In the majority of cases, the development of hypopituitarism follows a characteristic order, with secretion of GH [growth hormone, US], then gonadotrophins, being affected first, followed by TSH [Thyroid-Stimulating Hormone, US] and ACTH [Adrenocorticotropic Hormone, US] secretion at a later stage. PRL [prolactin, US] deficiency is rare, except in Sheehan’s syndrome associated with failure of lactation. ADH [antidiuretic hormone, US] deficiency is virtually unheard of with pituitary adenomas but may be seen rarely with infiltrative disorders and trauma. The majority of the clinical features are similar to those occurring when there is target gland insufficiency. […] NB Houssay phenomenon. Amelioration of diabetes mellitus in patients with hypopituitarism due to reduction in counter-regulatory hormones. […] The aims of investigation of hypopituitarism are to biochemically assess the extent of pituitary hormone deficiency and also to elucidate the cause. […] Treatment involves adequate and appropriate hormone replacement […] and management of the underlying cause.”

“Apoplexy refers to infarction of the pituitary gland due to either haemorrhage or ischaemia. It occurs most commonly in patients with pituitary adenomas, usually macroadenomas […] It is a medical emergency, and rapid hydrocortisone replacement can be lifesaving. It may present with […] sudden onset headache, vomiting, meningism, visual disturbance, and cranial nerve palsy.”

“Anterior pituitary hormone replacement therapy is usually performed by replacing the target hormone rather than the pituitary or hypothalamic hormone that is actually deficient. The exceptions to this are GH replacement […] and when fertility is desired […] [In the context of thyroid hormone replacement:] In contrast to replacement in [primary] hypothyroidism, the measurement of TSH cannot be used to assess adequacy of replacement in TSH deficiency due to hypothalamo-pituitary disease. Therefore, monitoring of treatment in order to avoid under- and over-replacement should be via both clinical assessment and by measuring free thyroid hormone concentrations […] [In the context of sex hormone replacement:] Oestrogen/testosterone administration is the usual method of replacement, but gonadotrophin therapy is required if fertility is desired […] Patients with ACTH deficiency usually need glucocorticoid replacement only and do not require mineralocorticoids, in contrast to patients with Addison’s disease. […] Monitoring of replacement [is] important to avoid over-replacement which is associated with ↑ BP, elevated glucose and insulin, and reduced bone mineral density (BMD). Under-replacement leads to the non-specific symptoms, as seen in Addison’s disease […] Conventional replacement […] may overtreat patients with partial ACTH deficiency.”

“There is now a considerable amount of evidence that there are significant and specific consequences of GH deficiency (GHD) in adults and that many of these features improve with GH replacement therapy. […] It is important to differentiate between adult and childhood onset GHD. […] the commonest cause in childhood is an isolated variable deficiency of GH-releasing hormone (GHRH) which may resolve in adult life […] It is, therefore, important to retest patients with childhood onset GHD when linear growth is completed (50% recovery of this group). Adult onset GHD usually occurs [secondarily] to a structural pituitary or parapituitary condition or due to the effects of surgical treatment or radiotherapy. Prevalence[:] *Adult onset GHD 1/10,000 *Adult GHD due to adult and childhood onset GHD 3/10,000. Benefits of GH replacement[:] *Improved QoL and psychological well-being. *Improved exercise capacity. *↑ lean body mass and reduced fat mass. *Prolonged GH replacement therapy (>12-24 months) has been shown to increase BMD, which would be expected to reduce fracture rate. *There are, as yet, no outcome studies in terms of cardiovascular mortality. However, GH replacement does lead to a reduction (~15%) in cholesterol. GH replacement also leads to improved ventricular function and ↑ left ventricular mass. […] All patients with GHD should be considered for GH replacement therapy. […] adverse effects experienced with GH replacement usually resolve with dose reduction […] GH treatment may be associated with impairment of insulin sensitivity, and therefore markers of glycemia should be monitored. […] Contraindications to GH replacement[:] *Active malignancy. *Benign intracranial hypertension. *Pre-proliferative/proliferative retinopathy in diabetes mellitus.”

“*Pituitary adenomas are the most common pituitary disease in adults and constitute 10-15% of primary brain tumours. […] *The incidence of clinically apparent pituitary disease is 1 in 10,000. *Pituitary carcinoma is very rare (<0.1% of all tumours) and is most commonly ACTH- or prolactin-secreting. […] *Microadenoma <1cm. *Macroadenoma >1cm. [In terms of the functional status of tumours, the break-down is as follows:] *Prolactinoma 35-40%. *Non-functioning 30-35%. *Growth hormone (acromegaly) 10-15%. *ACTH adenoma (Cushing’s disease) 5-10%. *TSH adenoma <5%. […] Pituitary disease is associated with an increased mortality, predominantly due to vascular disease. This may be due to oversecretion of GH or ACTH, hormone deficiencies or excessive replacement (e.g. of hydrocortisone).”

“*Prolactinomas are the commonest functioning pituitary tumour. […] Malignant prolactinomas are very rare […] [Clinical features of hyperprolactinaemia:] *Galactorrhoea (up to 90%♀, <10% ♂). *Disturbed gonadal function [menstrual disturbance, infertility, reduced libido, ED in ♂] […] Hyperprolactinaemia is associated with a long-term risk of ↓ BMD. […] Hypothyroidism and chronic renal failure are causes of hyperprolactinaemia. […] Antipsychotic agents are the most likely psychotropic agents to cause hyperprolactinaemia. […] Macroadenomas are space-occupying tumours, often associated with bony erosion and/or cavernous sinus invasion. […] *Invasion of the cavernous sinus may lead to cranial nerve palsies. *Occasionally, very invasive tumours may erode bone and present with a CSF leak or [secondary] meningitis. […] Although microprolactinomas may expand in size without treatment, the vast majority do not. […] Macroprolactinomas, however, will continue to expand and lead to pressure effects. Definitive treatment of the tumour is, therefore, necessary.”

“Dopamine agonist treatment […] leads to suppression of PRL in most patients [with prolactinoma], with [secondary] effects of normalization of gonadal function and termination of galactorrhoea. Tumour shrinkage occurs at a variable rate (from 24h to 6-12 months) and extent and must be carefully monitored. Continued shrinkage may occur for years. Slow chiasmal decompression will correct visual field defects in the majority of patients, and immediate surgical decompression is not necessary. […] Cabergoline is more effective in normalization of PRL in microprolactinoma […], with fewer side effects than bromocriptine. […] Tumour enlargement following initial shrinkage on treatment is usually due to non-compliance. […] Since the introduction of dopamine agonist treatment, transsphenoidal surgery is indicated only for patients who are resistant to, or intolerant of, dopamine agonist treatment. The cure rate for macroprolactinomas treated with surgery is poor (30%), and, therefore, drug treatment is first-line in tumours of all size. […] Standard pituitary irradiation leads to slow reduction (over years) of PRL in the majority of patients. […] Radiotherapy is not indicated in the management of patients with microprolactinomas. It is useful in the treatment of macroprolactinomas once the tumour has been shrunken away from the chiasm, only if the tumour is resistant.”

“Acromegaly is the clinical condition resulting from prolonged excessive GH and hence IGF-1 secretion in adults. GH secretion is characterized by blunting of pulsatile secretion and failure of GH to become undetectable during the 24h day, unlike normal controls. […] *Prevalence 40-86 cases/million population. Annual incidence of new cases in the UK is 4/million population. *Onset is insidious, and there is, therefore, often a considerable delay between onset of clinical features and diagnosis. Most cases are diagnosed at 40-60 years. […] Pituitary gigantism [is] [t]he clinical syndrome resulting from excess GH secretion in children prior to fusion of the epiphyses. […] ↑ growth velocity without premature pubertal manifestations should arouse suspicion of pituitary gigantism. […] Causes of acromegaly[:] *Pituitary adenoma (>99% of cases). Macroadenomas 60-80%, microadenomas 20-40%. […] The clinical features arise from the effects of excess GH/IGF-1, excess PRL in some (as there is co-secretion of PRL in a minority (30%) of tumours […]) and the tumour mass. [Signs and symptoms:] *Sweating – >80% of patients. *Headaches […] *Tiredness and lethargy. *Joint pains. *Change in ring or shoe size. *Facial appearance. Coarse features […] enlarged nose […] prognathism […] interdental separation. […] Enlargement of hands and feet […] [Complications:] *Hypertension (40%). *Insulin resistance and impaired glucose tolerance (40%)/diabetes mellitus (20%). *Obstructive sleep apnea – due to soft tissue swelling […] Ischaemic heart disease and cerebrovascular disease.”

“Management of acromegaly[:] The management strategy depends on the individual patient and also on the tumour size. Lowering of GH is essential in all situations […] Transsphenoidal surgery […] is usually the first line for treatment in most centres. *Reported cure rates vary: 40-91% for microadenomas and 10-48% for macroadenomas, depending on surgical expertise. […] Using the definition of post-operative cure as mean GH <2.5 micrograms/L, the reported recurrence rate is low (6% at 5 years). Radiotherapy […] is usually reserved for patients following unsuccessful transsphenoidal surgery; only occasionally is it used as [primary] therapy. […] normalization of mean GH may take several years and, during this time, adjunctive medical treatment (usually with somatostatin analogues) is required. […] Radiotherapy can induce GH deficiency which may need GH therapy. […] Somatostatin analogues lead to suppression of GH secretion in 20-60% of patients with acromegaly. […] some patients are partial responders, and although somatostatin analogues will lead to lowering of mean GH, they do not suppress to normal despite dose escalation. These drugs may be used as [primary] therapy where the tumour does not cause mass effects or in patients who have received surgery and/or radiotherapy who have elevated mean GH. […] Dopamine agonists […] lead to lowering of GH levels but, very rarely, lead to normalization of GH or IGF-1 (<30%). They may be helpful, particularly if there is coexistent secretion of PRL, and, in these cases, there may be significant tumour shrinkage. […] GH receptor antagonists [are] [i]ndicated for somatostatin non-responders.”

“Cushing’s syndrome is an illness resulting from excess cortisol secretion, which has a high mortality if left untreated. There are several causes of hypercortisolaemia which must be differentiated, and the commonest cause is iatrogenic (oral, inhaled, or topical steroids). […] ACTH-dependent Cushing’s must be differentiated from ACTH-independent disease (usually due to an adrenal adenoma, or, rarely, carcinoma […]). Once a diagnosis of ACTH-dependent disease has been established, it is important to differentiate between pituitary-dependent (Cushing’s disease) and ectopic secretion. […] [Cushing’s disease is rare;] annual incidence approximately 2/million. The vast majority of Cushing’s syndrome is due to a pituitary ACTH-secreting corticotroph microadenoma. […] The features of Cushing’s syndrome are progressive and may be present for several years prior to diagnosis. […] *Facial appearance – round plethoric complexion, acne and hirsutism, thinning of scalp hair. *Weight gain – truncal obesity, buffalo hump […] *Skin – thin and fragile […] easy bruising […] *Proximal muscle weakness. *Mood disturbance – labile, depression, insomnia, psychosis. *Menstrual disturbance. *Low libido and impotence. […] Associated features [include:] *Hypertension (>50%) due to mineralocorticoid effects of cortisol […] *Impaired glucose tolerance/diabetes mellitus (30%). *Osteopenia and osteoporosis […] *Vascular disease […] *Susceptibility to infections. […] Cushing’s is associated with a hypercoagulable state, with increased cardiovascular thrombotic risks. […] Hypercortisolism suppresses the thyroidal, gonadal, and GH axes, leading to lowered levels of TSH and thyroid hormones as well as reduced gonadotrophins, gonadal steroids, and GH.”

“Treatment of Cushing’s disease[:] Transsphenoidal surgery [is] the first-line option in most cases. […] Pituitary radiotherapy [is] usually administered as second-line treatment, following unsuccessful transsphenoidal surgery. […] Medical treatment [is] indicated during the preoperative preparation of patients or while awaiting radiotherapy to be effective or if surgery or radiotherapy are contraindicated. *Inhibitors of steroidogenesis: metyrapone is usually used first-line, but ketoconazole should be used as first-line in children […] Disadvantage of these agents inhibiting steroidogenesis is the need to increase the dose to maintain control, as ACTH secretion will increase as cortisol concentrations decrease. […] Successful treatment (surgery or radiotherapy) of Cushing’s disease leads to cortisol deficiency and, therefore, glucocorticoid replacement therapy is essential. […] *Untreated [Cushing’s] disease leads to an approximately 30-50% mortality at 5 years, owing to vascular disease and susceptibility to infections. *Treated Cushing’s syndrome has a good prognosis […] *Although the physical features and severe psychological disorders associated with Cushing’s improve or resolve within weeks or months of successful treatment, more subtle mood disturbance may persist for longer. Adults may also have impaired cognitive function. […] it is likely that there is an ↑ cardiovascular risk. *Osteoporosis will usually resolve in children but may not improve significantly in older patients. […] *Hypertension has been shown to resolve in 80% and diabetes mellitus in up to 70%. *Recent data suggests that mortality even with successful treatment of Cushing’s is increased significantly.”

“The term incidentaloma refers to an incidentally detected lesion that is unassociated with hormonal hyper- or hyposecretion and has a benign natural history. The increasingly frequent detection of these lesions with technological improvements and more widespread use of sophisticated imaging has led to a management challenge – which, if any, lesions need investigation and/or treatment, and what is the optimal follow-up strategy (if required at all)? […] *Imaging studies using MRI demonstrate pituitary microadenomas in approximately 10% of normal volunteers. […] Clinically significant pituitary tumours are present in about 1 in 1,000 patients. […] Incidentally detected microadenomas are very unlikely (<10%) to increase in size whereas larger incidentally detected meso- and macroadenomas are more likely (40-50%) to enlarge. Thus, conservative management in selected patients may be appropriate for microadenomas which are incidentally detected […]. Macroadenomas should be treated, if possible.”

“Non-functioning pituitary tumours […] are unassociated with clinical syndromes of anterior pituitary hormone excess. […] Non-functioning pituitary tumours (NFA) are the commonest pituitary macroadenoma. They represent around 28% of all pituitary tumours. […] 50% enlarge, if left untreated, at 5 years. […] Tumour behaviour is variable, with some tumours behaving in a very indolent, slow-growing manner and others invading the sphenoid and cavernous sinus. […] At diagnosis, approximately 50% of patients are gonadotrophin-deficient. […] The initial definitive management in virtually every case is surgical. This removes mass effects and may lead to some recovery of pituitary function in around 10%. […] The use of post-operative radiotherapy remains controversial. […] The regrowth rate at 10 years without radiotherapy approaches 45% […] administration of post-operative radiotherapy reduces this regrowth rate to <10%. […] however, there are sequelae to radiotherapy – with a significant long-term risk of hypopituitarism and a possible risk of visual deterioration and malignancy in the field of radiation. […] Unlike the case for GH- and PRL-secreting tumours, medical therapy for NFAs is usually unhelpful […] Gonadotrophinomas […] are tumours that arise from the gonadotroph cells of the pituitary gland and produce FSH, LH, or the α subunit. […] they are usually silent and unassociated with excess detectable secretion of LH and FSH […] [they] present in the same manner as other non-functioning pituitary tumours, with mass effects and hypopituitarism […] These tumours are managed as non-functioning tumours.”

“The posterior lobe of the pituitary gland arises from the forebrain and comprises up to 25% of the normal adult pituitary gland. It produces arginine vasopressin and oxytocin. […] Oxytocin has no known role in ♂ […] In ♀, oxytocin contracts the pregnant uterus and also causes breast duct smooth muscle contraction, leading to breast milk ejection during breastfeeding. […] However, oxytocin deficiency has no known adverse effect on parturition or breastfeeding. […] Arginine vasopressin is the major determinant of renal water excretion and, therefore, fluid balance. Its main action is to reduce free water clearance. […] Many substances modulate vasopressin secretion, including the catecholamines and opioids. *The main site of action of vasopressin is in the collecting duct and the thick ascending loop of Henle […] Diabetes Insipidus (DI) […] is defined as the passage of large volumes (>3L/24h) of dilute urine (osmolality <300mOsm/kg). [It may be] [d]ue to deficiency of circulating arginine vasopressin [or] [d]ue to renal resistance to vasopressin.” […lots of other causes as well – trauma, tumours, inflammation, infection, vascular, drugs, genetic conditions…]

“Hyponatraemia […] Incidence *1-6% of hospital admissions Na<130mmol/L. *15-22% hospital admissions Na<135mmol/L. […] True clinically apparent hyponatraemia is associated with either excess water or salt deficiency. […] Features *Depend on the underlying cause and also on the rate of development of hyponatraemia. May develop once sodium reaches 115mmol/L or earlier if the fall is rapid. Level at 100mmol/L or less is life-threatening. *Features of excess water are mainly neurological because of brain injury […] They include confusion and headache, progressing to seizures and coma. […] SIADH [Syndrome of Inappropriate ADH, US] is a common cause of hyponatraemia. […] The elderly are more prone to SIADH, as they are unable to suppress ADH as efficiently […] ↑ risk of hyponatraemia with SSRIs. […] rapid overcorrection of hyponatraemia may cause central pontine myelinolysis (demyelination).”

“The hypothalamus releases hormones that act as releasing hormones at the anterior pituitary gland. […] The commonest syndrome to be associated with the hypothalamus is abnormal GnRH secretion, leading to reduced gonadotrophin secretion and hypogonadism. Common causes are stress, weight loss, and excessive exercise.”

January 14, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Ophthalmology, Pharmacology

Rivers (I)

I gave the book one star on goodreads. My review on goodreads explains why. In this post I’ll disregard the weak parts of the book and only cover ‘the good stuff’. Part of the reason why I gave the book one star instead of two was that I wanted to punish the author for wasting my time with irrelevant stuff when it was clear to me that he could actually have been providing useful information instead; some parts of the book are quite good.

Some quotes and links below.

“[W]ater is continuously on the move, being recycled between the land, oceans, and atmosphere: an eternal succession known as the hydrological cycle. Rivers play a key role in the hydrological cycle, draining water from the land and moving it ultimately to the sea. Any rain or melted snow that doesn’t evaporate or seep into the earth flows downhill over the land surface under the influence of gravity. This flow is channelled by small irregularities in the topography into rivulets that merge to become gullies that feed into larger channels. The flow of rivers is augmented with water flowing through the soil and from underground stores, but a river is more than simply water flowing to the sea. A river also carries rocks and other sediments, dissolved minerals, plants, and animals, both dead and alive. In doing so, rivers transport large amounts of material and provide habitats for a great variety of wildlife. They carve valleys and deposit plains, being largely responsible for shaping the Earth’s continental landscapes. Rivers change progressively over their course from headwaters to mouth, from steep streams that are narrow and turbulent to wider, deeper, often meandering channels. From upstream to downstream, a continuum of change occurs: the volume of water flowing usually increases and coarse sediments grade into finer material. In its upper reaches, a river erodes its bed and banks, but this removal of earth, pebbles, and sometimes boulders gives way to the deposition of material in lower reaches. In tune with these variations in the physical characteristics of the river, changes can also be seen in the types of creatures and plants that make the river their home. […] Rivers interact with the sediments beneath the channel and with the air above. The water flowing in many rivers comes both directly from the air as rainfall – or another form of precipitation – and also from groundwater sources held in rocks and gravels beneath, both being flows of water through the hydrological cycle.”

“One interesting aspect of rivers is that they seem to be organized hierarchically. When viewed from an aircraft or on a map, rivers form distinct networks like the branches of a tree. Small tributary channels join together to form larger channels which in turn merge to form still larger rivers. This progressive increase in river size is often described using a numerical ordering scheme in which the smallest stream is called first order, the union of two first-order channels produces a second-order river, the union of two second-order channels produces a third-order river, and so on. Stream order only increases when two channels of the same rank merge. Very large rivers, such as the Nile and Mississippi, are tenth-order rivers; the Amazon twelfth order. Each river drains an area of land that is proportional to its size. This area is known by several different terms: drainage basin, river basin, or catchment (‘watershed’ is also used in American English, but this word means the drainage divide between two adjacent basins in British English). In the same way that a river network is made up of a hierarchy of low-order rivers nested within higher-order rivers, their drainage basins also fit together to form a nested hierarchy. In other words, smaller units are repeating elements nested within larger units. All of these units are linked by flows of water, sediment, and energy. Recognizing rivers as being made up of a series of units that are arranged hierarchically provides a potent framework in which to study the patterns and processes associated with rivers. […] processes operating at the upper levels of the hierarchy exert considerable influence over features lower down in the hierarchy, but not the other way around. […] Generally, the larger the spatial scale, the slower the processes and rates of change.”
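The ordering scheme described in this passage (the Strahler ordering) is easy to compute once a network is represented as a tree of tributaries. A small sketch, using a made-up toy network rather than any real river:

```python
# Strahler stream ordering for a river network represented as a tree:
# each channel maps to the list of tributaries that join to form it.
# The network below is a toy example, not a real drainage basin.
network = {
    "outlet": ["A", "B"],
    "A": ["A1", "A2"],
    "B": ["B1", "B2", "B3"],
    "A1": [], "A2": [], "B1": [], "B2": [], "B3": [],
}

def stream_order(channel):
    tributaries = network[channel]
    if not tributaries:              # a headwater stream is first order
        return 1
    orders = [stream_order(t) for t in tributaries]
    top = max(orders)
    # Order increases only when two (or more) channels of the same
    # highest rank merge; otherwise the highest rank simply carries through.
    return top + 1 if orders.count(top) >= 2 else top

print(stream_order("outlet"))        # 3 for this toy network
```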

The stuff above incidentally – and curiously – links very closely with the material covered in Holland’s book on complexity, which I finished just the day before I started reading this one. That book has a lot more stuff about things like nested hierarchies and that ‘potent framework’ mentioned above, and how to go about analyzing such things. (I found that book hard to blog – at least at first, which is why I’m right now covering this book instead; but I do hope to get to it later, it was quite interesting).

“Measuring the length of a river is more complicated than it sounds. […] Disagreements about the true source of many rivers have been a continuous feature of [the] history of exploration. […] most rivers typically have many tributaries and hence numerous sources. […] But it gets more confusing. Some rivers do not have a mouth. […] Some rivers have more than one channel. […] Yet another important part of measuring the length of a river is the scale at which it is measured. Fundamentally, the length of a river varies with the map scale because different amounts of detail are generalized at different scales.”
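The scale point can be demonstrated with a toy calculation: sample the same winding course at different intervals, and the straight segments between sample points cut more or fewer corners, so the measured length changes. The ‘river’ below is just a sine-wave path, purely for illustration:

```python
import math

# The measured length of a winding course depends on how finely it is sampled.
def measured_length(step):
    xs = [i * step for i in range(int(100 / step) + 1)]
    points = [(x, 10 * math.sin(x / 5)) for x in xs]  # a stand-in meandering path
    return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

for step in (0.1, 1, 5, 20):
    print(f"sampled every {step:>4} units -> measured length ~ {measured_length(step):6.1f}")
```

Coarser sampling (in effect, a smaller map scale) gives a shorter measured length, which is the sense in which a river’s length depends on the scale at which it is measured.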

“Two particularly important properties of river flow are velocity and discharge – the volume of water moving past a point over some interval of time […]. A continuous record of discharge plotted against time is called a hydrograph which, depending on the time frame chosen, may give a detailed depiction of a flood event over a few days, or the discharge pattern over a year or more. […] River flow is dependent upon many different factors, including the area and shape of the drainage basin. If all else is equal, larger basins experience larger flows. A river draining a circular basin tends to have a peak in flow because water from all its tributaries arrives at more or less the same time as compared to a river draining a long, narrow basin in which water arrives from tributaries in a more staggered manner. The surface conditions in a basin are also important. Vegetation, for example, intercepts rainfall and hence slows down its movement into rivers. Climate is a particularly significant determinant of river flow. […] All the rivers with the greatest flows are almost entirely located in the humid tropics, where rainfall is abundant throughout the year. […] Rivers in the humid tropics experience relatively constant flows throughout the year, but perennial rivers in more seasonal climates exhibit marked seasonality in flow. […] Some rivers are large enough to flow through more than one climate region. Some desert rivers, for instance, are perennial because they receive most of their flow from high rainfall areas outside the desert. These are known as ‘exotic’ rivers. The Nile is an example […]. These rivers lose large amounts of water – by evaporation and infiltration into soils – while flowing through the desert, but their volumes are such that they maintain their continuity and reach the sea. By contrast, many exotic desert rivers do not flow into the sea but deliver their water to interior basins.”

…and in rare cases, so much water is contributed to the interior basin that that basin’s actually categorized as a ‘sea’. However humans tend to mess such things up. Amu Darya and Syr Darya used to flow into the Aral Sea, until Soviet planners decided they shouldn’t do that anymore. Goodbye Aral Sea – hello Aralkum Desert!
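As for discharge itself – the volume of water passing a point per unit time – a rough figure for an idealized channel is just the cross-sectional area multiplied by the mean velocity. A minimal sketch with assumed dimensions (not taken from the book):

```python
# Discharge for an idealized rectangular channel: Q = width * depth * mean velocity.
width = 25.0     # channel width, m (assumed)
depth = 2.0      # mean depth, m (assumed)
velocity = 1.2   # mean flow velocity, m/s (assumed)

discharge = width * depth * velocity   # m^3/s
print(f"Discharge: {discharge:.0f} m^3/s")
# A hydrograph is simply this quantity recorded and plotted against time.
```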

“An important measure of the way a river system moulds its landscape is the ‘drainage density’. This is the sum of the channel length divided by the total area drained, which reflects the spacing of channels. Hence, drainage density expresses the degree to which a river dissects the landscape, effectively controlling the texture of relief. Numerous studies have shown that drainage density has a great range in different regions, depending on conditions of climate, vegetation, and geology particularly. […] Rivers shape the Earth’s continental landscapes in three main ways: by the erosion, transport, and deposition of sediments. These three processes have been used to recognize a simple three-part classification of individual rivers and river networks according to the dominant process in each of three areas: source, transfer, and depositional zones. The first zone consists of the river’s upper reaches, the area from which most of the water and sediment are derived. This is where most of the river’s erosion occurs, and this eroded material is transported through the second zone to be deposited in the third zone. These three zones are idealized because some sediment is eroded, stored, and transported in each of them, but within each zone one process is dominant.”
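Drainage density as defined above is a one-line calculation once the channels in a basin have been mapped; the numbers below are invented, purely to show the units:

```python
# Drainage density: total channel length in a basin divided by the area drained.
channel_lengths_km = [12.4, 8.1, 5.6, 3.2, 2.7, 1.9]  # every mapped channel (assumed values)
basin_area_km2 = 18.5                                  # area drained (assumed)

drainage_density = sum(channel_lengths_km) / basin_area_km2  # km of channel per km^2
print(f"Drainage density: {drainage_density:.2f} km/km^2")
```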

“The flow of water carries […] sediment in three ways: dissolved material […] moves in solution; small particles are carried in suspension; and larger particles are transported along the stream bed by rolling, sliding, or a bouncing movement known as ‘saltation’. […] Globally, it is estimated that rivers transport around 15 billion tonnes of suspended material annually to the oceans, plus about another 4 billion tonnes of dissolved material. In its upper reaches, a river might flow across bedrock but further downstream this is much less likely. Alluvial rivers are flanked by a floodplain, the channel cut into material that the river itself has transported and deposited. The floodplain is a relatively flat area which is periodically inundated during periods of high flow […] When water spills out onto the floodplain, the velocity of flow decreases and sediment begins to settle, causing fresh deposits of alluvium on the floodplain. Certain patterns of alluvial river channels have been seen on every continent and are divided at the most basic level into straight, meandering, and braided. Straight channels are rare in nature […] The most common river channel pattern is a series of bends known as meanders […]. Meanders develop because erosion becomes concentrated on the outside of a bend and deposition on the inside. As these linked processes continue, the meander bend can become more emphasized, and a particularly sinuous meander may eventually be cut off at its narrow neck, leaving an oxbow lake as evidence of its former course. Alluvial meanders migrate, both down and across their floodplain […]. This lateral migration is an important process in the formation of floodplains. Braided rivers can be recognized by their numerous flows that split off and rejoin each other to give a braided appearance. These multiple intersecting flows are separated by small and often temporary islands of alluvium. Braided rivers typically carry abundant sediment and are found in areas with a fairly steep gradient, often near mountainous regions.”

“The meander cut-off creating an oxbow lake is one way in which a channel makes an abrupt change of course, a characteristic of some alluvial rivers that is generally referred to as ‘avulsion’. It is a natural process by which flow diverts out of an established channel into a new permanent course on the adjacent floodplain, a change in course that can present a major threat to human activities. Rapid, frequent, and often significant avulsions have typified many rivers on the Indo-Gangetic plains of South Asia. In India, the Kosi River has migrated about 100 kilometres westward in the last 200 years […] Why a river suddenly avulses is not understood completely, but earthquakes play a part on the Indo-Gangetic plains. […] Most rivers eventually flow into the sea or a lake, where they deposit sediment which builds up into a landform known as a delta. The name comes from the Greek letter delta, Δ, shaped like a triangle or fan, one of the classic shapes a delta can take. […] Material laid down at the end of a river can continue underwater far beyond the delta as a deep-sea fan.”

“The organisms found in fluvial ecosystems are commonly classified according to the methods they use to gather food and feed. ‘Shredders’ are organisms that consume small sections of leaves; ‘grazers’ and ‘scrapers’ consume algae from the surfaces of objects such as stones and large plants; ‘collectors’ feed on fine organic matter produced by the breakdown of other once-living things; and ‘predators’ eat other living creatures. The relative importance of these groups of creatures typically changes as one moves from the headwaters of a river to stretches further downstream […] small headwater streams are often shaded by overhanging vegetation which limits sunlight and photosynthesis but contributes organic matter by leaf fall. Shredders and collectors typically dominate in these stretches, but further downstream, where the river is wider and thus receives more sunlight and less leaf fall, the situation is quite different. […] There’s no doubting the numerous fundamental ways in which a river’s biology is dependent upon its physical setting, particularly in terms of climate, geology, and topography. Nevertheless, these relationships also work in reverse. The biological components of rivers also act to shape the physical environment, particularly at more local scales. Beavers provide a good illustration of the ways in which the physical structure of rivers can be changed profoundly by large mammals. […] rivers can act both as corridors for species dispersal but also as barriers to the dispersal of organisms.”

 

Drainage system (geomorphology).
Perennial stream.
Nilometer.
Mekong.
Riverscape.
Oxbow lake.
Channel River.
Long profile of a river.
Bengal fan.
River continuum concept.
Flood pulse concept.
Riparian zone.

 

January 11, 2018 Posted by | Books, Ecology, Geography, Geology

A few diabetes papers of interest

i. Type 2 Diabetes in the Real World: The Elusive Nature of Glycemic Control.

“Despite U.S. Food and Drug Administration (FDA) approval of over 40 new treatment options for type 2 diabetes since 2005, the latest data from the National Health and Nutrition Examination Survey show that the proportion of patients achieving glycated hemoglobin (HbA1c) <7.0% (<53 mmol/mol) remains around 50%, with a negligible decline between the periods 2003–2006 and 2011–2014. The Healthcare Effectiveness Data and Information Set reports even more alarming rates, with only about 40% and 30% of patients achieving HbA1c <7.0% (<53 mmol/mol) in the commercially insured (HMO) and Medicaid populations, respectively, again with virtually no change over the past decade. A recent retrospective cohort study using a large U.S. claims database explored why clinical outcomes are not keeping pace with the availability of new treatment options. The study found that HbA1c reductions fell far short of those reported in randomized clinical trials (RCTs), with poor medication adherence emerging as the key driver behind the disconnect. In this Perspective, we examine the implications of these findings in conjunction with other data to highlight the discrepancy between RCT findings and the real world, all pointing toward the underrealized promise of FDA-approved therapies and the critical importance of medication adherence. While poor medication adherence is not a new issue, it has yet to be effectively addressed in clinical practice — often, we suspect, because it goes unrecognized. To support the busy health care professional, innovative approaches are sorely needed.”

“To better understand the differences between usual care and clinical trial HbA1c results, multivariate regression analysis assessed the relative contributions of key biobehavioral factors, including baseline patient characteristics, drug therapy, and medication adherence (21). Significantly, the key driver was poor medication adherence, accounting for 75% of the gap […]. Adherence was defined […] as the filling of one’s diabetes prescription often enough to cover ≥80% of the time one was recommended to be taking the medication (34). By this metric, proportion of days covered (PDC) ≥80%, only 29% of patients were adherent to GLP-1 RA treatment and 37% to DPP-4 inhibitor treatment. […] These data are consistent with previous real-world studies, which have demonstrated that poor medication adherence to both oral and injectable antidiabetes agents is very common (35–37). For example, a retrospective analysis [of] adults initiating oral agents in the DPP-4 inhibitor (n = 61,399), sulfonylurea (n = 134,961), and thiazolidinedione (n = 42,012) classes found that adherence rates, as measured by PDC ≥80% at the 1-year mark after the initial prescription, were below 50% for all three classes, at 47.3%, 41.2%, and 36.7%, respectively (36). Rates dropped even lower at the 2-year follow-up (36)”

“Our current ability to assess adherence and persistence is based primarily on review of pharmacy records, which may underestimate the extent of the problem. For example, using the definition of adherence of the Centers for Medicare & Medicaid Services — PDC ≥80% — a patient could miss up to 20% of days covered and still be considered adherent. In retrospective studies of persistence, the permissible gap after the last expected refill date often extends up to 90 days (39,40). Thus, a patient may have a gap of up to 90 days and still be considered persistent.

Additionally, one must also consider the issue of primary nonadherence; adherence and persistence studies typically only include patients who have completed a first refill. A recent study of e-prescription data among 75,589 insured patients found that nearly one-third of new e-prescriptions for diabetes medications were never filled (41). Finally, none of these measures take into account if the patient is actually ingesting or injecting the medication after acquiring his or her refills.”
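The PDC metric these studies rely on can be computed directly from pharmacy fill records: count the days in the observation window on which the patient had medication on hand, divide by the window length, and apply the ≥80% cut-off. A minimal sketch with hypothetical fill dates and supplies (real claims-based analyses handle overlapping fills, switching, and censoring in more detail):

```python
from datetime import date, timedelta

# Proportion of days covered (PDC) from (hypothetical) pharmacy fill records.
fills = [(date(2017, 1, 1), 30), (date(2017, 2, 15), 30), (date(2017, 4, 20), 90)]  # (fill date, days supplied)
window_start, window_end = date(2017, 1, 1), date(2017, 12, 31)

covered = set()
for fill_date, days_supply in fills:
    for offset in range(days_supply):
        day = fill_date + timedelta(days=offset)
        if window_start <= day <= window_end:
            covered.add(day)       # a day covered by overlapping fills only counts once

window_days = (window_end - window_start).days + 1
pdc = len(covered) / window_days
label = "adherent" if pdc >= 0.80 else "non-adherent"
print(f"PDC = {pdc:.0%} -> {label} by the >=80% rule")
```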

“Acknowledging and addressing the problem of poor medication adherence is pivotal because of the well-documented dire consequences: a greater likelihood of long-term complications, more frequent hospitalizations, higher health care costs, and elevated mortality rates (42–45). In patients younger than 65, hospitalization risk in one study (n = 137,277) was found to be 30% at the lowest level of adherence to antidiabetes medications (1–19%) versus 13% at the highest adherence quintile (80–100%) […]. In patients over 65, a separate study (n = 123,235) found that all-cause hospitalization risk was 37.4% in adherent cohorts (PDC ≥80%) versus 56.2% in poorly adherent cohorts (PDC <20%) (45). […] Furthermore, for every 1,000 patients who increased adherence to their antidiabetes medications by just 1%, the total medical cost savings was estimated to be $65,464 over 3 years (45). […] “for reasons that are still unclear, the N.A. [North American] patient groups tend to have lower compliance and adherence compared to global rates during large cardiovascular studies” (46,47).”

“There are many potential contributors to poor medication adherence, including depressive affect, negative treatment perceptions, lack of patient-physician trust, complexity of the medication regimen, tolerability, and cost (48). […] A recent review of interventions addressing problematic medication adherence in type 2 diabetes found that few strategies have been shown consistently to have a marked positive impact, particularly with respect to HbA1c lowering, and no single intervention was identified that could be applied successfully to all patients with type 2 diabetes (53). Additional evidence indicates that improvements resulting from the few effective interventions, such as pharmacy-based counseling or nurse-managed home telemonitoring, often wane once the programs end (54,55). We suspect that the efficacy of behavioral interventions to address medication adherence will continue to be limited until there are more focused efforts to address three common and often unappreciated patient obstacles. First, taking diabetes medications is a burdensome and often difficult activity for many of our patients. Rather than just encouraging patients to do a better job of tolerating this burden, more work is needed to make the process easier and more convenient. […] Second, poor medication adherence often represents underlying attitudinal problems that may not be a strictly behavioral issue. Specifically, negative beliefs about prescribed medications are pervasive among patients, and behavioral interventions cannot be effective unless these beliefs are addressed directly (35). […] Third, the issue of access to medications remains a primary concern. A study by Kurlander et al. (51) found that patients selectively forgo medications because of cost; however, noncost factors, such as beliefs, satisfaction with medication-related information, and depression, are also influential.”

ii. Diabetes Research and Care Through the Ages. An overview article which might be of particular interest to people who aren’t very familiar with the history of diabetes research and treatment (a topic which is also very nicely covered in Tattersall’s book). Although it is largely a historical review, it also includes many observations about e.g. current (and future?) practice. Some random quotes:

“Arnoldo Cantani established a new strict level of treatment (9). He isolated his patients “under lock and key, and allowed them absolutely no food but lean meat and various fats. In the less severe cases, eggs, liver, and shell-fish were permitted. For drink the patients received water, plain or carbonated, and dilute alcohol for those accustomed to liquors, the total fluid intake being limited to one and one-half to two and one-half liters per day” (6).

Bernhard Naunyn encouraged a strict carbohydrate-free diet (6,10). He locked patients in their rooms for 5 months when necessary for “sugar-freedom” (6).” […let’s just say that treatment options have changed slightly over time – US]

“The characteristics of insulin preparations include the purity of the preparation, the concentration of insulin, the species of origin, and the time course of action (onset, peak, duration) (25). From the 1930s to the early 1950s, one of the major efforts made was to develop an insulin with extended action […]. Most preparations contained 40 (U-40) or 80 (U-80) units of insulin per mL, with U-10 and U-20 eliminated in the early 1940s. U-100 was introduced in 1973 and was meant to be a standard concentration, although U-500 had been available since the early 1950s for special circumstances. Preparations were either of mixed beef and pork origin, pure beef, or pure pork. There were progressive improvements in the purity of preparations as chemical techniques improved. Prior to 1972, conventional preparations contained 8% noninsulin proteins. […] In the early 1980s, “human” insulins were introduced (26). These were made either by recombinant DNA technology in bacteria (Escherichia coli) or yeast (Saccharomyces cerevisiae) or by enzymatic conversion of pork insulin to human insulin, since pork differed by only one amino acid from human insulin. The powerful nature of recombinant DNA technology also led to the development of insulin analogs designed for specific effects. These include rapid-acting insulin analogs and basal insulin analogs.”

“Until 1996, the only oral medications available were biguanides and sulfonylureas. Since that time, there has been an explosion of new classes of oral and parenteral preparations. […] The management of type 2 diabetes (T2D) has undergone rapid change with the introduction of several new classes of glucose-lowering therapies. […] the treatment guidelines are generally clear in the context of using metformin as the first oral medication for T2D and present a menu approach with respect to the second and third glucose-lowering medication (30–32). In order to facilitate this decision, the guidelines list the characteristics of each medication including side effects and cost, and the health care provider is expected to make a choice that would be most suited for patient comorbidities and health care circumstances. This can be confusing and contributes to the clinical inertia characteristic of the usual management of T2D (33).”

“Perhaps the most frustrating barrier to optimizing diabetes management is the frequent occurrence of clinical inertia (whenever the health care provider does not initiate or intensify therapy appropriately and in a timely fashion when therapeutic goals are not reached). More broadly, the failure to advance therapy in an appropriate manner can be traced to physician behaviors, patient factors, or elements of the health care system. […] Despite clear evidence from multiple studies, health care providers fail to fully appreciate that T2D is a progressive disease. T2D is associated with ongoing β-cell failure and, as a consequence, we can safely predict that for the majority of patients, glycemic control will deteriorate with time despite metformin therapy (35). Continued observation and reinforcement of the current therapeutic regimen is not likely to be effective. As an example of real-life clinical inertia for patients with T2D on monotherapy metformin and an HbA1c of 7 to <8%, it took on the average 19 months before additional glucose-lowering therapy was introduced (36). The fear of hypoglycemia and weight gain are appropriate concerns for both patient and physician, but with newer therapies these undesirable effects are significantly diminished. In addition, health care providers must appreciate that achieving early and sustained glycemic control has been demonstrated to have long-term benefits […]. Clinicians have been schooled in the notion of a stepwise approach to therapy and are reluctant to initiate combination therapy early in the course of T2D, even if the combination intervention is formulated as a fixed-dose combination. […] monotherapy metformin failure rates with a starting HbA1c >7% are ∼20% per year (35). […] To summarize the current status of T2D at this time, it should be clearly emphasized that, first and foremost, T2D is characterized by a progressive deterioration of glycemic control. A stepwise medication introduction approach results in clinical inertia and frequently fails to meet long-term treatment goals. Early/initial combination therapies that are not associated with hypoglycemia and/or weight gain have been shown to be safe and effective. The added value of reducing CV outcomes with some of these newer medications should elevate them to a more prominent place in the treatment paradigm.”
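The quoted ‘∼20% per year’ metformin monotherapy failure rate makes the progressive nature of T2D easy to illustrate. A minimal sketch, assuming (purely for illustration) a constant annual failure risk acting independently each year:

annual_failure_rate = 0.20     # ~20%/year metformin monotherapy failure, as quoted
for years in range(1, 6):
    cumulative_failure = 1 - (1 - annual_failure_rate) ** years
    print(years, round(cumulative_failure, 2))
# 1 0.2, 2 0.36, 3 0.49, 4 0.59, 5 0.67: under this assumption roughly half of
# patients would be expected to need additional therapy within about three years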

iii. Use of Adjuvant Pharmacotherapy in Type 1 Diabetes: International Comparison of 49,996 Individuals in the Prospective Diabetes Follow-up and T1D Exchange Registries.

“The majority of those with type 1 diabetes (T1D) have suboptimal glycemic control (14); therefore, use of adjunctive pharmacotherapy to improve control has been of clinical interest. While noninsulin medications approved for type 2 diabetes have been reported in T1D research and clinical practice (5), little is known about their frequency of use. The T1D Exchange (T1DX) registry in the U.S. and the Prospective Diabetes Follow-up (DPV) registry in Germany and Austria are two large consortia of diabetes centers; thus, they provide a rich data set to address this question.

For the analysis, 49,996 pediatric and adult patients with diabetes duration ≥1 year and a registry update from 1 April 2015 to 1 July 2016 were included (19,298 individuals from 73 T1DX sites and 30,698 individuals from 354 DPV sites). Adjuvant medication use (metformin, glucagon-like peptide 1 [GLP-1] receptor agonists, dipeptidyl peptidase 4 [DPP-4] inhibitors, sodium–glucose cotransporter 2 [SGLT2] inhibitors, and other noninsulin diabetes medications including pramlintide) was extracted from participant medical records. […] Adjunctive agents, whose proposed benefits may include the ability to improve glycemic control, reduce insulin doses, promote weight loss, and suppress dysregulated postprandial glucagon secretion, have had little penetrance as part of the daily medical regimen of those in the registries studied. […] The use of any adjuvant medication was 5.4% in T1DX and 1.6% in DPV (P < 0.001). Metformin was the most commonly reported medication in both registries, with 3.5% in the T1DX and 1.3% in the DPV (P < 0.001). […] Use of adjuvant medication was associated with older age, higher BMI, and longer diabetes duration in both registries […] it is important to note that registry data did not capture the intent of adjuvant medications, which may have been to treat polycystic ovarian syndrome in women […here’s a relevant link, US].”
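As a rough sanity check on the quoted between-registry difference in adjuvant medication use (5.4% vs. 1.6%), a simple two-proportion z-test on the reported group sizes gives an enormous test statistic. The sketch below uses only the figures quoted above and is not the authors’ actual statistical analysis, which may well have been adjusted or stratified differently.

from math import sqrt, erfc

n1, p1 = 19_298, 0.054   # T1DX: any adjuvant medication use (as quoted)
n2, p2 = 30_698, 0.016   # DPV

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_two_sided = erfc(abs(z) / sqrt(2))   # two-sided normal approximation
print(round(z, 1), p_two_sided)        # z is about 24; the P value is far below the quoted 0.001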

iv. Prevalence of and Risk Factors for Diabetic Peripheral Neuropathy in Youth With Type 1 and Type 2 Diabetes: SEARCH for Diabetes in Youth Study. I recently covered a closely related paper here (paper # 2) but the two papers cover different data sets so I decided it would be worth including this one in this post anyway. Some quotes:

“We previously reported results from a small pilot study comparing the prevalence of DPN in a subset of youth enrolled in the SEARCH for Diabetes in Youth (SEARCH) study and found that 8.5% of 329 youth with T1D (mean ± SD age 15.7 ± 4.3 years and diabetes duration 6.2 ± 0.9 years) and 25.7% of 70 youth with T2D (age 21.6 ± 4.1 years and diabetes duration 7.6 ± 1.8 years) had evidence of DPN (9). […this is the paper I previously covered here, US] Recently, we also reported the prevalence of microvascular and macrovascular complications in youth with T1D and T2D in the entire SEARCH cohort (10).

In the current study, we examined the cross-sectional and longitudinal risk factors for DPN. The aims were 1) to estimate prevalence of DPN in youth with T1D and T2D, overall and by age and diabetes duration, and 2) to identify risk factors (cross-sectional and longitudinal) associated with the presence of DPN in a multiethnic cohort of youth with diabetes enrolled in the SEARCH study.”

“The SEARCH Cohort Study enrolled 2,777 individuals. For this analysis, we excluded participants aged <10 years (n = 134), those with no antibody measures for etiological definition of diabetes (n = 440), and those with incomplete neuropathy assessment […] (n = 213), which reduced the analysis sample size to 1,992 […] There were 1,734 youth with T1D and 258 youth with T2D who participated in the SEARCH study and had complete data for the variables of interest. […] Seven percent of the participants with T1D and 22% of those with T2D had evidence of DPN.”

“Among youth with T1D, those with DPN were older (21 vs. 18 years, P < 0.0001), had a longer duration of diabetes (8.7 vs. 7.8 years, P < 0.0001), and had higher DBP (71 vs. 69 mmHg, P = 0.02), BMI (26 vs. 24 kg/m2, P < 0.001), and LDL-c levels (101 vs. 96 mg/dL, P = 0.01); higher triglycerides (85 vs. 74 mg/dL, P = 0.005); and lower HDL-c levels (51 vs. 55 mg/dL, P = 0.01) compared to those without DPN. The prevalence of DPN was 5% among nonsmokers vs. 10% among the current and former smokers (P = 0.001). […] Among youth with T2D, those with DPN were older (23 vs. 22 years, P = 0.01), had longer duration of diabetes (8.6 vs. 7.6 years; P = 0.002), and had lower HDL-c (40 vs. 43 mg/dL, P = 0.04) compared with those without DPN. The prevalence of DPN was higher among males than among females: 30% of males had DPN compared with 18% of females (P = 0.02). The prevalence of DPN was twofold higher in current smokers (33%) compared with nonsmokers (15%) and former smokers (17%) (P = 0.01). […] [T]he prevalence of DPN was further assessed by 5-year increment of diabetes duration in individuals with T1D or T2D […]. There was an approximately twofold increase in the prevalence of DPN with an increase in duration of diabetes from 5–10 years to >10 years for both the T1D group (5–13%) (P < 0.0001) and the T2D group (19–36%) (P = 0.02). […] in an unadjusted logistic regression model, youth with T2D were four times more likely to develop DPN compared with those with T1D, and though this association was attenuated, it remained significant independent of age, sex, height, and glycemic control (OR 2.99 [1.91; 4.67], P < 0.001)”.
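The unadjusted ‘four times more likely’ figure can be roughly reproduced from the prevalences reported above (7% DPN in T1D, 22% in T2D); the back-of-envelope odds ratio below is my own calculation, not the paper’s regression model.

p_dpn_t1d, p_dpn_t2d = 0.07, 0.22          # DPN prevalence by diabetes type, as quoted
odds = lambda p: p / (1 - p)
unadjusted_or = odds(p_dpn_t2d) / odds(p_dpn_t1d)
print(round(unadjusted_or, 2))             # ~3.75, i.e. roughly a fourfold difference in odds
# the adjusted OR of 2.99 quoted above additionally controls for age, sex, height, and glycemic control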

“The prevalence estimates for DPN found in our study for youth with T2D are similar to those in the Australian cohort (8) but lower for youth with T1D than those reported in the Danish (7) and Australian (8) cohorts. The nationwide Danish Study Group for Diabetes in Childhood reported a prevalence of 62% among 339 adolescents and youth with T1D (age 12–27 years, duration 9–25 years, and HbA1c 9.7 ± 1.7%) using the vibration perception threshold to assess DPN (7). The higher prevalence in this cohort compared with ours (62 vs. 7%) could be due to the longer duration of diabetes (9–25 vs. 5–13 years) and reliance on a single measure of neuropathy (vibration perception threshold) as opposed to our use of the MNSI, which includes vibration as well as other indicators of neuropathy. In the Australian study, Eppens et al. (8) reported abnormalities in peripheral nerve function in 27% of the 1,433 adolescents with T1D (median age 15.7 years, median diabetes duration 6.8 years, and mean HbA1c 8.5%) and 21% of the 68 adolescents with T2D (median age 15.3 years, median diabetes duration 1.3 years, and mean HbA1c 7.3%) based on thermal and vibration perception threshold. These data are thus reminiscent of the persistent inconsistencies in the definition of DPN, which are reflected in the wide range of prevalence estimates being reported.”

“The alarming rise in rates of DPN for every 5-year increase in duration, coupled with poor glycemic control and dyslipidemia, in this cohort reinforces the need for clinicians rendering care to youth with diabetes to be vigilant in screening for DPN and identifying any risk factors that could potentially be modified to alter the course of the disease (2830). The modifiable risk factors that could be targeted in this young population include better glycemic control, treatment of dyslipidemia, and smoking cessation (29,30) […]. The sharp increase in rates of DPN over time is a reminder that DPN is one of the complications of diabetes that must be a part of the routine annual screening for youth with diabetes.”

v. Diabetes and Hypertension: A Position Statement by the American Diabetes Association.

“Hypertension is common among patients with diabetes, with the prevalence depending on type and duration of diabetes, age, sex, race/ethnicity, BMI, history of glycemic control, and the presence of kidney disease, among other factors (13). Furthermore, hypertension is a strong risk factor for atherosclerotic cardiovascular disease (ASCVD), heart failure, and microvascular complications. ASCVD — defined as acute coronary syndrome, myocardial infarction (MI), angina, coronary or other arterial revascularization, stroke, transient ischemic attack, or peripheral arterial disease presumed to be of atherosclerotic origin — is the leading cause of morbidity and mortality for individuals with diabetes and is the largest contributor to the direct and indirect costs of diabetes. Numerous studies have shown that antihypertensive therapy reduces ASCVD events, heart failure, and microvascular complications in people with diabetes (48). Large benefits are seen when multiple risk factors are addressed simultaneously (9). There is evidence that ASCVD morbidity and mortality have decreased for people with diabetes since 1990 (10,11) likely due in large part to improvements in blood pressure control (1214). This Position Statement is intended to update the assessment and treatment of hypertension among people with diabetes, including advances in care since the American Diabetes Association (ADA) last published a Position Statement on this topic in 2003 (3).”

“Hypertension is defined as a sustained blood pressure ≥140/90 mmHg. This definition is based on unambiguous data that levels above this threshold are strongly associated with ASCVD, death, disability, and microvascular complications (1,2,2427) and that antihypertensive treatment in populations with baseline blood pressure above this range reduces the risk of ASCVD events (46,28,29). The “sustained” aspect of the hypertension definition is important, as blood pressure has considerable normal variation. The criteria for diagnosing hypertension should be differentiated from blood pressure treatment targets.

Hypertension diagnosis and management can be complicated by two common conditions: masked hypertension and white-coat hypertension. Masked hypertension is defined as a normal blood pressure in the clinic or office (<140/90 mmHg) but an elevated home blood pressure of ≥135/85 mmHg (30); the lower home blood pressure threshold is based on outcome studies (31) demonstrating that lower home blood pressures correspond to higher office-based measurements. White-coat hypertension is elevated office blood pressure (≥140/90 mmHg) and normal (untreated) home blood pressure (<135/85 mmHg) (32). Identifying these conditions with home blood pressure monitoring can help prevent overtreatment of people with white-coat hypertension who are not at elevated risk of ASCVD and, in the case of masked hypertension, allow proper use of medications to reduce side effects during periods of normal pressure (33,34).”
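The masked/white-coat distinction quoted above amounts to a simple two-by-two decision rule on office and home readings. Here is a minimal sketch using the thresholds as quoted (office ≥140/90 mmHg, home ≥135/85 mmHg); the function and its labels are illustrative only, and a real diagnosis of course requires sustained, repeated measurements.

def classify_bp(office_sys, office_dia, home_sys, home_dia):
    """Crude pattern from one office and one home reading, using the quoted thresholds."""
    office_high = office_sys >= 140 or office_dia >= 90
    home_high = home_sys >= 135 or home_dia >= 85
    if office_high and home_high:
        return "sustained hypertension"
    if office_high:
        return "white-coat hypertension"
    if home_high:
        return "masked hypertension"
    return "normotensive on these readings"

print(classify_bp(148, 92, 128, 80))   # white-coat hypertension
print(classify_bp(134, 84, 142, 88))   # masked hypertension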

“Diabetic autonomic neuropathy or volume depletion can cause orthostatic hypotension (35), which may be further exacerbated by antihypertensive medications. The definition of orthostatic hypotension is a decrease in systolic blood pressure of 20 mmHg or a decrease in diastolic blood pressure of 10 mmHg within 3 min of standing when compared with blood pressure from the sitting or supine position (36). Orthostatic hypotension is common in people with type 2 diabetes and hypertension and is associated with an increased risk of mortality and heart failure (37).

It is important to assess for symptoms of orthostatic hypotension to individualize blood pressure goals, select the most appropriate antihypertensive agents, and minimize adverse effects of antihypertensive therapy.”
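The orthostatic hypotension definition is a similar threshold rule (a systolic drop of 20 mmHg or a diastolic drop of 10 mmHg within 3 minutes of standing). A small illustrative check with made-up readings:

def orthostatic_hypotension(supine_sys, supine_dia, standing_sys, standing_dia):
    """Systolic drop of >=20 mmHg or diastolic drop of >=10 mmHg on standing, as quoted."""
    return (supine_sys - standing_sys) >= 20 or (supine_dia - standing_dia) >= 10

print(orthostatic_hypotension(138, 82, 115, 76))   # True: 23 mmHg systolic drop
print(orthostatic_hypotension(138, 82, 126, 78))   # False: 12/4 mmHg drops are below the thresholds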

“Taken together, […] meta-analyses consistently show that treating patients with baseline blood pressure ≥140 mmHg to targets <140 mmHg is beneficial, while more intensive targets may offer additional though probably less robust benefits. […] Overall, compared with people without diabetes, the relative benefits of antihypertensive treatment are similar, and absolute benefits may be greater (5,8,40). […] Multiple-drug therapy is often required to achieve blood pressure targets, particularly in the setting of diabetic kidney disease. However, the use of both ACE inhibitors and ARBs in combination is not recommended given the lack of added ASCVD benefit and increased rate of adverse events — namely, hyperkalemia, syncope, and acute kidney injury (7173). Titration of and/or addition of further blood pressure medications should be made in a timely fashion to overcome clinical inertia in achieving blood pressure targets. […] there is an absence of high-quality data available to guide blood pressure targets in type 1 diabetes. […] Of note, diastolic blood pressure, as opposed to systolic blood pressure, is a key variable predicting cardiovascular outcomes in people under age 50 years without diabetes and may be prioritized in younger adults (46,47). Though convincing data are lacking, younger adults with type 1 diabetes might more easily achieve intensive blood pressure levels and may derive substantial long-term benefit from tight blood pressure control.”

“Lifestyle management is an important component of hypertension treatment because it lowers blood pressure, enhances the effectiveness of some antihypertensive medications, promotes other aspects of metabolic and vascular health, and generally leads to few adverse effects. […] Lifestyle therapy consists of reducing excess body weight through caloric restriction, restricting sodium intake (<2,300 mg/day), increasing consumption of fruits and vegetables […] and low-fat dairy products […], avoiding excessive alcohol consumption […] (53), smoking cessation, reducing sedentary time (54), and increasing physical activity levels (55). These lifestyle strategies may also positively affect glycemic and lipid control and should be encouraged in those with even mildly elevated blood pressure.”

“Initial treatment for hypertension should include drug classes demonstrated to reduce cardiovascular events in patients with diabetes: ACE inhibitors (65,66), angiotensin receptor blockers (ARBs) (65,66), thiazide-like diuretics (67), or dihydropyridine CCBs (68). For patients with albuminuria (urine albumin-to-creatinine ratio [UACR] ≥30 mg/g creatinine), initial treatment should include an ACE inhibitor or ARB in order to reduce the risk of progressive kidney disease […]. In the absence of albuminuria, risk of progressive kidney disease is low, and ACE inhibitors and ARBs have not been found to afford superior cardioprotection when compared with other antihypertensive agents (69). β-Blockers may be used for the treatment of coronary disease or heart failure but have not been shown to reduce mortality as blood pressure–lowering agents in the absence of these conditions (5,70).”

vi. High Illicit Drug Abuse and Suicide in Organ Donors With Type 1 Diabetes.

“Organ donors with type 1 diabetes represent a unique population for research. Through a combination of immunological, metabolic, and physiological analyses, researchers utilizing such tissues seek to understand the etiopathogenic events that result in this disorder. The Network for Pancreatic Organ Donors with Diabetes (nPOD) program collects, processes, and distributes pancreata and disease-relevant tissues to investigators throughout the world for this purpose (1). Information is also available, through medical records of organ donors, related to causes of death and psychological factors, including drug use and suicide, that impact life with type 1 diabetes.

We reviewed the terminal hospitalization records for the first 100 organ donors with type 1 diabetes in the nPOD database, noting cause, circumstance, and mechanism of death; laboratory results; and history of illicit drug use. Donors were 45% female and 79% Caucasian. Mean age at time of death was 28 years (range 4–61) with mean disease duration of 16 years (range 0.25–52).”

“Documented suicide was found in 8% of the donors, with an average age at death of 21 years and average diabetes duration of 9 years. […] Similarly, a type 1 diabetes registry from the U.K. found that 6% of subjects’ deaths were attributed to suicide (2). […] Additionally, we observed a high rate of illicit substance abuse: 32% of donors reported or tested positive for illegal substances (excluding marijuana), and multidrug use was common. Cocaine was the most frequently abused substance. Alcohol use was reported in 35% of subjects, with marijuana use in 27%. By comparison, 16% of deaths in the U.K. study were deemed related to drug misuse (2).”

“We fully recognize the implicit biases of an organ donor–based population, which may not be […’may not be’ – well, I guess that’s one way to put it! – US] directly comparable to the general population. Nevertheless, the high rate of suicide and drug use should continue to spur our energy and resources toward caring for the emotional and psychological needs of those living with type 1 diabetes. The burden of type 1 diabetes extends far beyond checking blood glucose and administering insulin.”

January 10, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Studies

Depression (II)

I have added some more quotes from the last half of the book as well as some more links to relevant topics below.

“The early drugs used in psychiatry were sedatives, as calming a patient was probably the only treatment that was feasible and available. Also, it made it easier to manage large numbers of individuals with small numbers of staff at the asylum. Morphine, hyoscine, chloral, and later bromide were all used in this way. […] Insulin coma therapy came into vogue in the 1930s following the work of Manfred Sakel […] Sakel initially proposed this treatment as a cure for schizophrenia, but its use gradually spread to mood disorders to the extent that asylums in Britain opened so-called insulin units. […] Recovery from the coma required administration of glucose, but complications were common and death rates ranged from 1–10 per cent. Insulin coma therapy was initially viewed as having tremendous benefits, but later re-examinations have highlighted that the results could also be explained by a placebo effect associated with the dramatic nature of the process or, tragically, because deprivation of glucose supplies to the brain may have reduced the person’s reactivity because it had induced permanent damage.”

“[S]ome respected scientists and many scientific journals remain ambivalent about the empirical evidence for the benefits of psychological therapies. Part of the reticence appears to result from the lack of very large-scale clinical trials of therapies (compared to international, multi-centre studies of medication). However, a problem for therapy research is that there is no large-scale funding from big business for therapy trials […] It is hard to implement optimum levels of quality control in research studies of therapies. A tablet can have the same ingredients and be prescribed in almost exactly the same way in different treatment centres and different countries. If a patient does not respond to this treatment, the first thing we can do is check if they receive the right medication in the correct dose for a sufficient period of time. This is much more difficult to achieve with psychotherapy and fuels concerns about how therapy is delivered and potential biases related to researcher allegiance (i.e. clinical centres that invent a therapy show better outcomes than those that did not) and generalizability (our ability to replicate the therapy model exactly in a different place with different therapists). […] Overall, the ease of prescribing a tablet, the more traditional evidence-base for the benefits of medication, and the lack of availability of trained therapists in some regions means that therapy still plays second fiddle to medications in the majority of treatment guidelines for depression. […] The mainstay of treatments offered to individuals with depression has changed little in the last thirty to forty years. Antidepressants are the first-line intervention recommended in most clinical guidelines”.

“[W]hilst some cases of mild–moderate depression can benefit from antidepressants (e.g. chronic mild depression of several years’ duration can often respond to medication), it is repeatedly shown that the only group who consistently benefit from antidepressants are those with severe depression. The problem is that in the real world, most antidepressants are actually prescribed for less severe cases, that is, the group least likely to benefit; which is part of the reason why the argument about whether antidepressants work is not going to go away any time soon.”

“The economic argument for therapy can only be sustained if it is shown that the long-term outcome of depression (fewer relapses and better quality of life) is improved by receiving therapy instead of medication or by receiving both therapy and medication. Despite claims about how therapies such as CBT, behavioural activation, IPT, or family therapy may work, the reality is that many of the elements included in these therapies are the same as elements described in all the other effective therapies (sometimes referred to as empirically supported therapies). The shared elements include forming a positive working alliance with the depressed person, sharing the model and the plan for therapy with the patient from day one, and helping the patient engage in active problem-solving, etc. Given the degree of overlap, it is hard to make a real case for using one empirically supported therapy instead of another. Also, there are few predictors (besides symptom severity and personal preference) that consistently show who will respond to one of these therapies rather than to medication. […] One of the reasons for some scepticism about the value of therapies for treating depression is that it has proved difficult to demonstrate exactly what mediates the benefits of these interventions. […] despite the enthusiasm for mindfulness, there were fewer than twenty high-quality research trials on its use in adults with depression by the end of 2015 and most of these studies had fewer than 100 participants. […] exercise improves the symptoms of depression compared to no treatment at all, but the currently available studies on this topic are less than ideal (with many problems in the design of the study or sample of participants included in the clinical trial). […] Exercise is likely to be a better option for those individuals whose mood improves from participating in the experience, rather than someone who is so depressed that they feel further undermined by the process or feel guilty about ‘not trying hard enough’ when they attend the programme.”

“Research […] indicates that treatment is important and a study from the USA in 2005 showed that those who took the prescribed antidepressant medications had a 20 per cent lower rate of absenteeism than those who did not receive treatment for their depression. Absence from work is only one half of the depression–employment equation. In recent times, a new concept ‘presenteeism’ has been introduced to try to describe the problem of individuals who are attending their place of work but have reduced efficiency (usually because their functioning is impaired by illness). As might be imagined, presenteeism is a common issue in depression and a study in the USA in 2007 estimated that a depressed person will lose 5–8 hours of productive work every week because the symptoms they experience directly or indirectly impair their ability to complete work-related tasks. For example, depression was associated with reduced productivity (due to lack of concentration, slowed physical and mental functioning, loss of confidence), and impaired social functioning”.

“Health economists do not usually restrict their estimates of the cost of a disorder simply to the funds needed for treatment (i.e. the direct health and social care costs). A comprehensive economic assessment also takes into account the indirect costs. In depression these will include costs associated with employment issues (e.g. absenteeism and presenteeism; sickness benefits), costs incurred by the patient’s family or significant others (e.g. associated with time away from work to care for someone), and costs arising from premature death such as depression-related suicides (so-called mortality costs). […] Studies from around the world consistently demonstrate that the direct health care costs of depression are dwarfed by the indirect costs. […] Interestingly, absenteeism is usually estimated to be about one-quarter of the costs of presenteeism.”

Jakob Klaesi. António Egas Moniz. Walter Jackson Freeman II.
Electroconvulsive therapy.
Psychosurgery.
Vagal nerve stimulation.
Chlorpromazine. Imipramine. Tricyclic antidepressant. MAOIs. SSRIs. John Cade. Mogens Schou. Lithium carbonate.
Psychoanalysis. CBT.
Thomas Szasz.
Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration (Kirsch et al.).
Chronobiology. Chronobiotics. Melatonin.
Eric Kandel. BDNF.
The global burden of disease (Murray & Lopez) (the author discusses some of the data included in that publication).

January 8, 2018 Posted by | Books, Health Economics, Medicine, Pharmacology, Psychiatry, Psychology

Endocrinology (part I – thyroid)

Handbooks like these are difficult to blog, but I decided to try anyway. The first 100 pages or so of the book deals with the thyroid gland. Some observations of interest below.

“Biosynthesis of thyroid hormones requires iodine as substrate. […] The thyroid is the only source of T4. The thyroid secretes 20% of circulating T3; the remainder is generated in extraglandular tissues by the conversion of T4 to T3 […] In the blood, T4 and T3 are almost entirely bound to plasma proteins. […] Only the free or unbound hormone is available to tissues. The metabolic state correlates more closely with the free than the total hormone concentration in the plasma. The relatively weak binding of T3 accounts for its more rapid onset and offset of action. […] The levels of thyroid hormone in the blood are tightly controlled by feedback mechanisms involved in the hypothalamo-pituitary-thyroid (HPT) axis“.

“Annual check of thyroid function [is recommended] in the annual review of diabetic patients.”

“The term thyrotoxicosis denotes the clinical, physiological, and biochemical findings that result when the tissues are exposed to excess thyroid hormone. It can arise in a variety of ways […] It is essential to establish a specific diagnosis […] The term hyperthyroidism should be used to denote only those conditions in which hyperfunction of the thyroid leads to thyrotoxicosis. […] [Thyrotoxicosis is] 10 x more common in ♀ than in ♂ in the UK. Prevalence is approximately 2% of the ♀ population. […] Subclinical hyperthyroidism is defined as low serum thyrotropin (TSH) concentration in patients with normal levels of T4 and T3. Subtle symptoms and signs of thyrotoxicosis may be present. […] There is epidemiological evidence that subclinical hyperthyroidism is a risk factor for the development of atrial fibrillation or osteoporosis.1 Meta-analyses suggest a 41% increase in all-cause mortality.2 […] Thyroid crisis [storm] represents a rare, but life-threatening, exacerbation of the manifestations of thyrotoxicosis. […] the condition is associated with a significant mortality (30-50%, depending on series) […]. Thyroid crisis develops in hyperthyroid patients who: *Have an acute infection. *Undergo thyroidal or non-thyroidal surgery or (rarely) radioiodine treatment.”

“[Symptoms and signs of hyperthyroidism (all forms):] *Hyperactivity, irritability, altered mood, insomnia. *Heat intolerance, sweating. […] *Fatigue, weakness. *Dyspnoea. *Weight loss with ↑ appetite (weight gain in 10% of patients). *Pruritus. […] *Thirst and polyuria. *Oligomenorrhoea or amenorrhoea, loss of libido, erectile dysfunction (50% of men may have sexual dysfunction). *Warm, moist skin. […] *Hair loss. *Muscle weakness and wasting. […] Manifestations of Graves’s disease (in addition to [those factors already mentioned]) [include:] *Diffuse goitre. *Ophthalmopathy […] A feeling of grittiness and discomfort in the eye. *Retrobulbar pressure or pain, eyelid lag or retraction. […] *Exophthalmos (proptosis) […] Optic neuropathy.”

“Two alternative regimens are practiced for Graves’s disease: dose titration and block and replace. […] The [primary] aim [of the dose titration regime] is to achieve a euthyroid state with relatively high drug doses and then to maintain euthyroidism with a low stable dose. […] This regimen has a lower rate of side effects than the block and replace regimen. The treatment is continued for 18 months, as this appears to represent the length of therapy which is generally optimal in producing the remission rate of up to 40% at 5 years after discontinuing therapy. *Relapses are most likely to occur within the first year […] Men have a higher recurrence rate than women. *Patients with multinodular goitres and thyrotoxicosis always relapse on cessation of antithyroid medication, and definite treatment with radioiodine or surgery is usually advised. […] Block and replace regimen *After achieving a euthyroid state on carbimazole alone, carbimazole at a dose of 40mg daily, together with T4 at a dose of 100 micrograms, can be prescribed. This is usually continued for 6 months. *The main advantages are fewer hospital visits for checks of thyroid function and shorter duration of treatment.”

“Radioiodine treatment[:] Indications: *Definite treatment of multinodular goitre or adenoma. *Relapsed Graves’s disease. […] *Radioactive iodine-131 is administered orally as a capsule or a drink. *There is no universal agreement regarding the optimal dose. […] The recommendation is to administer enough radioiodine to achieve euthyroidism, with the acceptance of a moderate rate of hypothyroidism, e.g. 15-20% at 2 years. […] In general, 50-70% of patients have restored normal thyroid function within 6-8 weeks of receiving radioiodine. […] The prevalence of hypothyroidism is about 50% at 10 years and continues to increase thereafter.”

“Thyrotoxicosis occurs in about 0.2% of pregnancies. […] *Diagnosis of thyrotoxicosis during pregnancy may be difficult or delayed. *Physiological changes of pregnancy are similar to those of hyperthyroidism. […] 5-7% of ♀ develop biochemical evidence of thyroid dysfunction after delivery. An ↑ incidence is seen in patients with type I diabetes mellitus (25%) […] One-third of affected ♀ with post-partum thyroiditis develop symptoms of hypothyroidism […] There is a suggestion of an ↑ risk of post-partum depression in those with hypothyroidism. […] *The use of iodides and radioiodine is contraindicated in pregnancy. *Surgery is rarely performed in pregnancy. It is reserved for patients not responding to ATDs [antithyroid drugs, US]. […] Hyperthyroid ♀ who want to conceive should attain euthyroidism before conception since uncontrolled hyperthyroidism is associated with an ↑ risk of congenital abnormalities (stillbirth and cranial synostosis are the most serious complications).”

“Nodular thyroid disease denotes the presence of single or multiple palpable or non-palpable nodules within the thyroid gland. […] *Clinically apparent thyroid nodules are evident in ~5% of the UK population. […] Thyroid nodules always raise the concern of cancer, but <5% are cancerous. […] clinically detectable thyroid cancer is rare. It accounts for <1% of all cancer and <0.5% of cancer deaths. […] Thyroid cancers are commonest in adults aged 40-50 and rare in children [incidence of 0.2-5 per million per year] and adolescents. […] History should concentrate on: *An enlarging thyroid mass. *A previous history of radiation […] family history of thyroid cancer. *The development of hoarseness or dysphagia. *Nodules are more likely to be malignant in patients <20 or >60 years. *Thyroid nodules are more common in ♀ but more likely to be malignant in ♂. […] Physical findings suggestive of malignancy include a firm or hard, non-tender nodule, a recent history of enlargement, fixation to adjacent tissue, and the presence of regional lymphadenopathy. […] Thyroid nodules may be described as adenomas if the follicular cell differentiation is enclosed within a capsule; adenomatous when the lesions are circumscribed but not encapsulated. *The most common benign thyroid tumours are the nodules of multinodular goitres (colloid nodules) and follicular adenomas. […] Autonomously functioning thyroid adenomas (or nodules) are benign tumours that produce thyroid hormone. Clinically, they present as a single nodule that is hyperfunctioning […], sometimes causing hyperthyroidism.”

“Inflammation of the thyroid gland often leads to a transient thyrotoxicosis followed by hypothyroidism. Overt hypothyroidism caused by autoimmunity has two main forms: Hashimoto’s (goitrous) thyroiditis and atrophic thyroiditis. […] Hashimoto’s thyroiditis [is] [c]haracterized by a painless, variable-sized goitre with rubbery consistency and an irregular surface. […] Occasionally, patients present with thyrotoxicosis in association with a thyroid gland that is unusually firm […] Atrophic thyroiditis [p]robably indicates end-stage thyroid disease. These patients do not have goitre and are antibody [positive]. […] The long-term prognosis of patients with chronic thyroiditis is good because hypothyroidism can easily be corrected with T4 and the goitre is usually not of sufficient size to cause local symptoms. […] there is an association between this condition and thyroid lymphoma (rare, but risk ↑ by a factor of 70).”

“Hypothyroidism results from a variety of abnormalities that cause insufficient secretion of thyroid hormones […] The commonest cause is autoimmune thyroid disease. Myxoedema is severe hypothyroidism [which leads to] thickening of the facial features and a doughy induration of the skin. [The clinical picture of hypothyroidism:] *Insidious, non-specific onset. *Fatigue, lethargy, constipation, cold intolerance, muscle stiffness, cramps, carpal tunnel syndrome […] *Slowing of intellectual and motor activities. *↓ appetite and weight gain. *Dry skin; hair loss. […] [The term] [s]ubclinical hypothyroidism […] is used to denote raised TSH levels in the presence of normal concentrations of free thyroid hormones. *Treatment is indicated if the biochemistry is sustained in patients with a past history of radioiodine treatment for thyrotoxicosis or [positive] thyroid antibodies as, in these situations, progression to overt hypothyroidism is almost inevitable […] There is controversy over the advantages of T4 treatment in patients with [negative] thyroid antibodies and no previous radioiodine treatment. *If treatment is not given, follow-up with annual thyroid function tests is important. *There is no generally accepted consensus of when patients should receive treatment. […] *Thyroid hormone replacement with synthetic levothyroxine remains the treatment of choice in primary hypothyroidism. […] levothyroxine has a narrow therapeutic index […] Elevated TSH despite thyroxine replacement is common, most usually due to lack of compliance.”
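Taken together with the subclinical hyperthyroidism definition quoted earlier (low TSH with normal T4 and T3), the biochemical patterns reduce to a simple TSH/free-T4 grid. A minimal sketch follows; the reference ranges used below (TSH ~0.4–4.0 mU/L, free T4 ~10–25 pmol/L) are common laboratory figures I have assumed for illustration rather than values taken from the handbook, the T3 requirement is ignored for simplicity, and real interpretation obviously depends on the clinical picture and on repeat testing.

def thyroid_pattern(tsh, free_t4, tsh_range=(0.4, 4.0), ft4_range=(10.0, 25.0)):
    """Crude biochemical pattern from TSH (mU/L) and free T4 (pmol/L); assumed reference ranges."""
    tsh_low, tsh_high = tsh < tsh_range[0], tsh > tsh_range[1]
    ft4_low, ft4_high = free_t4 < ft4_range[0], free_t4 > ft4_range[1]
    if tsh_low and ft4_high:
        return "overt hyperthyroidism / thyrotoxicosis"
    if tsh_low and not ft4_low and not ft4_high:
        return "subclinical hyperthyroidism (low TSH, normal free T4)"
    if tsh_high and ft4_low:
        return "overt primary hypothyroidism"
    if tsh_high and not ft4_low and not ft4_high:
        return "subclinical hypothyroidism (raised TSH, normal free T4)"
    return "no simple primary-thyroid pattern on these two tests"

print(thyroid_pattern(0.05, 32))   # overt hyperthyroidism / thyrotoxicosis
print(thyroid_pattern(7.8, 14))    # subclinical hypothyroidism (raised TSH, normal free T4)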

 

January 8, 2018 Posted by | Books, Cancer/oncology, Diabetes, Medicine, Ophthalmology, Pharmacology

Depression (I)

Below I have added some quotes and links related to the first half of this book.

Quotes:

“One of the problems encountered in any discussion of depression is that the word is used to mean different things by different people. For many members of the public, the term depression is used to describe normal sadness. In clinical practice, the term depression can be used to describe negative mood states, which are symptoms that can occur in a range of illnesses (e.g. individuals with psychosis may also report depressed mood). However, the term depression can also be used to refer to a diagnosis. When employed in this way it is meant to indicate that a cluster of symptoms have all occurred together, with the most common changes being in mood, thoughts, feelings, and behaviours. Theoretically, all these symptoms need to be present to make a diagnosis of depressive disorder.”

“The absence of any laboratory tests in psychiatry means that the diagnosis of depression relies on clinical judgement and the recognition of patterns of symptoms. There are two main problems with this. First, the diagnosis represents an attempt to impose a ‘present/absent’ or ‘yes/no’ classification on a problem that, in reality, is dimensional and varies in duration and severity. Also, many symptoms are likely to show some degree of overlap with pre-existing personality traits. Taken together, this means there is an ongoing concern about the point at which depression or depressive symptoms should be regarded as a mental disorder, that is, where to situate the dividing line on a continuum from health to normal sadness to illness. Second, for many years, there was a lack of consistent agreement on what combination of symptoms and impaired functioning would benefit from clinical intervention. This lack of consensus on the threshold for treatment, or for deciding which treatment to use, is a major source of problems to this day. […] A careful inspection of the criteria for identifying a depressive disorder demonstrates that diagnosis is mainly reliant on the cross-sectional assessment of the way the person presents at that moment in time. It is also emphasized that the current presentation should represent a change from the person’s usual state, as this step helps to begin the process of differentiating illness episodes from long-standing personality traits. Clarifying the longitudinal history of any lifetime problems can help also to establish, for example, whether the person has previously experienced mania (in which case their diagnosis will be revised to bipolar disorder), or whether they have a history of chronic depression, with persistent symptoms that may be less severe but are nevertheless very debilitating (this is usually called dysthymia). In addition, it is important to assess whether the person has another mental or physical disorder as well as these may frequently co-occur with depression. […] In the absence of diagnostic tests, the current classifications still rely on expert consensus regarding symptom profiles.”

“In summary, for a classification system to have utility it needs to be reliable and valid. If a diagnosis is reliable doctors will all make the same diagnosis when they interview patients who present with the same set of symptoms. If a diagnosis has predictive validity it means that it is possible to forecast the future course of the illness in individuals with the same diagnosis and to anticipate their likely response to different treatments. For many decades, the lack of reliability so undermined the credibility of psychiatric diagnoses that most of the revisions of the classification systems between the 1950s and 2010 focused on improving diagnostic reliability. However, insufficient attention has been given to validity and until this is improved, the criteria used for diagnosing depressive disorders will continue to be regarded as somewhat arbitrary […]. Weaknesses in the systems for the diagnosis and classification of depression are frequently raised in discussions about the existence of depression as a separate entity and concerns about the rationale for treatment. It is notable that general medicine uses a similar approach to making decisions regarding the health–illness dimension. For example, levels of blood pressure exist on a continuum. However, when an individual’s blood pressure measurement reaches a predefined level, it is reported that the person now meets the criteria specified for the diagnosis of hypertension (high blood pressure). Depending on the degree of variation from the norm or average values for their age and gender, the person will be offered different interventions. […] This approach is widely accepted as a rational approach to managing this common physical health problem, yet a similar ‘stepped care’ approach to depression is often derided.”

“There are few differences in the nature of the symptoms experienced by men and women who are depressed, but there may be gender differences in how their distress is expressed or how they react to the symptoms. For example, men may be more likely to become withdrawn rather than to seek support from or confide in other people, they may become more outwardly hostile and have a greater tendency to use alcohol to try to cope with their symptoms. It is also clear that it may be more difficult for men to accept that they have a mental health problem and they are more likely to deny it, delay seeking help, or even to refuse help. […] becoming unemployed, retirement, and loss of a partner and change of social roles can all be risk factors for depression in men. In addition, chronic physical health problems or increasing disability may also act as a precipitant. The relationship between physical illness and depression is complex. When people are depressed they may subjectively report that their general health is worse than that of other people; likewise, people who are ill or in pain may react by becoming depressed. Certain medical problems such as an under-functioning thyroid gland (hypothyroidism) may produce symptoms that are virtually indistinguishable from depression. Overall, the rate of depression in individuals with a chronic physical disease is almost three times higher than those without such problems.”

“A long-standing problem in gathering data about suicide is that many religions and cultures regard it as a sin or an illegal act. This has had several consequences. For example, coroners and other public officials often strive to avoid identifying suspicious deaths as a suicide, meaning that the actual rates of suicide may be under-reported.”

“In Beck’s [depression] model, it is proposed that an individual’s interpretations of events or experiences are encapsulated in automatic thoughts, which arise immediately following the event or even at the same time. […] Beck suggested that these automatic thoughts occur at a conscious level and can be accessible to the individual, although they may not be actively aware of them because they are not concentrating on them. The appraisals that occur in specific situations largely determine the person’s emotional and behavioural responses […] [I]n depression, the content of a person’s thinking is dominated by negative views of themselves, their world, and their future (the so-called negative cognitive triad). Beck’s theory suggests that the themes included in the automatic thoughts are generated via the activation of underlying cognitive structures, called dysfunctional beliefs (or cognitive schemata). All individuals develop a set of rules or ‘silent assumptions’ derived from early learning experiences. Whilst automatic thoughts are momentary, event-specific cognitions, the underlying beliefs operate across a variety of situations and are more permanent. Most of the underlying beliefs held by the average individual are quite adaptive and guide our attempts to act and react in a considered way. Individuals at risk of depression are hypothesized to hold beliefs that are maladaptive and can have an unhelpful influence on them. […] faulty information processing contributes to further deterioration in a person’s mood, which sets up a vicious cycle with more negative mood increasing the risk of negative interpretations of day-to-day life experiences and these negative cognitions worsening the depressed mood. Beck suggested that the underlying beliefs that render an individual vulnerable to depression may be broadly categorized into beliefs about being helpless or unlovable. […] Beliefs about ‘the self’ seem especially important in the maintenance of depression, particularly when connected with low or variable self-esteem.”

“[U]nidimensional models, such as the monoamine hypothesis or the social origins of depression model, are important building blocks for understanding depression. However, in reality there is no one cause and no single pathway to depression and […] multiple factors increase vulnerability to depression. Whether or not someone at risk of depression actually develops the disorder is partly dictated by whether they are exposed to certain types of life events, the perceived level of threat or distress associated with those events (which in turn is influenced by cognitive and emotional reactions and temperament), their ability to cope with these experiences (their resilience or adaptability under stress), and the functioning of their biological stress-sensitivity systems (including the thresholds for switching on their body’s stress responses).”

Some links:

Humorism. Marsilio Ficino. Thomas Willis. William Cullen. Philippe Pinel. Benjamin Rush. Emil Kraepelin. Karl Leonhard. Sigmund Freud.
Depression.
Relation between depression and sociodemographic factors.
Bipolar disorder.
Postnatal depression. Postpartum psychosis.
Epidemiology of suicide. Durkheim’s typology of suicide.
Suicide methods.
Reserpine.
Neuroendocrine hypothesis of depression. HPA (Hypothalamic–Pituitary–Adrenal) axis.
Cognitive behavioral therapy.
Coping responses.
Brown & Harris (1978).
5-HTTLPR.

January 5, 2018 Posted by | Books, Medicine, Psychiatry, Psychology

Books 2017

Here’s a goodreads overview of the books I read, with cover images of the books.

Below the comments here I’ve added a list of the books I read in 2017, as well as relevant links to blog posts and reviews. The letters ‘f’, ‘nf’, and ‘m’ in the parentheses indicate which type of book it was: ‘f’ refers to ‘fiction’ books, ‘nf’ to ‘non-fiction’ books, and the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.

I read 162 books in 2017. In terms of the typology mentioned above I read 108 fiction books, 46 non-fiction books, and 8 ‘miscellaneous’ books. These categories are slightly arbitrary, and especially the distinction between ‘miscellaneous’ and ‘fiction’ was occasionally difficult to deal with in an ideal manner; I have as a rule included all books which combine fiction and history in the fiction category, but I think it would be fair to say that ‘some books in the fiction category are more fictitious than others’. Many of the Flashman novels contain a lot of (true) history about the events taking place, and I’ve previously seen some people in all seriousness recommend these books to people who requested books about a specific historical topic (for example Flashman and the Dragon is a brilliant book to have a look at if you’re curious to know more about the Taiping Rebellion; and relatedly if you’re interested in naval warfare during the Napoleonic Wars, I believe you could do worse than having a go at the Aubrey-Maturin series).

I’ll probably continue to update this post for some time into 2018, as I’d like to provide a bit more coverage of some of the books than I already have. The current list, as I’m writing this post, includes 62 direct links to blog posts covering non-fiction books, as well as links to 49 reviews on goodreads. It’s perhaps worth mentioning that the links included in the list below are not the only ‘book-relevant posts’ I’ve written this year; the total count of posts published this year under the blog’s book category comes to almost 100 (94). A substantial proportion of the remaining book posts either included quotes from fiction books or were the new language/words posts in which I collect words I encounter while reading (mostly fiction) books in order to improve my vocabulary. Many of the non-book posts I published this year covered scientific studies and lectures; I covered 26 lectures in 2017, i.e. one every fortnight on average, and 19 of the other non-lecture/non-book posts provided coverage of various studies. Book-related posts thus make up more than half of the 177 posts I published in 2017, a total which works out to roughly one post every second day.

I have tried throughout the year to provide at least some coverage of the great majority of the non-fiction books I read; my intention from the start was either to blog a non-fiction book or to add a review of it on goodreads, unless there was some compelling reason not to do either. That’s how it’s played out. The few non-fiction books I have not (…yet?) either blogged or reviewed are 3 ‘paper books’ (Boyd & Richerson, Tainter, Browning). I have talked before here on the blog about how ‘paper books’ take a lot more effort to blog than electronic books do – see e.g. the comments included in this post.

As a rule my goodreads reviews are ‘low effort’ reviews (‘minutes’) and my blog posts are ‘high effort’ (‘hours’); there are probably a couple of exceptions below where I’ve actually spent some time on a goodreads review, but that’s the exception, not the rule.

The aforementioned goodreads overview includes the observation that I read 45,406 pages in 2017. There’s always some overlap when it comes to these things (books you start reading one year and only finish the next), and some measurement error, but I don’t have any better data than what goodreads provides, so it is what it is. It corresponds to ~125 pages/day throughout the year, slightly less than last year (47,281 pages, ~130 pages/day). The average page count of the books I read was ~280 pages, and my average goodreads rating of the books I rated was 3.3.

Some of the fiction authors I read this year include: Rex Stout (51 books), Jim Butcher (15), George MacDonald Fraser (12), Ernest Bramah (6), Connie Willis (5), and James Herriot (5).

I usually like to explore how the non-fiction books I read were categorized, as it tells you something about which kinds of things I’ve read about throughout the year. One major change from earlier years is that I’ve been reading a lot more physics- and chemistry-related books. This year I posted 22 posts categorized under both the books category and the physics category, and 18 posts categorized under both books and chemistry; note that I’ve also posted non-book-related posts on these topics – the total number of posts categorized under ‘physics’ this year is 30, as I’ve e.g. also covered some lectures from the Institute for Advanced Studies. In terms of pages, a brief count told me that I read roughly 3,000 book-pages of physics (2,937) and maybe 1,700 book-pages of chemistry in 2017 (the categorization here is a bit iffy (see also this), and the page count depends a bit on which books you decide to include, but it’s in that neighbourhood anyway – do incidentally note that there’s a substantial amount of overlap between the physics and chemistry counts). On the other hand I’ve read fewer medical textbooks than usual, even if my coverage of medical topics on the blog has not decreased substantially (it may in fact have increased, though I’m hesitant to spend time trying to clarify this); I posted a total of 64 posts in 2017 categorized under ‘medicine’, and during the second half of the year alone I covered 88 papers (but only 3 textbooks…) on the topic of diabetes.

I’ve listed the books in the order they were read.

1. Brief Candles (3, f). Manning Coles.

2. Galaxies: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

3. Mirabile (2, f). Janet Kagan. Short goodreads review here.

4. Blackout (5, f). Connie Willis. Goodreads review here (note that this review is a ‘composite review’ of both Blackout and All Clear).

5. All Clear (5, f). Connie Willis.

6. The Laws of Thermodynamics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

7. A Knight of the Seven Kingdoms (3, f). George R. R. Martin. Goodreads review here.

8. The Economics of International Immigration (1, nf. Springer). Goodreads review here.

9. American Gods (2, f). Neil Gaiman. Short goodreads review here – I was not impressed.

10. The Story of the Stone (3, f). Barry Hughart. Goodreads review here.

11. Particle Physics: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

12. The Wallet of Kai Lung (4, f). Ernest Bramah. Goodreads review here.

13. Kai Lung’s Golden Hours (4, f). Ernest Bramah.

14. Kai Lung Unrolls His Mat (4, f). Ernest Bramah. Goodreads review here.

15. Anaesthesia: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

16. The Moon of Much Gladness (5, f). Ernest Bramah. Goodreads review here.

17. All Trivia – A collection of reflections & aphorisms (2, m). Logan Pearsall Smith. Short goodreads review here.

18. Rocks: A very short introduction (3, nf. Oxford University Press). Blog coverage here.

19. Kai Lung Beneath the Mulberry-Tree (4, f). Ernest Bramah.

20. Economic Analysis in Healthcare (2, nf. Wiley). Blog coverage here and here.

21. The Best of Connie Willis: Award-Winning Stories (f). Connie Willis. Goodreads review here.

22. The Winds of Marble Arch and Other Stories (f). Connie Willis. Many of the comments that applied to the book above (see my review link) also apply here (in part because a substantial number of stories are in fact included in both books).

23. Endgame (f). Samuel Beckett. Short goodreads review here.

24. Kai Lung Raises His Voice (4, f). Ernest Bramah. Goodreads review here.

25. All Creatures Great and Small (5, m). James Herriot. Goodreads review here. I added this book to my list of favourite books on goodreads.

26. The Red House Mystery (4, f). A. A. Milne. Short goodreads review here.

27. All Things Bright and Beautiful (5, m). James Herriot. Short goodreads review here.

28. All Things Wise and Wonderful (4, m). James Herriot. Goodreads review here.

29. The Lord God Made Them All (4, m). James Herriot.

30. Every Living Thing (5, m). James Herriot. Goodreads review here.

31. The Faber Book Of Aphorisms (3, m). W. H. Auden and Louis Kronenberger. Goodreads review here.

32. Flashman (5, f). George MacDonald Fraser. Short goodreads review here.

33. Royal Flash (4, f). George MacDonald Fraser.

34. Flashman’s Lady (3, f). George MacDonald Fraser. Goodreads review here.

35. Flashman and the Mountain of Light (5, f). George MacDonald Fraser. Short goodreads review here.

36. Flash for Freedom! (3, f). George MacDonald Fraser.

37. Flashman and the Redskins (4, f). George MacDonald Fraser.

38. Biodemography of Aging: Determinants of Healthy Life Span and Longevity (5, nf. Springer). Long, takes a lot of work. I added this book to my list of favorite books on goodreads. Blog coverage here, here, here, and here.

39. Flashman at the Charge (4, f). George MacDonald Fraser.

40. Flashman in the Great Game (3, f). George MacDonald Fraser.

41. Nuclear Physics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

42. Fer-de-Lance (4, f). Rex Stout.

43. Computer Science: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

44. The League of Frightened Men (4, f). Rex Stout.

45. Not by Genes Alone: How Culture Transformed Human Evolution (5, nf. University Of Chicago Press). I added this book to my list of favorite books on goodreads.

46. The Rubber Band (3, f). Rex Stout.

47. The Red Box (3, f). Rex Stout.

48. Too Many Cooks (3, f). Rex Stout.

49. Some Buried Caesar (4, f). Rex Stout.

50. Over My Dead Body (3, f). Rex Stout.

51. The Education of Man (1, m). Heinrich Pestalozzi. Short goodreads review here. I included some quotes from the book in this post.

52. Where There’s a Will (3, f). Rex Stout.

53. Black Orchids (3, f). Rex Stout. Goodreads review here.

54. Not Quite Dead Enough (5, f). Rex Stout. Goodreads review here.

55. The Silent Speaker (4, f). Rex Stout.

56. Astrophysics: A Very Short Introduction (2, nf. Oxford University Press). Goodreads review here. Blog coverage here.

57. Too Many Women (4, f). Rex Stout.

58. And Be a Villain (3, f). Rex Stout.

59. Trouble in Triplicate (2, f). Rex Stout. Goodreads review here.

60. The Antarctic: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here. Blog coverage here.

61. The Second Confession (3, f). Rex Stout.

62. Three Doors to Death (3, f). Rex Stout. Very short goodreads review here.

63. In the Best Families (4, f). Rex Stout. Goodreads review here.

64. Stars: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

65. Curtains for Three (4, f). Rex Stout. Very short goodreads review here.

66. Murder by the Book (4, f). Rex Stout.

67. Triple Jeopardy (4, f). Rex Stout. Very short goodreads review here.

68. The Personality Puzzle (1, nf. W. W. Norton & Company). Long, but poor. Blog coverage here, here, here, and here.

69. Prisoner’s Base (4, f). Rex Stout.

70. The Golden Spiders (3, f). Rex Stout.

71. Three Men Out (3, f). Rex Stout.

72. The Black Mountain (4, f). Rex Stout. Short goodreads review here.

73. Beyond Significance Testing: Statistics Reform in the Behavioral Sciences (4, nf. American Psychological Association). Blog coverage here, here, here, here, and here.

74. Before Midnight (3, f). Rex Stout.

75. How Species Interact: Altering the Standard View on Trophic Ecology (4, nf. Oxford University Press). Blog coverage here.

76. Three Witnesses (4, f). Rex Stout.

77. Might As Well Be Dead (4, f). Rex Stout.

78. Gravity: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

79. Three for the Chair (3, f). Rex Stout.

80. If Death Ever Slept (3, f). Rex Stout.

81. And Four to Go (3, f). Rex Stout.

82. Champagne for One (4, f). Rex Stout.

83. Plot It Yourself (5, f). Rex Stout. Short goodreads review here.

84. Three At Wolfe’s Door (3, f). Rex Stout.

85. Too Many Clients (4, f). Rex Stout.

86. First Farmers: The Origins of Agricultural Societies (5, nf. Blackwell Publishing). I added this book to my list of favorite books on goodreads. Blog coverage here.

87. The Final Deduction (4, f). Rex Stout.

88. Homicide Trinity (4, f). Rex Stout.

89. Gambit (5, f). Rex Stout. Very short goodreads review here.

90. The Mother Hunt (3, f). Rex Stout.

91. Trio for Blunt Instruments (3, f). Rex Stout.

92. A Right to Die (2, f). Rex Stout.

93. Concepts and Methods in Infectious Disease Surveillance (2, nf. Wiley-Blackwell). Blog coverage here, here, here, and here.

94. The Doorbell Rang (5, f). Rex Stout.

95. Death of a Doxy (4, f). Rex Stout.

96. The Father Hunt (3, f). Rex Stout. Short goodreads review here.

97. Death of a Dude (3, f). Rex Stout.

98. Gastrointestinal Function in Diabetes Mellitus (5, nf. John Wiley & Sons). Short goodreads review here. I added this book to my list of favorite books on goodreads. This post included a few observations from the book. I also covered the book here, here, and here.

99. Please Pass the Guilt (2, f). Rex Stout.

100. Depression and Heart Disease (4, nf. John Wiley & Sons). Blog coverage here and here.

101. A Family Affair (4, f). Rex Stout.

102. The Sound of Murder (2, f). Rex Stout.

103. The Broken Vase (f). Rex Stout. (Forgot to add/rate this one on goodreads shortly after I’d read it, and I only noticed that I’d forgotten to add the book much later – so I decided not to rate it).

104. Flashman and the Angel of the Lord (4, f). George MacDonald Fraser.

105. The Collapse of Complex Societies (1, nf. Cambridge University Press).

106. Flashman and the Dragon (5, f). George MacDonald Fraser.

107. Magnetism: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

108. Flashman on the March (3, f). George MacDonald Fraser.

109. Flashman and the Tiger (3, f). George MacDonald Fraser. Goodreads review here.

110. Light: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

111. Double for Death (3, f). Rex Stout. Goodreads review here.

112. Red Threads (2, f). Rex Stout.

113. The Fall of Rome And the End of Civilization (5, nf. Oxford University Press). Blog coverage here.

114. Sound: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

115. The Mountain Cat Murders (2, f). Rex Stout.

116. Storm Front (4, f). Jim Butcher.

117. Fool Moon (3, f). Jim Butcher.

118. Grave Peril (4, f). Jim Butcher.

119. Summer Knight (4, f). Jim Butcher.

120. The History of Astronomy: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

121. Death Masks (4, f). Jim Butcher.

122. Blood Rites (4, f). Jim Butcher.

123. Dead Beat (4, f). Jim Butcher.

124. Proven Guilty (5, f). Jim Butcher.

125. Earth System Science: A Very Short Introduction (nf. Oxford University Press). Blog coverage here.

126. White Night (3, f). Jim Butcher.

127. Small Favor (3, f). Jim Butcher.

128. Turn Coat (4, f). Jim Butcher.

129. Physical Chemistry: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

130. Changes (3, f). Jim Butcher.

131. Ghost Story (4, f). Jim Butcher. Very short goodreads review here.

132. Cold Days (4, f). Jim Butcher.

133. Child Psychology: A Very Short Introduction (1, nf. Oxford University Press). Very short goodreads review here. Blog coverage here.

134. Skin Game (3, f). Jim Butcher.

135. Animal Farm (3, f). George Orwell. Goodreads review here.

136. Bellwether (3, f). Connie Willis.

137. Enter the Saint (1, f). Leslie Charteris.

138. Organic Chemistry: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

139. The Shadow of the Torturer (2, f). Gene Wolfe.

140. Molecules: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

141. The Claw of the Conciliator (1, f). Gene Wolfe. Goodreads review here.

142. Common Errors in Statistics (4, nf. John Wiley & Sons). Blog coverage here, here, and here.

143. Master and Commander (3, f). Patrick O’Brian.

144. Materials: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here and here.

145. Post Captain (3, f). Patrick O’Brian.

146. Isotopes: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

147. Radioactivity: A Very Short Introduction (2, nf. Oxford University Press). Short goodreads review here. Blog coverage here.

148. Current Topics in Occupational Epidemiology (4, nf. Oxford University Press). Short goodreads review here. Blog coverage here, here, and here.

149. HMS Surprise (3, f). Patrick O’Brian.

150. Never Let Me Go (5, f). Kazuo Ishiguro. I added this book to my list of favorite books on goodreads.

151. Nuclear Power: A Very Short Introduction (2, nf. Oxford University Press). Short goodreads review here. Blog coverage here and here.

152. Nuclear Weapons: A Very Short Introduction (1, nf. Oxford University Press). Goodreads review here.

153. Assassin’s Apprentice (4, f). Robin Hobb.

154. Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland (4, nf. Harper Perennial).

155. Plate Tectonics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

156. The Mauritius Command (2, f). Patrick O’Brian.

157. Royal Assassin (3, f). Robin Hobb.

158. The Periodic Table: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

159. Civil Engineering: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here and here.

160. Depression: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here and here.

161. Autism: A Very Short Introduction (nf. Oxford University Press). Goodreads review here.

162. Legends (f). Robert Silverberg (editor). Goodreads review here.

January 1, 2018 Posted by | Books, Personal

Random stuff

I have almost stopped writing posts like these, which has resulted in the accumulation of a very large number of links and studies I figured I might like to blog at some point. This post is mainly an attempt to deal with that backlog – I won’t cover the material in much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signal the ability to provide good genes, material resources, and protect offspring (Buss and Shackelford 1997; Volk et al. 2012) because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rivals’ appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on the use of self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how those studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’ – and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note in the context of the self-report data problem that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people if they have been bullied is not much better than asking people if they’re bullying others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already know is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”
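
As an aside, the Elo figures quoted above are easier to interpret when converted into expected scores. Below is a minimal sketch of that conversion (my own illustration, not something from the thread), using the standard logistic Elo formula; the gaps plugged in are simply the numbers quoted above – the ~100-point AlphaZero edge, and the 40 + 15 = 55 points of subsequent Stockfish improvement.

# Minimal sketch: converting an Elo gap into an expected score per game
# (win = 1, draw = 0.5), using the standard logistic Elo formula.

def expected_score(elo_gap: float) -> float:
    """Expected score for the stronger side, given its Elo advantage."""
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))

for gap in (100, 55):  # ~100 = quoted AlphaZero edge; 40 + 15 = newer Stockfish gains
    print(f"Elo gap {gap:>3}: expected score {expected_score(gap):.2f} per game")
# Elo gap 100: expected score 0.64 per game
# Elo gap  55: expected score 0.58 per game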

iv. Thrombolytics for Acute Ischemic Stroke – no benefit found.

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8120) has been studied in relevant, large scale, high quality trials thus far. […] There are 12 such trials 1-12. Despite the temptation to pool these data the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogenous to be validly pooled.14 Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n=10) as those concluding benefit (n=2), and 6675 subjects in trials demonstrating no benefit compared to 1445 subjects in trials concluding benefit.”

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled therefore, debating between a ‘yellow’ or ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%), and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect. 17 We do not endorse this choice because of clinical heterogeneity. However, we present the NNT’s from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients die because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
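
The NNT arithmetic in that last paragraph is easy to reproduce. The short sketch below is my own illustration and uses only the percentages quoted from the Cochrane review above: a number needed to treat (or harm) is just the inverse of the corresponding absolute risk difference.

# Number needed to treat/harm as the inverse of an absolute risk difference,
# using only the percentages quoted from the Cochrane review above.

def nnt(absolute_risk_difference):
    """Patients treated per one additional (good or bad) outcome."""
    return 1.0 / absolute_risk_difference

print(round(nnt(0.06)))  # 6% less disability       -> ~17 treated per improved outcome
print(round(nnt(0.01)))  # 1% more deaths           -> ~100 treated per extra death
print(round(nnt(0.05)))  # 5% more nonfatal ICH     -> ~20 treated per extra haemorrhage
print(round(nnt(0.06)))  # 6% harmed (death or ICH) -> ~17 treated per patient harmed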

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

Related: Suicidal ideation and suicide plans or attempts in adults with Asperger’s syndrome attending a specialist diagnostic clinic: a clinical cohort study.

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9·6 [95% CI 7·6–11·9], p<0·0001), people with one, two, or more medical illnesses (p<0·0001), or people with psychotic illness (p=0·019). […] Lifetime experience of depression (p=0·787), suicidal ideation (p=0·164), and suicide plans or attempts (p=0·06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and to contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example, a paper I blogged a while back found that in that particular (large, Swedish population-based cohort) study, people with ASD were more than 7 times as likely to die from suicide as the comparable controls.

Also related: Suicidal tendencies hard to spot in some people with autism.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than are most people.

vi. Early versus Delayed Invasive Intervention in Acute Coronary Syndromes.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
Hadamard product (matrices).
True polar wander.
Newton’s cradle.

viii. Determinant versus permanent (mathematics – technical).

ix. Some years ago I wrote a few English-language posts about various statistical/demographic properties of immigrants living in Denmark, based on numbers included in a publication by Statistics Denmark; I did this by translating the observations included in that publication, which was only published in Danish. I briefly considered doing the same thing again when the 2017 data arrived, but decided against it, as I recalled that those posts took a lot of time to write and it didn’t seem worth the effort – but Danish readers might be interested in having a look at the data, if they haven’t already; here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first Russian ever to win the Russian Chess Championship 8 times. He’s currently in shared second place at the World Rapid Championship after 10 rounds and is now in the top 10 on the live rating list in both classical and rapid – it seems he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.

December 28, 2017 Posted by | Astronomy, autism, Biology, Cardiology, Chess, Computer science, History, Mathematics, Medicine, Neurology, Physics, Psychiatry, Psychology, Random stuff, Statistics, Studies, Wikipedia, Zoology

Words

It’s been a while since I last posted one of these.

I know for certain that quite a few of the words included below are words I encountered while reading the Jim Butcher books Ghost Story, Cold Days, and Skin Game, and I also know that some of the ones I added to the post more recently were words I encountered while reading the Oxford Handbook of Endocrinology and Diabetes. Almost half of the words, however, came from a list I keep of words I’d like to eventually include in posts like these; that list had grown rather long and unwieldy, so I decided to include a lot of words from it in this post. I have almost no idea where I encountered most of those words – I add to the list whenever I encounter a word I particularly like or a word with which I’m not familiar, regardless of the source, and I usually do not note the source.

Chemosis. Asthenia. Arcuate. Onycholysis. Nubble. Colliery. Fomite. Riparian. Guglet/goglet. Limbus. Stupe. Osier. Synostosis. Amscray. Slosh. Dowel. Swill. Tocometer. Raster. Squab.

Antiquer. Ritzy. Boutonniere. Exfiltrate. Lurch. Placard. Futz. Bleary. Scapula. Bobble. Frigorific. Skerry. Trotter. Raffinate. Truss. Despoliation. Primogeniture. Whelp. Ethmoid. Rollick.

Fireplug. Taupe. Obviate. Koi. Doughboy. Guck. Flophouse. Vane. Gast. Chastisement. Rink. Wakizashi. Culvert. Lickety-split. Whipsaw. Spall. Tine. Nadir. Periwinkle. Pitter-patter.

Sidle. Iridescent. Feint. Flamberge. Batten. Gangplank. Meander. Flunky. Futz. Thwack. Prissy. Vambrace. Tasse. Smarmy. Abut. Jounce. Wright. Ebon. Skin game. Shimmer.

December 27, 2017 Posted by | Books, Language

Plate Tectonics (II)

Some more observations and links below.

I may or may not add a third post about the book at a later point in time; there’s a lot of interesting stuff included in this book.

“Because of the thickness of the lithosphere, its bending causes […] a stretching of its upper surface. This stretching of the upper portion of the lithosphere manifests itself as earthquakes and normal faulting, the style of faulting that occurs when a region extends horizontally […]. Such earthquakes commonly occur after great earthquakes […] Having been bent down at the trench, the lithosphere […] slides beneath the overriding lithospheric plate. Fault plane solutions of shallow focus earthquakes […] provide the most direct evidence for this underthrusting. […] In great earthquakes, […] the deformation of the surface of the Earth that occurs during such earthquakes corroborates the evidence for underthrusting of the oceanic lithosphere beneath the landward side of the trench. The 1964 Alaskan earthquake provided the first clear example. […] Because the lithosphere is much colder than the asthenosphere, when a plate of lithosphere plunges into the asthenosphere at rates of tens to more than a hundred millimetres per year, it remains colder than the asthenosphere for tens of millions of years. In the asthenosphere, temperatures approach those at which some minerals in the rock can melt. Because seismic waves travel more slowly and attenuate (lose energy) more rapidly in hot, and especially in partially molten, rock than they do in colder rock, the asthenosphere is not only a zone of weakness, but also characterized by low speeds and high attenuation of seismic waves. […] many seismologists use the waves sent by earthquakes to study the Earth’s interior, with little regard for earthquakes themselves. The speeds at which these waves propagate and the rate at which the waves die out, or attenuate, have provided much of the data used to infer the Earth’s internal structure.”

“S waves especially, but also P waves, lose much of their energy while passing through the asthenosphere. The lithosphere, however, transmits P and S waves with only modest loss of energy. This difference is apparent in the extent to which small earthquakes can be felt. In regions like the western United States or in Greece and Italy, the lithosphere is thin, and the asthenosphere reaches up to shallow depths. As a result earthquakes, especially small ones, are felt over relatively small areas. By contrast, in the eastern United States or in Eastern Europe, small earthquakes can be felt at large distances. […] Deep earthquakes occur several hundred kilometres west of Japan, but they are felt with greater intensity and can be more destructive in eastern than western Japan […]. This observation, of course, puzzled Japanese seismologists when they first discovered deep focus earthquakes; usually people close to the epicentre (the point directly over the earthquake) feel stronger shaking than people farther from it. […] Tokuji Utsu […] explained this greater intensity of shaking along the more distant, eastern side of the islands than on the closer, western side by appealing to a window of low attenuation parallel to the earthquake zone and plunging through the asthenosphere beneath Japan and the Sea of Japan to its west. Paths to eastern Japan travelled efficiently through that window, the subducted slab of lithosphere, whereas those to western Japan passed through the asthenosphere and were attenuated strongly.”

“Shallow earthquakes occur because stress on a fault surface exceeds the resistance to slip that friction imposes. When two objects are forced to slide past one another, and friction opposes the force that pushes one past the other, the frictional resistance can be increased by pressing the two objects together more forcefully. Many of us experience this when we put sandbags in the trunks […] of our cars in winter to give the tyres greater traction on slippery roads. The same applies to faults in the Earth’s crust. As the pressure increases with increasing depth in the Earth, frictional resistance to slip on faults should increase. For depths greater than a few tens of kilometres, the high pressure should press the two sides of a fault together so tightly that slip cannot occur. Thus, in theory, deep-focus earthquakes ought not to occur.”
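
To get a feel for why slip ‘ought not’ to occur at depth, it helps to put rough numbers on the frictional argument above. The sketch below is my own back-of-the-envelope illustration, not the book’s: it assumes a friction coefficient of about 0.6 (a typical laboratory value for rock), an average overburden density of 3,000 kg/m³, and it ignores pore-fluid pressure.

# Back-of-the-envelope sketch (assumed values, not the book's): frictional
# strength of a fault as friction coefficient times overburden pressure.

MU = 0.6      # friction coefficient for rock (assumed typical lab value)
RHO = 3000.0  # average overburden density, kg/m^3 (assumed)
G = 9.8       # gravitational acceleration, m/s^2

def frictional_strength_gpa(depth_km):
    """Shear stress needed to overcome friction at a given depth, in GPa."""
    overburden_pa = RHO * G * depth_km * 1000.0
    return MU * overburden_pa / 1e9

for depth in (10, 100, 600):
    print(f"{depth:>4} km: ~{frictional_strength_gpa(depth):.1f} GPa required to slide")
# ~0.2 GPa at 10 km, ~1.8 GPa at 100 km, ~10.6 GPa at 600 km – far larger than
# plausible tectonic stresses, which is why deep-focus earthquakes are puzzling.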

“In general, rock […] is brittle at low temperatures but becomes soft and flows at high temperature. The intermediate- and deep-focus earthquakes occur within the lithosphere, where at a given depth, the temperature is atypically low. […] the existence of intermediate- or deep-focus earthquakes is usually cited as evidence for atypically cold material at asthenospheric depths. Most such earthquakes, therefore, occur in oceanic lithosphere that has been subducted within the last 10–20 million years, sufficiently recently that it has not heated up enough to become soft and weak […]. The inference that the intermediate- and deep-focus earthquakes occur within the lithosphere and not along its top edge remains poorly appreciated among Earth scientists. […] the fault plane solutions suggest that the state of stress in the downgoing slab is what one would expect if the slab deformed like a board, or slab of wood. Accordingly, we infer that the earthquakes occurring within the downgoing slab of lithosphere result from stress within the slab, not from movement of the slab past the surrounding asthenosphere. Because the lithosphere is much stronger than the surrounding asthenosphere, it can support much higher stresses than the asthenosphere can. […] observations are consistent with a cold, heavy slab sinking into the asthenosphere and being pulled downward by gravity acting on it, but then encountering resistance at depths of 500–700 km despite the pull of gravity acting on the excess mass of the slab. Where both intermediate and deep-focus earthquakes occur, a gap, or a minimum, in earthquake activity near a depth of 300 km marks the transition between the upper part of the slab stretched by gravity pulling it down and the lower part where the weight of the slab above it compresses it. In the transition region between them, there would be negligible stress and, therefore, no or few earthquakes.”

“Volcanoes occur where rock melts, and where that molten rock can rise to the surface. […] For essentially all minerals […] melting temperatures […] depend on the extent to which the minerals have been contaminated by impurities. […] hydrogen, when it enters most crystal lattices, lowers the melting temperature of the mineral. Hydrogen is most obviously present in water (H2O), but is hardly a major constituent of the oxygen-, silicon-, magnesium-, and iron-rich mantle. The top of the downgoing slab of lithosphere includes fractured crust and sediment deposited atop it. Oceanic crust has been stewing in seawater for tens of millions of years, so that its cracks have become full either of liquid water or of minerals to which water molecules have become loosely bound. […] the downgoing slab acts like a caravan of camels carrying water downward into an upper mantle desert. […] The downgoing slab of lithosphere carries water in cracks in oceanic crust and in the interstices among sediment grains, and when released to the mantle above it, hydrogen dissolved in crystal lattices lowers the melting temperature of that rock enough that some of it melts. Many of the world’s great volcanoes […] begin as small amounts of melt above the subducted slabs of lithosphere.”

“… (in most regions) plates of lithosphere behave as rigid, and therefore undeformable, objects. The high strength of intact lithosphere, stronger than either the asthenosphere below it or the material along the boundaries of plates, allows the lithospheric plates to move with respect to one another without deforming (much). […] The essence of ‘plate tectonics’ is that vast regions move with respect to one another as (nearly) rigid objects. […] Dan McKenzie of Cambridge University, one of the scientists to present the idea of rigid plates, often argued that plate tectonics was easy to accept because the kinematics, the description of relative movements of plates, could be separated from the dynamics, the system of forces that causes plates to move with respect to one another in the directions and at the speeds that they do. Making such a separation is impossible for the flow of most fluids, […] whose movement cannot be predicted without an understanding of the forces acting on separate parcels of fluid. In part because of its simplicity, plate tectonics passed from being a hypothesis to an accepted theory in a short time.”

“[F]or plates that move over the surface of a sphere, all relative motion can be described simply as a rotation about an axis that passes through the centre of the sphere. The Earth itself obviously rotates around an axis through the North and South Poles. Similarly, the relative displacement of two plates with respect to one another can be described as a rotation of one plate with respect to the other about an axis, or ‘pole’, of rotation […] if we know how two plates, for example Eurasia and Africa, move with respect to a third plate, like North America, we can calculate how those two plates (Eurasia and Africa) move with respect to each other. A rotation about an axis in the Arctic Ocean describes the movement of the Africa plate, with respect to the North America plate […]. Combining the relative motion of Africa with respect to North America with the relative motion of North America with respect to Eurasia allows us to calculate that the African continent moves toward Eurasia by a rotation about an axis that lies west of northern Africa. […] By combining the known relative motion of pairs of plates […] we can calculate how fast plates converge with respect to one another and in what direction.”

“[W]e can measure how plates move with respect to one another using Global Positioning System (GPS) measurements of points on nearly all of the plates. Such measurements show that speeds of relative motion between some pairs of plates have changed a little bit since 2 million years ago, but in general, the GPS measurements corroborate the inferences drawn both from rates of seafloor spreading determined using magnetic anomalies and from directions of relative plate motion determined using orientations of transform faults and fault plane solutions of earthquakes. […] Among tests of plate tectonics, none is more convincing than the GPS measurements […] numerous predictions of rates or directions of present-day plate motions and of large displacements of huge terrains have been confirmed many times over. […] When, more than 45 years ago, plate tectonics was proposed to describe relative motions of vast terrains, most saw it as an approximation that worked well, but that surely was imperfect. […] plate tectonics is imperfect, but GPS measurements show that the plates are surprisingly rigid. […] Long histories of plate motion can be reduced to relatively few numbers, the latitudes and longitudes of the poles of rotation, and the rates or amounts of rotation about those axes.”
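
The way relative plate motions combine, as described above, can be made concrete with a few lines of code. The sketch below is my own illustration – the pole positions and rotation rates in it are made-up, purely illustrative numbers, not values from the book: each relative motion is represented as an angular-velocity vector through the centre of the Earth, and combining two relative motions is simply vector addition.

# Minimal sketch (illustrative numbers only): composing relative plate motions as
# angular-velocity vectors, omega(Africa/Eurasia) = omega(Africa/NA) + omega(NA/Eurasia).
import numpy as np

def to_vector(lat_deg, lon_deg, rate_deg_per_myr):
    """Angular-velocity vector (deg/Myr) from a rotation pole and rate."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return rate_deg_per_myr * np.array(
        [np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

def to_pole(w):
    """Convert an angular-velocity vector back to (pole latitude, pole longitude, rate)."""
    rate = np.linalg.norm(w)
    return (np.degrees(np.arcsin(w[2] / rate)),
            np.degrees(np.arctan2(w[1], w[0])), rate)

# Hypothetical poles (latitude, longitude, deg/Myr) – NOT real plate-motion data.
africa_wrt_northamerica = to_vector(80.0, 40.0, 0.25)
northamerica_wrt_eurasia = to_vector(65.0, 135.0, 0.20)

# Africa relative to Eurasia follows by adding the two angular-velocity vectors.
print(to_pole(africa_wrt_northamerica + northamerica_wrt_eurasia))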

Links:

Wadati–Benioff zone.
Translation (geometry).
Rotation (mathematics).
Poles of rotation.
Rotation around a fixed axis.
Euler’s rotation theorem.
Isochron dating.
Tanya Atwater.

December 25, 2017 Posted by | Books, Chemistry, Geology, Physics

Plate Tectonics (I)

Some quotes and links related to the first half of the book‘s coverage:

“The fundamental principle of plate tectonics is that large expanses of terrain, thousands of kilometres in lateral extent, behave as thin (~100 km in thickness) rigid layers that move with respect to each another across the surface of the Earth. The word ‘plate’ carries the image of a thin rigid object, and ‘tectonics’ is a geological term that refers to large-scale processes that alter the structure of the Earth’s crust. […] The Earth is stratified with a light crust overlying denser mantle. Just as the height of icebergs depends on the mass of ice below the surface of the ocean, so […] the light crust of the Earth floats on the denser mantle, standing high where crust is thick, and lying low, deep below the ocean, where it should be thin. Wegener recognized that oceans are mostly deep, and he surmised correctly that the crust beneath oceans must be much thinner than that beneath continents.”

“From a measurement of the direction in which a hunk of rock is magnetized, one can infer where the North Pole lay relative to that rock at the time it was magnetized. It follows that if continents had drifted, rock of different ages on the continents should be magnetized in different directions, not just from each other but more importantly in directions inconsistent with the present-day magnetic field. […] In the 1950s, several studies using palaeomagnetism were carried out to test whether continents had drifted, and most such tests passed. […] Palaeomagnetic results not only supported the idea of continental drift, but they also offered constraints on timing and rates of drift […] in the 1960s, the idea of continental drift saw a renaissance, but subsumed within a broader framework, that of plate tectonics.”

“If one wants to study deformation of the Earth’s crust in action, the quick and dirty way is to study earthquakes. […] Until the 1960s, studying fracture zones in action was virtually impossible. Nearly all of them lie far offshore beneath the deep ocean. Then, in response to a treaty in the early 1960s disallowing nuclear explosions in the ocean, atmosphere, or space, but permitting underground testing of them, the Department of Defense of the USA put in place the World-Wide Standardized Seismograph Network, a global network with more than 100 seismograph stations. […] Suddenly remote earthquakes, not only those on fracture zones but also those elsewhere throughout the globe […], became amenable to study. […] the study of earthquakes played a crucial role in the recognition and acceptance of plate tectonics. […] By the early 1970s, the basic elements of plate tectonics had permeated essentially all of Earth science. In addition to the obvious consequences, like confirmation of continental drift, emphasis shifted from determining the history of the planet to understanding the processes that had shaped it.”

“[M]ost solids are strongest when cold, and become weaker when warmed. Temperature increases into the Earth. As a result the strongest rock lies close to the surface, and rock weakens with depth. Moreover, olivine, the dominant mineral in the upper mantle, seems to be stronger than most crustal minerals; so, in many regions, the strongest rock is at the top of the mantle. Beneath oceans where crust is thin, ~7 km, the lithosphere is mostly mantle […]. Because temperature increases gradually with depth, the boundary between strong lithosphere and underlying weak asthenosphere is not sharp. Nevertheless, because the difference in strength is large, subdividing the outer part of the Earth into two layers facilitates an understanding of plate tectonics. Reduced to its essence, the basic idea that we call plate tectonics is simply a description of the relative movements of separate plates of lithosphere as these plates move over the underlying weaker, hotter asthenosphere. […] Most of the Earth’s surface lies on one of the ~20 major plates, whose sizes vary from huge, like the Pacific plate, to small, like the Caribbean plate […], or even smaller. Narrow belts of earthquakes mark the boundaries of separate plates […]. The key to plate tectonics lies in these plates behaving as largely rigid objects, and therefore undergoing only negligible deformation.”

“Although the amounts and types of sediment deposited on the ocean bottom vary from place to place, the composition and structure of the oceanic crust is remarkably uniform beneath the deep ocean. The structure of oceanic lithosphere depends primarily on its age […] As the lithosphere ages, it thickens, and the rate at which it cools decreases. […] the rate that heat is lost through the seafloor decreases with the age of lithosphere. […] As the lithospheric plate loses heat and cools, like most solids, it contracts. This contraction manifests itself as a deepening of the ocean. […] Seafloor spreading in the Pacific occurs two to five times faster than it does in the Atlantic. […] when seafloor spreading is slow, new basalt rising to the surface at the ridge axis can freeze onto the older seafloor on its edges before rising as high as it would otherwise. As a result, a valley […] forms. Where spreading is faster, however, as in the Pacific, new basalt rises to a shallower depth and no such valley forms. […] The spreading apart of two plates along a mid-ocean ridge system occurs by divergence of the two plates along straight segments of mid-ocean ridge that are truncated at fracture zones. Thus, the plate boundary at a mid-ocean ridge has a zig-zag shape, with spreading centres making zigs and transform faults making zags along it.”

“Geochemists are confident that the volume of water in the oceans has not changed by a measurable amount for hundreds of millions, if not billions, of years. Yet, the geologic record shows several periods when continents were flooded to a much greater extent than today. For example, 90 million years ago, the Midwestern United States and neighbouring Canada were flooded. One could have sailed due north from the Gulf of Mexico to Hudson’s Bay and into the Arctic. […] If sea level has risen and fallen, while the volume of water has remained unchanged, then the volume of the basin holding the water must have changed. The rates at which seafloor is created at the different spreading centres today are not the same, and such rates at all spreading centres have varied over geologic time. Imagine a time in the past when seafloor at some of the spreading centres was created at a faster rate than it is today. If this relatively high rate had continued for a few tens of millions of years, there would have been more young ocean floor than today, and correspondingly less old floor […]. Thus, the average depth of the ocean would be shallower than it is today, and the volume of the ocean basin would be smaller than today. Water should have spilled onto the continent. Most now attribute the high sea level in the Cretaceous Period (145 to 65 million years ago) to unusually rapid creation of seafloor, and hence to a state when seafloor was younger on average than today.”

“Wilson focused on the two major differences between ordinary strike-slip faults, or transcurrent faults, and transform faults on fracture zones. (1) If transcurrent faulting occurred, slip should occur along the entire fracture zone; but for transform faulting, only the portion between the segments of spreading centres would be active. (2) The sense of slip on the faults would be opposite for these two cases: if right-lateral for one, then left-lateral for the other […] The occurrences of earthquakes along a fault provide the most convincing evidence that the fault is active. Slip on most faults and most deformation of the Earth’s crust to make mountains occurs not slowly and steadily on human timescales, but abruptly during earthquakes. Accordingly, a map of earthquakes is, to a first approximation, a map of active faults on which regions, such as lithospheric plates, slide past one another […] When an earthquake occurs, slip on a fault takes place. One side of the fault slides past the other so that slip is parallel to the plane of the fault; the opening of cracks, into which cows or people can fall, is rare and atypical. Repeated studies of earthquakes and the surface ruptures accompanying them show that the slip during an earthquake is representative of the sense of cumulative displacement that has occurred on faults over geologic timescales. Thus earthquakes give us snapshots of processes that occur over thousands to millions of years. Two aspects of a fault define it: the orientation of the fault plane, which can be vertical or gently dipping, and the sense of slip: the direction that one side of the fault moves with respect to the other […] To a first approximation, boundaries between plates are single faults. Thus, if we can determine both the orientation of the fault plane and the sense of slip on it during an earthquake, we can infer the direction that one plate moves with respect to the other. Often during earthquakes, but not always, slip on the fault offsets the Earth’s surface, and we can directly observe the sense of motion […]. In the deep ocean, however, this cannot be done as a general practice, and we must rely on more indirect methods.”

“Because seafloor spreading creates new seafloor at the mid-ocean ridges, the newly formed crust must find accommodation: either the Earth must expand or lithosphere must be destroyed at the same rate that it is created. […] for the Earth not to expand (or contract), the sum total of new lithosphere made at spreading centres must be matched by the removal, by subduction, of an equal amount of lithosphere at island arc structures. […] Abundant evidence […] shows that subduction of lithosphere does occur. […] The subduction process […] differs fundamentally from that of seafloor spreading, in that subduction is asymmetric. Whereas two plates are created and grow larger at equal rates at spreading centers (mid-ocean ridges and rises), the areal extent of only one plate decreases at a subduction zone. The reason for this asymmetry derives from the marked dependence of the strength of rock on temperature. […] At spreading centres, hot weak rock deforms easily as it rises at mid-ocean ridges, cools, and then becomes attached to one of the two diverging plates. At subduction zones, however, cold and therefore strong lithosphere resists bending and contortion. […] two plates of lithosphere, each some 100 km thick, cannot simply approach one another, turn sharp corners […], and dive steeply into the asthenosphere. Much less energy is dissipated if one plate undergoes modest flexure and then slides at a gentle angle beneath the other, than if both plates were to undergo pronounced bending and then plunged together steeply into the asthenosphere. Nature takes the easier, energetically more efficient, process. […] Before it plunges beneath the island arc, the subducting plate of lithosphere bends down gently to cause a deep-sea trench […] As the plate bends down to form the trench, the lithosphere seaward of the trench is flexed upwards slightly. […] the outer topographic rise […] will be lower but wider for thicker lithosphere.”

Plate tectonics.
Andrija Mohorovičić. Mohorovičić discontinuity.
Archimedes’ principle.
Isostasy.
Harold Jeffreys. Keith Edward Bullen. Edward A. Irving. Harry Hammond Hess. Henry William Menard. Maurice Ewing.
Paleomagnetism.
Lithosphere. Asthenosphere.
Mid-ocean ridge. Bathymetry. Mid-Atlantic Ridge. East Pacific Rise. Seafloor spreading.
Fracture zone. Strike-slip fault. San Andreas Fault.
World-Wide Standardized Seismograph Network (USGS).
Vine–Matthews–Morley hypothesis.
Geomagnetic reversal. Proton precession magnetometer. Jaramillo (normal) event.
Potassium–argon dating.
Deep Sea Drilling Project.
“McKenzie Equations” for magma migration.
Transform fault.
Mendocino Fracture Zone.
Subduction.
P-wave. S-wave. Fault-plane solution. Compressional waves.
Triple junction.

December 23, 2017 Posted by | Books, Geology, Physics

Quotes

i. “Culture hides more than it reveals, and strangely enough what it hides, it hides most effectively from its own participants.” (Edward T. Hall)

ii. “A new idea becomes believable when it predicts something that has not yet been measured or explained, especially when the idea is really trying to explain other facts.” (Peter Molnar, Plate Tectonics: A Very Short Introduction)

iii. “…when something seems complicated, we do not understand it, but when we do understand something, it has become simple.” (-ll-)

iv. “Ninety percent of the politicians give the other ten percent a bad reputation.” (Henry Kissinger)

v. “There are many examples of old, incorrect theories that stubbornly persisted, sustained only by the prestige of foolish but well-connected scientists. … Many of these theories have been killed off only when some decisive experiment exposed their incorrectness. […] the yeoman work in any science, and especially physics, is done by the experimentalist, who must keep the theoreticians honest.” (Michio Kaku)

vi. “The relation between experimentalists and theorists is often one of healthy competition for truth and less healthy competition for fame.” (Alvaro De Rujula)

vii. “Divided minds, getting lost on different paths, are losing the huge advantage that would result from their combined forces.” (Jean-Baptiste Biot)

viii. “If we are honest — and scientists have to be — we must admit that religion is a jumble of false assertions, with no basis in reality. The very idea of God is a product of the human imagination. It is quite understandable why primitive people, who were so much more exposed to the overpowering forces of nature than we are today, should have personified these forces in fear and trembling. But nowadays, when we understand so many natural processes, we have no need for such solutions. I can’t for the life of me see how the postulate of an Almighty God helps us in any way.” (Paul Dirac)

ix. “On your way towards becoming a bad theoretician, take your own immature theory, stop checking it for mistakes, don’t listen to colleagues who do spot weaknesses, and start admiring your own infallible intelligence.” (Gerardus ‘t Hooft, How to become a bad theoretical physicist)

x. “In practice, quantum mechanics merely gives predictions with probabilities attached. This should be considered as a normal and quite acceptable feature of predictions made by science: different possible outcomes with different probabilities. In the world that is familiar to us, we always have such a situation when we make predictions. Thus the question remains: What is the reality described by quantum theories? I claim that we can attribute the fact that our predictions come with probability distributions to the fact that not all relevant data for the predictions are known to us, in particular important features of the initial state.” (Gerardus ‘t Hooft)

xi. “Ask anyone today working on foundational questions in quantum theory and you are likely to hear that there is still no consensus on many of these questions—all the while, of course, everybody seems to be in perfect agreement on how to apply the quantum formalism when it comes to making experimental predictions.” (Maximilian Schlosshauer)

xii. “The last bastions of resistance to evolutionary theory are organized religion and cultural anthropology.” (Napoleon Chagnon)

xiii. “There are always alternative interpretations of the same data. It is often the case, however, that the alternatives that are rejected are treated as if they don’t exist. But they do. And we should be aware not only of their existence and potential viability, but of the possibility that the hypotheses that we might embrace so strongly today may very well be the rejects of tomorrow.” (Jeffrey H. Schwartz)

xiv. “Roughly, religion is a community’s costly and hard-to-fake commitment to a counterfactual and counterintuitive world of supernatural agents who master people’s existential anxieties, such as death and deception. […] The more one accepts what is materially false to be really true, and the more one spends material resources in displays of such acceptance, the more others consider one’s faith deep and one’s commitment sincere.”  (Scott Atran)

xv. “Cultures and religions do not exist apart from the individual minds that constitute them and the environments that constrain them, any more than biological species and varieties exist independently of the individual organisms that compose them and the environments that conform them. They are not well-bounded systems or definite clusters of beliefs, practices, and artifacts, but more or less regular distributions of causally connected thoughts, behaviors, material products, and environmental objects. To naturalistically understand what “cultures” are is to describe and explain the material causes responsible for reliable differences in these distributions.” (-ll-)

xvi. “If making money is a slow process, losing it is quickly done.” (Ihara Saikaku)

xvii. “Mankind has always made too much of its saints and heroes, and how the latter handle the fuss might be called their final test.” (Wilfrid Sheed)

xviii. “The bad debater never knows that one explanation is better than five.” (-ll-)

xix. “They say the first sentence in any speech is always the hardest. Well, that one’s behind me, anyway.” (Wisława Szymborska, Nobel lecture)

xx. “If you are a hard drinking man with lots of swastikas tattooed all over your torso you may want to consider that you are at risk for perforating your ulcer and that the good Dr. Rosenberg will be called in to save your life resulting in an awkward situation for everyone.” (‘docB’, things I learn from my patients)

December 22, 2017 Posted by | Books, Quotes/aphorisms | Leave a comment

Analgesia and Procedural Sedation

I didn’t actually like this lecture all that much, in part because I obviously disagree to some extent with the ideas expressed, but I try to remember to blog lectures I watch these days even if I don’t think they’re all that great. It’s a short lecture, but why not at least add a comment about urine drug screening and monitoring or patient selection/segmentation when you’re talking about patients whom you’re considering discharging with an opioid prescription? Recommending acupuncture in a pain management context? Etc.

Anyway, below a few links to stuff related to the coverage:

Pain Management in the Emergency Department.
Oligoanalgesia.
WHO analgesic ladder.
Nonsteroidal anti-inflammatory drug.
Ketorolac.
Fentanyl (“This medication should not be used to treat pain other than chronic cancer pain, especially short-term pain such as migraines or other headaches, pain from an injury, or pain after a medical or dental procedure.” …to put it mildly, that’s not the impression you get from watching this lecture…)
Parenteral opioids in emergency medicine – A systematic review of efficacy and safety.
Procedural Sedation (medscape).

December 22, 2017 Posted by | Lectures, Medicine, Pharmacology | Leave a comment

Civil engineering (II)

Some more quotes and links:

“Major earthquakes occur every year in different parts of the world. The various continents that make up the surface of the Earth are moving slowly relative to each other. The rough boundaries between the tectonic plates try to resist this relative motion but eventually the energy stored in the interface (or geological fault) becomes too big to resist and slip occurs, releasing the energy. The energy travels as a wave through the crust of the Earth, shaking the ground as it passes. The speed at which the wave travels depends on the stiffness and density of the material through which it is passing. Topographic effects may concentrate the energy of the shaking. Mexico City sits on the bed of a former lake, surrounded by hills. Once the energy reaches this bowl-like location it becomes trapped and causes much more damage than would be experienced if the city were sitting on a flat plain without the surrounding mountains. Designing a building to withstand earthquake shaking is possible, provided we have some idea about the nature and magnitude and geological origin of the loadings. […] Heavy mud or tile roofs on flimsy timber walls are a disaster – the mass of the roof sways from side to side as it picks up energy from the shaking ground and, in collapsing, flattens the occupants. Provision of some diagonal bracing to prevent the structure from deforming when it is shaken can be straightforward. Shops like to have open spaces for ground floor display areas. There are often post-earthquake pictures of buildings which have lost a storey as this unbraced ground floor structure collapsed. […] Earthquakes in developing countries tend to attract particular coverage. The extent of the damage caused is high because the enforcement of design codes (if they exist) is poor. […] The majority of the damage in Haiti was the result of poor construction and the total lack of any building code requirements.”
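
The claim that wave speed depends on stiffness and density can be made concrete. A quick illustration (not from the book; the material properties below are merely assumed typical values): the shear-wave velocity is v = √(G/ρ), which shows how much slower shaking travels through soft lake sediments of the kind underlying Mexico City than through intact rock.

```python
import math

def shear_wave_speed(shear_modulus_pa, density_kg_m3):
    """Shear-wave velocity v_s = sqrt(G / rho)."""
    return math.sqrt(shear_modulus_pa / density_kg_m3)

# Illustrative (assumed) stiffness and density values, not from the book:
materials = {
    "soft lake sediment": (30e6, 1600),    # G ~ 30 MPa, rho ~ 1600 kg/m^3
    "stiff clay":         (300e6, 1900),
    "intact granite":     (30e9, 2700),    # G ~ 30 GPa
}

for name, (G, rho) in materials.items():
    print(f"{name:20s} v_s ~ {shear_wave_speed(G, rho):6.0f} m/s")
```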

“[A]n aircraft is a large structure, and the structural design is subject to the same laws of equilibrium and material behaviour as any structure which is destined never to leave the ground. […] The A380 is an enormous structure, some 25 m high, 73 m long and with a wingspan of about 80 m […]. For comparison, St Paul’s Cathedral in London is 73 m wide at the transept; and the top of the inner dome, visible from inside the cathedral, is about 65 m above the floor of the nave. […] The rules of structural mechanics that govern the design of aircraft structures are no different from those that govern the design of structures that are intended to remain on the ground. In the mid 20th century many aircraft and civil structural engineers would not have recognized any serious intellectual boundary between their activities. The aerodynamic design of an aircraft ensures smooth flow of air over the structure to reduce resistance and provide lift. Bridges in exposed places are not in need of lift but can benefit from reduced resistance to air flow resulting from the use of continuous hollow sections (box girders) rather than trusses to form the deck. The stresses can also flow more smoothly within the box, and the steel be used more efficiently. Testing of potential box girder shapes in wind tunnels helps to check the influence of the presence of the ground or water not far below the deck on the character of the wind flow.”

“Engineering is concerned with finding solutions to problems. The initial problems faced by the engineer relate to the identification of the set of functional criteria which truly govern the design and which will be generated by the client or the promoter of the project. […] The more forcefully the criteria are stated the less freedom the design engineer will have in the search for an appropriate solution. Design is the translation of ideas into achievement. […] The designer starts with (or has access to) a mental store of solutions previously adopted for related problems and then seeks to compromise as necessary in order to find the optimum solution satisfying multiple criteria. The design process will often involve iteration of concept and technology and the investigation of radically different solutions and may also require consultation with the client concerning the possibility of modification of some of the imposed functional criteria if the problem has been too tightly defined. […] The term technology is being used here to represent that knowledge and those techniques which will be necessary in order to realize the concept; recognizing that a concept which has no appreciation of the technologies available for construction may require the development of new technologies in order that it may be realized. Civil engineering design continues through the realization of the project by the constructor or contractor. […] The process of design extends to the eventual assessment of the performance of the completed project as perceived by the client or user (who may not have been party to the original problem definition).”

“An arch or vault curved only in one direction transmits loads by means of forces developed within the thickness of the structure which then push outwards at the boundaries. A shell structure is a generalization of such a vault which is curved in more than one direction. An intact eggshell is very stiff under any loading applied orthogonally (at right angles) to the shell. If the eggshell is broken it becomes very flexible and to stiffen it again restraint is required along the free edge to replace the missing shell. The techniques of prestressing concrete permit the creation of very exciting and daring shell structures with extraordinarily small thickness but the curvatures of the shells and the shapes of the edges dictate the support requirements.”

“In the 19th century it was quicker to travel from Rome to Ancona by sea round the southern tip of the boot of Italy (a distance of at least 2000 km) than to travel overland, a distance of some 200 km as the crow flies. Land-based means of transport require infrastructure that must be planned and constructed and then maintained. Even today water transport is used on a large scale for bulky or heavy items for which speed is not necessary.”

“High speed rail works well (economically) in areas such as Europe and Japan where there is adequate infrastructure in the destination cities for access to and from the railway stations. In parts of the world – such as much of the USA – where the distances are much greater, population densities lower, railway networks much less developed, and local transport in cities much less coordinated (and the motor car has dominated for far longer) the economic case for high speed rail is harder to make. The most successful schemes for high speed rail have involved construction of new routes with dedicated track for the high speed trains with straighter alignments, smoother curves, and gentler gradients than conventional railways – and consequent reduced scope for delays resulting from mixing of high speed and low speed trains on the same track”.

“The Millennium Bridge is a suspension bridge with a very low sag-to-span ratio which lends itself very readily to sideways oscillation. There are plenty of rather bouncy suspension footbridges around the world but the modes of vibration are predominantly those in the plane of the bridge, involving vertical movements. Modes which involve lateral movement and twisting of the deck are always there but being out-of-plane may be overlooked. The more flexible the bridge in any mode of deformation, the more movement there is when people walk across. There is a tendency for people to vary their pace to match the movements of the bridge. Such an involuntary feedback mechanism is guaranteed to lead to resonance of the structure and continued build-up of movements. There will usually be some structural limitation on the magnitude of the oscillations – as the geometry of the bridge changes so the natural frequency will change subtly – but it can still be a bit alarming for the user. […] The Millennium Bridge was stabilized (retrofitted) by the addition of restraining members and additional damping mechanisms to prevent growth of oscillation and to move the natural frequency of this mode of vibration away from the likely frequencies of human footfall. The revised design […] ensured that dynamic response would be acceptable for crowd loading up to two people per square metre. At this density walking becomes difficult so it is seen as a conservative criterion.”
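
The build-up of lateral movement described above is a resonance effect, and the standard single-degree-of-freedom result makes the mechanism easy to see. A minimal sketch (my own illustration, not the book's analysis; the natural frequency and damping ratio are assumed values): the steady-state response is amplified by a factor 1/√((1−r²)² + (2ζr)²), where r is the ratio of forcing frequency to natural frequency and ζ the damping ratio.

```python
import math

def amplification(f_forcing, f_natural, damping_ratio):
    """Steady-state dynamic amplification of a lightly damped single-degree-of-freedom oscillator."""
    r = f_forcing / f_natural
    return 1.0 / math.sqrt((1 - r**2) ** 2 + (2 * damping_ratio * r) ** 2)

f_n = 1.0      # assumed lateral natural frequency of the deck, Hz
zeta = 0.007   # assumed damping ratio (very lightly damped)

for f in (0.5, 0.8, 0.95, 1.0, 1.05, 1.2, 2.0):   # assumed pedestrian lateral forcing frequencies, Hz
    print(f"forcing {f:4.2f} Hz -> amplification x{amplification(f, f_n, zeta):6.1f}")
```

With light damping the amplification at r = 1 is roughly 1/(2ζ), which is why even modest pedestrian forcing near the natural frequency can produce large movements, and why the retrofit both added damping and shifted the natural frequency.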

“The development of appropriately safe systems requires that […] parallel control systems should be truly independent so that they are not likely to fail simultaneously. Robustness is thus about ensuring that safety can be maintained even when some elements of the system cease to operate. […] There is a human element in all systems, providing some overall control and an ability to react in critical circumstances. The human intervention is particularly important where all electronic or computer control systems are eliminated and the clock is ticking inexorably towards disaster. Although ultimately whenever a structural failure occurs there is some purely mechanical explanation – some element of the structure was overloaded because some mode of response had been overlooked – there is often a significant human factor which must be considered. We may think that we fully understand the mechanical operation, but may neglect to ensure that the human elements are properly controlled. A requirement for robustness implies both that the damage consequent on the removal of a single element of the structure or system should not be disproportionate (mechanical or structural robustness) but also that the project should not be jeopardized by human failure (organizational robustness). […] A successful civil engineering project is likely to have evident robustness in concept, technology, and realization. A concept which is unclear, a technology in its infancy, and components of realization which lack coherence will all contribute to potential disaster.”

“Tunnelling inevitably requires removal of ground from the face with a tendency for the ground above and ahead of the tunnel to fall into the gap. The success of the tunnelling operation can be expressed in terms of the volume loss: the proportion of the volume of the tunnel which is unintentionally excavated causing settlement at the ground surface – the smaller this figure the better. […] How can failure of the tunnel be avoided? One route to assurance will be to perform numerical analysis of the tunnel construction process with close simulations of all the stages of excavation and loading of the new structure. Computer analyses are popular because they appear simple to perform, even in three dimensions. However, such analyses can be no more reliable than the models of soil behaviour on which they are based and on the way in which the rugged detail of construction is translated into numerical instructions. […] Whatever one’s confidence in the numerical analysis it will obviously not be a bad idea to observe the tunnel while it is being constructed. Obvious things to observe include tunnel convergence – the change in the cross-section of the tunnel in different directions – and movements at the ground surface and existing buildings over the tunnel. […] observation is not of itself sufficient unless there is some structured strategy for dealing with the observations. At Heathrow […] the data were not interpreted until after the failure had occurred. It was then clear that significant and undesirable movements had been occurring and could have been detected at least two months before the failure.”
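
For the volume-loss idea, a common empirical approach (not spelled out in the book) is to assume the surface settlement trough above the tunnel takes a Gaussian shape whose area equals the volume lost per metre of tunnel. A minimal sketch, with all parameter values assumed purely for illustration:

```python
import math

def settlement_trough(volume_loss, diameter_m, depth_m, K=0.5, x_values=(0, 5, 10, 20)):
    """Gaussian surface settlement trough above a tunnel (common empirical approach).

    volume_loss: fraction of the tunnel cross-section unintentionally excavated (e.g. 0.01 for 1%)
    K: trough-width parameter (assumed; around 0.5 is often quoted for clays)
    """
    Vs = volume_loss * math.pi * diameter_m**2 / 4        # settlement volume per metre of tunnel
    i = K * depth_m                                       # trough width parameter
    s_max = Vs / (i * math.sqrt(2 * math.pi))             # maximum settlement above the tunnel axis
    return {x: s_max * math.exp(-x**2 / (2 * i**2)) for x in x_values}

# Assumed example: 6 m diameter tunnel, 20 m deep, 1.5% volume loss
for x, s in settlement_trough(0.015, 6.0, 20.0).items():
    print(f"{x:3d} m from axis: settlement ~ {1000 * s:5.1f} mm")
```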

“Fatigue is a term used to describe a failure which develops as a result of repeated loading – possibly over many thousands or millions of cycles. […] Fatigue cannot be avoided, and the rate of development of damage may not be easy to predict. It often requires careful techniques of inspection to identify the presence of incipient cracks which may eventually prove structurally devastating.”

“Some projects would clearly be regarded as failures – a dam bursts, a flood protection dyke is overtopped, a building or bridge collapses. In each case there is the possibility of a technical description of the processes leading to the failure – in the end the strength of the material in some location has been exceeded by the demands of the applied loads or the load carrying paths have been disrupted. But failure can also be financial or economic. Such failures are less evident: a project that costs considerably more than the original estimate has in some way failed to meet its expectations. A project that, once built, is quite unable to generate the revenue that was expected in order to justify the original capital outlay has also failed.”

1999 Jiji earthquake.
Taipei 101. Tuned mass damper.
Tacoma Narrows Bridge (1940). Brooklyn Bridge. Golden Gate Bridge.
Sydney Opera House. Jørn Utzon. Ove Arup. Christiani & Nielsen.
Bell Rock Lighthouse. Northern Lighthouse Board. Richard Henry Brunton.
Panama Canal. Culebra Cut. Gatun Lake. Panamax.
Great Western Railway.
Shinkansen. TGV.
Ronan Point.
New Austrian tunnelling method.
Crossrail.
Fukushima Daiichi nuclear disaster.
Turnkey project. Unit price contract.
Colin Buchanan.
Dongtan.

December 21, 2017 Posted by | Books, Economics, Engineering, Geology | Leave a comment

Civil engineering (I)

I have included some quotes from the first half of the book below, and some links related to the book’s coverage:

“Today, the term ‘civil engineering’ distinguishes the engineering of the provision of infrastructure from […] many other branches of engineering that have come into existence. It thus has a somewhat narrower scope now than it had in the 18th and early 19th centuries. There is a tendency to define it by exclusion: civil engineering is not mechanical engineering, not electrical engineering, not aeronautical engineering, not chemical engineering… […] Civil engineering today is seen as encompassing much of the infrastructure of modern society provided it does not move – roads, buildings, dams, tunnels, drains, airports (but not aeroplanes or air traffic control), railways (but not railway engines or signalling), power stations (but not turbines). The fuzzy definition of civil engineering as the engineering of infrastructure […] should make us recognize that there are no precise boundaries and that any practising engineer is likely to have to communicate across whatever boundaries appear to have been created. […] The boundary with science is also fuzzy. Engineering is concerned with the solution of problems now, and cannot necessarily wait for the underlying science to catch up. […] All engineering is concerned with finding solutions to problems for which there is rarely a single answer. Presented with an appropriate ‘solution-neutral problem definition’ the engineer needs to find ways of applying existing or emergent technologies to the solution of the problem.”

“[T]he behaviour of the soil or other materials that make up the ground in its natural state is rather important to engineers. However, although it can be guessed from exploratory probings and from knowledge of the local geological history, the exact nature of the ground can never be discovered before construction begins. By contrast, road embankments are formed of carefully prepared soils; and water-retaining dams may also be constructed from selected soils and rocks – these can be seen as ‘designer soils’. […] Soils are formed of mineral particles packed together with surrounding voids – the particles can never pack perfectly. […] The voids around the soil particles are filled with either air or water or a mixture of the two. In northern climes the ground is saturated with water for much of the time. For deformation of the soil to occur, any change in volume must be accompanied by movement of water through and out of the voids. Clay particles are small, the surrounding voids are small, and movement of water through these voids is slow – the permeability is said to be low. If a new load, such as a bridge deck or a tall building, is to be constructed, the ground will want to react to the new loads. A clayey soil will be unable to react instantly because of the low permeability and, as a result, there will be delayed deformations as the water is squeezed out of the clay ground and the clay slowly consolidates. The consolidation of a thick clay layer may take centuries to approach completion.”
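
The time scale of this delayed settlement is usually estimated from Terzaghi's one-dimensional consolidation theory, t ≈ T_v·H²/c_v, where H is the drainage path length and c_v the coefficient of consolidation. A minimal sketch (my own illustration; the book gives no formula, and the c_v value below is an assumed typical figure for a soft clay) showing why a thick clay layer can take centuries:

```python
def consolidation_time_years(T_v, drainage_path_m, c_v_m2_per_year):
    """Terzaghi one-dimensional consolidation: t = T_v * H^2 / c_v."""
    return T_v * drainage_path_m**2 / c_v_m2_per_year

# Assumed values: T_v ~ 0.848 for 90% consolidation; c_v ~ 1 m^2/year for a soft clay
for H in (1.0, 5.0, 20.0):   # drainage path length, m (half the layer thickness if drained top and bottom)
    t = consolidation_time_years(0.848, H, 1.0)
    print(f"drainage path {H:5.1f} m -> roughly {t:6.0f} years to 90% consolidation")
```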

“Rock (or stone) is a good construction material. Evidently there are different types of rock with different strengths and different abilities to resist the decay that is encouraged by sunshine, moisture, and frost, but rocks are generally strong, dimensionally stable materials: they do not shrink or twist with time. We might measure the strength of a type of rock in terms of the height of a column of that rock that will just cause the lowest layer of the rock to crush: on such a scale sandstone would have a strength of about 2 kilometres, good limestone about 4 kilometres. A solid pyramid 150 m high uses quite a small proportion of this available strength. […] Iron has been used for several millennia for elements such as bars and chain links which might be used in conjunction with other structural materials, particularly stone. Stone is very strong when compressed, or pushed, but not so strong in tension: when it is pulled cracks may open up. The provision of iron links between adjacent stone blocks can help to provide some tensile strength. […] Cast iron can be formed into many different shapes and is resistant to rust but is brittle – when it breaks it loses all its strength very suddenly. Wrought iron, a mixture of iron with a low proportion of carbon, is more ductile – it can be stretched without losing all its strength – and can be beaten or rolled (wrought) into simple shapes. Steel is a mixture of iron with a higher proportion of carbon than wrought iron and with other elements […] which provide particular mechanical benefits. Mild steel has a remarkable ductility – a tolerance of being stretched – which results from its chemical composition and which allows it to be rolled into sheets or extruded into chosen shapes without losing its strength and stiffness. There are limits on the ratio of the quantities of carbon and other elements to that of the iron itself in order to maintain these desirable properties for the mixture. […] Steel is very strong and stiff in tension or pulling: steel wire and steel cables are obviously very well suited for hanging loads.”
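
Expressing strength as the height of a self-crushing column is just σ = ρgh in disguise. A quick check of the quoted figures (my own arithmetic, with assumed typical rock densities):

```python
g = 9.81  # m/s^2

def crushing_stress_mpa(column_height_m, density_kg_m3):
    """Stress at the base of a self-supporting rock column: sigma = rho * g * h."""
    return density_kg_m3 * g * column_height_m / 1e6

# Column heights from the quote; densities are assumed typical values
print("sandstone (2 km column):", round(crushing_stress_mpa(2000, 2300)), "MPa")   # ~45 MPa
print("limestone (4 km column):", round(crushing_stress_mpa(4000, 2600)), "MPa")   # ~100 MPa
print("150 m limestone pyramid:", round(crushing_stress_mpa(150, 2600), 1), "MPa") # a small fraction of the above
```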

“As concrete sets, the chemical reactions that turn a sloppy mixture of cement and water and stones into a rock-like solid produce a lot of heat. If a large volume of concrete is poured without any special precautions then, as it cools down, having solidified, it will shrink and crack. The Hoover Dam was built as a series of separate concrete columns of limited dimension through which pipes carrying cooling water were passed in order to control the temperature rise. […] Concrete is mixed as a heavy fluid with no strength until it starts to set. Embedding bars of a material such as steel, which is strong in tension, in the fluid concrete gives some tensile strength. Reinforced concrete is used today for huge amounts of construction throughout the world. When the amount of steel present in the concrete is substantial, additives are used to encourage the fresh concrete to flow through intricate spaces and form a good bond with the steel. For the steel to start to resist tensile loads it has to stretch a little; if the concrete around the steel also stretches it may crack. The concrete has little reliable tensile strength and is intended to protect the steel. The concrete can be used more efficiently if the steel reinforcement, in the form of cables or rods, is tensioned, either before the concrete has set or after the concrete has set but before it starts to carry its eventual live loads. The concrete is forced into compression by the stretched steel. […] Such prestressed concrete gives amazing possibilities for very slender and daring structures […] the concrete must be able to withstand the tension in the steel, whether or not the full working loads are being applied. For an arch bridge made from prestressed concrete, the prestress from the steel cables tries to lift up the concrete and reduce the span whereas the traffic loads on the bridge are trying to push it down and increase the span. The location and amount of the prestress has to be chosen to provide the optimum use of the available strength under all possible load combinations. The pressure vessels used to contain the central reactor of a nuclear power station provide a typical example of the application of prestressed concrete.”

“There are many civil engineering contributions required in the several elements of [a] power station […]. The electricity generation side of a nuclear power station is subject to exactly the same design constraints as any other power station. Pipework leading the steam and water through the plant has to be able to cope with severe temperature variations, rotating machinery requires foundations which not only have to be precisely aligned but also have to be able to tolerate the high frequency vibrations arising from the rotations. Residual small out-of-balance forces, transmitted to the foundation continuously over long periods, could degrade the stiffness of the ground. Every system has its resonant frequency at which applied cyclic loads will tend to be amplified, possibly uncontrollably, unless prevented by the damping properties of the foundation materials. Even if the rotating machinery is being operated well away from any resonant frequency under normal conditions, there will be start-up periods in which the frequency sweeps up from stationary, zero frequency, and so an undesirable resonance may be triggered on the way”.

“The material which we see so often on modern road surfaces, […] asphalt […], was introduced in the early 20th century. Binding together the surface layers of stones with bitumen or tar gave the running surface a better strength. Tar is a viscous material which deforms with time under load; ruts may form, particularly in hot weather. Special treatments can be used for the asphalt to reduce the surface noise made by tyres; porous asphalt can encourage drainage. On the other hand, a running surface that is more resistant to traffic loading can be provided with a concrete slab reinforced with a crisscross steel mesh to maintain its integrity between deliberately inserted construction joints, so that any cracking resulting from seasonal thermal contraction occurs at locations chosen by the engineer rather than randomly across the concrete slab. The initial costs of concrete road surfaces are higher than the asphalt alternatives but the full-life costs may be lower.”

“A good supply of fresh water is one essential element of civilized infrastructure; some control of the waste water from houses and industries is another. The two are, of course, not completely independent since one of the desirable requirements of a source of fresh water is that it should not have been contaminated with waste before it reaches its destination of consumption: hence the preference for long aqueducts or pipelines starting from natural springs, rather than taking water from rivers which were probably already contaminated by upstream conurbations. It is curious how often in history this lesson has had to be relearnt.”

“The object of controlled disposal is the same as for nuclear waste: to contain it and prevent any of the toxic constituents from finding their way into the food chain or into water supplies. Simply to remove everything that could possibly be contaminated and dump it to landfill seems the easy option, particularly if use can be made of abandoned quarries or other holes in the ground. But the quantities involved make this an unsustainable long-term proposition. Cities become surrounded with artificial hills of uncertain composition which are challenging to develop for industrial or residential purposes because decomposing waste often releases gases which may be combustible (and useful) or poisonous; because waste often contains toxic substances which have to be prevented from finding pathways to man either upwards to the air or sideways towards water supplies; because the properties of waste (whether or not decomposed or decomposing) are not easy to determine and probably not particularly desirable from an engineering point of view; and because developers much prefer greenfield sites to sites of uncertainty and contamination.”

“There are regularly more or less serious floods in different parts of the world. Some of these are simply the result of unusually high quantities of rainfall which overload the natural river channels, often exacerbated by changes in land use (such as the felling of areas of forest) which encourage more rapid runoff or impose a man-made canalization of the river (by building on flood plains into which the rising river would previously have been able to spill) […]. Some of the incidents are the result of unusual encroachments by the sea, a consequence of a combination of high tide and adverse wind and weather conditions. The potential for disastrous consequences is of course enhanced when both on-shore and off-shore circumstances combine. […] Folk memory for natural disasters tends to be quite short. If the interval between events is typically greater than, say, 5–10 years people may assume that such events are extraordinary and rare. They may suppose that building on the recently flooded plains will be safe for the foreseeable future.”

Links:

Civil engineering.
École Nationale des Ponts et Chaussées.
Institution of Civil Engineers.
Christopher Wren. John Smeaton. Thomas Telford. William Rankine.
Leaning Tower of Pisa.
Cruck. Trabeated system. Corbel. Voussoir. Flange. I-beam.
Hardwick Hall. Blackfriars Bridge. Forth Bridge. Sydney Harbour Bridge.
Gothic architecture.
Buckling.
Pozzolana. Concrete. Grout.
Gravity dam. Arch dam. Hoover Dam. Malpasset Dam.
Torness Nuclear Power Station.
Plastic. Carbon fiber reinforced polymer.
Roman roads. Via Appia.
Sanitation.
Aqueduct. Pont du Gard.
Charles Yelverton O’Connor. Goldfields Water Supply Scheme.
1854 Broad Street cholera outbreak. John Snow. Great Stink of 1858. Joseph Bazalgette.
Brent Spar.
Clywedog Reservoir.
Acqua alta.
North Sea flood of 1953. Hurricane Katrina.
Delta Works. Oosterscheldekering. Thames Barrier.
Groyne. Breakwater.

December 20, 2017 Posted by | Books, Economics, Engineering, Geology | Leave a comment

The Periodic Table

“After evolving for nearly 150 years through the work of numerous individuals, the periodic table remains at the heart of the study of chemistry. This is mainly because it is of immense practical benefit for making predictions about all manner of chemical and physical properties of the elements and possibilities for bond formation. Instead of having to learn the properties of the 100 or more elements, the modern chemist, or the student of chemistry, can make effective predictions from knowing the properties of typical members of each of the eight main groups and those of the transition metals and rare earth elements.”

I wasn’t very impressed with this book, but it wasn’t terrible. It didn’t include much that I didn’t already know, and in my opinion it focused excessively on historical aspects. Some of those historical aspects were interesting, for example the problems that confronted chemists trying to make sense of how best to categorize the chemical elements in the late 19th century, before the discovery of the neutron (the number of protons in the nucleus is not the same thing as the atomic weight of an atom – which was highly relevant because: “when it came to deciding upon the most important criterion for classifying the elements, Mendeleev insisted that atomic weight ordering would tolerate no exceptions”). But I’d have liked to learn a lot more about e.g. the chemical properties of the subgroups, instead of just revisiting stuff I’d learned earlier in other publications in the series. I assume people who are new to chemistry – or who have forgotten a lot and would like to rectify this – might feel differently about the book and the way it covers the material. Even so, I don’t think this is one of the best publications in the physics/chemistry categories of this OUP series.

Some quotes and links below.

“Lavoisier held that an element should be defined as a material substance that has yet to be broken down into any more fundamental components. In 1789, Lavoisier published a list of 33 simple substances, or elements, according to this empirical criterion. […] the discovery of electricity enabled chemists to isolate many of the more reactive elements, which, unlike copper and iron, could not be obtained by heating their ores with charcoal (carbon). There have been a number of major episodes in the history of chemistry when half a dozen or so elements were discovered within a period of a few years. […] Following the discovery of radioactivity and nuclear fission, yet more elements were discovered. […] Today, we recognize about 90 naturally occurring elements. Moreover, an additional 25 or so elements have been artificially synthesized.”

“Chemical analogies between elements in the same group are […] of great interest in the field of medicine. For example, the element beryllium sits at the top of group 2 of the periodic table and above magnesium. Because of the similarity between these two elements, beryllium can replace the element magnesium that is essential to human beings. This behaviour accounts for one of the many ways in which beryllium is toxic to humans. Similarly, the element cadmium lies directly below zinc in the periodic table, with the result that cadmium can replace zinc in many vital enzymes. Similarities can also occur between elements lying in adjacent positions in rows of the periodic table. For example, platinum lies next to gold. It has long been known that an inorganic compound of platinum called cis-platin can cure various forms of cancer. As a result, many drugs have been developed in which gold atoms are made to take the place of platinum, and this has produced some successful new drugs. […] [R]ubidium […] lies directly below potassium in group 1 of the table. […] atoms of rubidium can mimic those of potassium, and so like potassium can easily be absorbed into the human body. This behaviour is exploited in monitoring techniques, since rubidium is attracted to cancers, especially those occurring in the brain.”

“Each horizontal row represents a single period of the table. On crossing a period, one passes from metals such as potassium and calcium on the left, through transition metals such as iron, cobalt, and nickel, then through some semi-metallic elements like germanium, and on to some non-metals such as arsenic, selenium, and bromine, on the right side of the table. In general, there is a smooth gradation in chemical and physical properties as a period is crossed, but exceptions to this general rule abound […] Metals themselves can vary from soft dull solids […] to hard shiny substances […]. Non-metals, on the other hand, tend to be solids or gases, such as carbon and oxygen respectively. In terms of their appearance, it is sometimes difficult to distinguish between solid metals and solid non-metals. […] The periodic trend from metals to non-metals is repeated with each period, so that when the rows are stacked, they form columns, or groups, of similar elements. Elements within a single group tend to share many important physical and chemical properties, although there are many exceptions.”

“There have been quite literally over 1,000 periodic tables published in print […] One of the ways of classifying the periodic tables that have been published is to consider three basic formats. First of all, there are the originally produced short-form tables published by the pioneers of the periodic table like Newlands, Lothar Meyer, and Mendeleev […] These tables essentially crammed all the then known elements into eight vertical columns or groups. […] As more information was gathered on the properties of the elements, and as more elements were discovered, a new kind of arrangement called the medium-long-form table […] began to gain prominence. Today, this form is almost completely ubiquitous. One odd feature is that the main body of the table does not contain all the elements. […] The ‘missing’ elements are grouped together in what looks like a separate footnote that lies below the main table. This act of separating off the rare earth elements, as they have traditionally been called, is performed purely for convenience. If it were not carried out, the periodic table would appear much wider, 32 elements wide to be precise, instead of 18 elements wide. The 32-wide element format does not lend itself readily to being reproduced on the inside cover of chemistry textbooks or on large wall-charts […] if the elements are shown in this expanded form, as they sometimes are, one has the long-form periodic table, which may be said to be more correct than the familiar medium-long form, in the sense that the sequence of elements is unbroken […] there are many forms of the periodic table, some designed for different uses. Whereas a chemist might favour a form that highlights the reactivity of the elements, an electrical engineer might wish to focus on similarities and patterns in electrical conductivities.”

“The periodic law states that after certain regular but varying intervals, the chemical elements show an approximate repetition in their properties. […] This periodic repetition of properties is the essential fact that underlies all aspects of the periodic system. […] The varying length of the periods of elements and the approximate nature of the repetition has caused some chemists to abandon the term ‘law’ in connection with chemical periodicity. Chemical periodicity may not seem as law-like as most laws of physics. […] A modern periodic table is much more than a collection of groups of elements showing similar chemical properties. In addition to what may be called ‘vertical relationships’, which embody triads of elements, a modern periodic table connects together groups of elements into an orderly sequence. A periodic table consists of a horizontal dimension, containing dissimilar elements, as well as a vertical dimension with similar elements.”

“[I]n modern terms, metals form positive ions by the loss of electrons, while non-metals gain electrons to form negative ions. Such oppositely charged ions combine together to form neutrally charged salts like sodium chloride or calcium bromide. There are further complementary aspects of metals and non-metals. Metal oxides or hydroxides dissolve in water to form bases, while non-metal oxides or hydroxides dissolve in water to form acids. An acid and a base react together in a ‘neutralization’ reaction to form a salt and water. Bases and acids, just like metals and non-metals from which they are formed, are also opposite but complementary.”

“[T]he law of constant proportion, [is] the fact that when two elements combine together, they do so in a constant ratio of their weights. […] The fact that macroscopic samples consist of a fixed ratio by weight of two elements reflects the fact that two particular atoms are combining many times over and, since they have particular masses, the product will also reflect that mass ratio. […] the law of multiple proportions [refers to the fact that] [w]hen one element A combines with another one, B, to form more than one compound, there is a simple ratio between the combining masses of B in the two compounds. For example, carbon and oxygen combine together to form carbon monoxide and carbon dioxide. The weight of combined oxygen in the dioxide is twice as much as the weight of combined oxygen in the monoxide.”
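
The carbon example can be checked directly from modern atomic weights (a small verification of my own, not from the book):

```python
ATOMIC_WEIGHT = {"C": 12.011, "O": 15.999}

def oxygen_per_gram_carbon(n_oxygen_atoms):
    """Mass of combined oxygen per gram of carbon in a compound with n oxygen atoms per carbon."""
    return n_oxygen_atoms * ATOMIC_WEIGHT["O"] / ATOMIC_WEIGHT["C"]

co  = oxygen_per_gram_carbon(1)   # carbon monoxide, CO
co2 = oxygen_per_gram_carbon(2)   # carbon dioxide, CO2
print(f"CO : {co:.3f} g O per g C")
print(f"CO2: {co2:.3f} g O per g C")
print(f"ratio CO2/CO = {co2 / co:.1f}")   # the 'simple ratio' of the law of multiple proportions: 2
```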

“One of his greatest triumphs, and perhaps the one that he is best remembered for, is Mendeleev’s correct prediction of the existence of several new elements. In addition, he corrected the atomic weights of some elements as well as relocating other elements to new positions within the periodic table. […] But not all of Mendeleev’s predictions were so dramatically successful, a feature that seems to be omitted from most popular accounts of the history of the periodic table. […] he was unsuccessful in as many as nine out of his eighteen published predictions […] some of the elements involved the rare earths which resemble each other very closely and which posed a major challenge to the periodic table for many years to come. […] The discovery of the inert gases at the end of the 19th century [also] represented an interesting challenge to the periodic system […] in spite of Mendeleev’s dramatic predictions of many other elements, he completely failed to predict this entire group of elements (He, Ne, Ar, Kr, Xe, Rn). Moreover, nobody else predicted these elements or even suspected their existence. The first of them to be isolated was argon, in 1894 […] Mendeleev […] could not accept the notion that elements could be converted into different ones. In fact, after the Curies began to report experiments that suggested the breaking up of atoms, Mendeleev travelled to Paris to see the evidence for himself, close to the end of his life. It is not clear whether he accepted this radical new notion even after his visit to the Curie laboratory.”

“While chemists had been using atomic weights to order the elements there had been a great deal of uncertainty about just how many elements remained to be discovered. This was due to the irregular gaps that occurred between the values of the atomic weights of successive elements in the periodic table. This complication disappeared when the switch was made to using atomic number. Now the gaps between successive elements became perfectly regular, namely one unit of atomic number. […] The discovery of isotopes […] came about partly as a matter of necessity. The new developments in atomic physics led to the discovery of a number of new elements such as Ra, Po, Rn, and Ac which easily assumed their rightful places in the periodic table. But in addition, 30 or so more apparent new elements were discovered over a short period of time. These new species were given provisional names like thorium emanation, radium emanation, actinium X, uranium X, thorium X, and so on, to indicate the elements which seemed to be producing them. […] To Soddy, the chemical inseparability [of such elements] meant only one thing, namely that these were two forms, or more, of the same chemical element. In 1913, he coined the term ‘isotopes’ to signify two or more atoms of the same element which were chemically completely inseparable, but which had different atomic weights.”

“The popular view reinforced in most textbooks is that chemistry is nothing but physics ‘deep down’ and that all chemical phenomena, and especially the periodic system, can be developed on the basis of quantum mechanics. […] This is important because chemistry books, especially textbooks aimed at teaching, tend to give the impression that our current explanation of the periodic system is essentially complete. This is just not the case […] the energies of the quantum states for any many-electron atom can be approximately calculated from first principles although there is extremely good agreement with observed energy values. Nevertheless, some global aspects of the periodic table have still not been derived from first principles to this day. […] We know where the periods close because we know that the noble gases occur at elements 2, 10, 18, 36, 54, etc. Similarly, we have a knowledge of the order of orbital filling from observations but not from theory. The conclusion, seldom acknowledged in textbook accounts of the explanation of the periodic table, is that quantum physics only partly explains the periodic table. Nobody has yet deduced the order of orbital filling from the principles of quantum mechanics. […] The situation that exists today is that chemistry, and in particular the periodic table, is regarded as being fully explained by quantum mechanics. Even though this is not quite the case, the explanatory role that the theory continues to play is quite undeniable. But what seems to be forgotten […] is that the periodic table led to the development of many aspects of modern quantum mechanics, and so it is rather short-sighted to insist that only the latter explains the former.”
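
The empirically known ‘order of orbital filling’ referred to here is usually summarized by the Madelung (n + l) rule, which also appears in the links below. A small sketch (my own illustration) that generates the filling order from that rule and prints the cumulative electron counts at which each p subshell completes, reproducing the noble-gas numbers 2, 10, 18, 36, 54 (and 86, 118) that the text says are known from observation rather than derived from first principles:

```python
# Subshells (n, l) for n up to 7, ordered by the Madelung rule:
# increasing n + l, ties broken by increasing n.
subshells = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                   key=lambda nl: (nl[0] + nl[1], nl[0]))

total = 0
closures = []
for n, l in subshells:
    total += 2 * (2 * l + 1)            # electrons the subshell can hold
    if (n, l) == (1, 0) or l == 1:      # He after 1s; the other noble gases after each filled np subshell
        closures.append(total)

print(closures)   # -> [2, 10, 18, 36, 54, 86, 118]
```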

“[N]uclei with an odd number of protons are invariably more unstable than those with an even number of protons. This difference in stability occurs because protons, like electrons, have a spin of one half and enter into energy orbitals, two by two, with opposite spins. It follows that even numbers of protons frequently produce total spins of zero and hence more stable nuclei than those with unpaired proton spins as occurs in nuclei with odd numbers of protons […] The larger the nuclear charge, the faster the motion of inner shell electrons. As a consequence of gaining relativistic speeds, such inner electrons are drawn closer to the nucleus, and this in turn has the effect of causing greater screening on the outermost electrons which determine the chemical properties of any particular element. It has been predicted that some atoms should behave chemically in a manner that is unexpected from their presumed positions in the periodic table. Relativistic effects thus pose the latest challenge to test the universality of the periodic table. […] The conclusion [however] seem to be that chemical periodicity is a remarkably robust phenomenon.”

Some links:

Periodic table.
History of the periodic table.
IUPAC.
Jöns Jacob Berzelius.
Valence (chemistry).
Equivalent weight. Atomic weight. Atomic number.
Rare-earth element. Transuranium element. Glenn T. Seaborg. Island of stability.
Old quantum theory. Quantum mechanics. Electron configuration.
Benjamin Richter. John Dalton. Joseph Louis Gay-Lussac. Amedeo Avogadro. Leopold Gmelin. Alexandre-Émile Béguyer de Chancourtois. John Newlands. Gustavus Detlef Hinrichs. Julius Lothar Meyer. Dmitri Mendeleev. Henry Moseley. Antonius van den Broek.
Diatomic molecule.
Prout’s hypothesis.
Döbereiner’s triads.
Karlsruhe Congress.
Noble gas.
Einstein’s theory of Brownian motion. Jean Baptiste Perrin.
Quantum number. Molecular orbitals. Madelung energy ordering rule.
Gilbert N. Lewis. (“G. N. Lewis is possibly the most significant chemist of the 20th century not to have been awarded a Nobel Prize.”) Irving Langmuir. Niels Bohr. Erwin Schrödinger.
Ionization energy.
Synthetic element.
Alternative periodic tables.
Group 3 element.

December 18, 2017 Posted by | Books, Chemistry, Medicine, Physics | Leave a comment

Nuclear Power (II)

This is my second and last post about the book. Some more links and quotes below.

“Many of the currently operating reactors were built in the late 1960s and 1970s. With a global hiatus on nuclear reactor construction following the Three Mile Island incident and the Chernobyl disaster, there is a dearth of nuclear power replacement capacity as the present fleet faces decommissioning. Nuclear power stations, like coal-, gas-, and oil-fired stations, produce heat to generate electricity and all require water for cooling. The US Geological Survey estimates that this use of water for cooling power stations accounts for over 3% of all water consumption. Most nuclear power plants are built close to the sea so that the ocean can be used as a heat dump. […] The need for such large quantities of water inhibits the use of nuclear power in arid regions of the world. […] The higher the operating temperature, the greater the water usage. […] [L]arge coal, gas and nuclear plants […] can consume millions of litres per hour”.

“A nuclear reactor is utilizing the strength of the force between nucleons while hydrocarbon burning is relying on the chemical bonding between molecules. Since the nuclear bonding is of the order of a million times stronger than the chemical bonding, the mass of hydrocarbon fuel necessary to produce a given amount of energy is about a million times greater than the equivalent mass of nuclear fuel. Thus, while a coal station might burn millions of tonnes of coal per year, a nuclear station with the same power output might consume a few tonnes.”
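
The ‘million times’ factor is easy to sanity-check with a back-of-the-envelope calculation (my own numbers: roughly 200 MeV released per U-235 fission and an assumed calorific value of about 24 MJ/kg for coal):

```python
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

energy_per_fission_J = 200 * MEV_TO_J     # ~200 MeV per U-235 fission (typical value)
fissions_per_kg = AVOGADRO / 0.235        # atoms in 1 kg of U-235 (molar mass ~235 g/mol)
fission_J_per_kg = energy_per_fission_J * fissions_per_kg

coal_J_per_kg = 24e6                      # assumed calorific value of coal, ~24 MJ/kg

print(f"U-235 fission:   {fission_J_per_kg:.1e} J/kg")
print(f"coal combustion: {coal_J_per_kg:.1e} J/kg")
# The ratio comes out at a few million to one, consistent with the book's
# 'about a million times' as an order-of-magnitude statement.
print(f"ratio: ~{fission_J_per_kg / coal_J_per_kg:,.0f} to 1")
```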

“There are a number of reasons why one might wish to reprocess the spent nuclear fuel. These include: to produce plutonium either for nuclear weapons or, increasingly, as a fuel-component for fast reactors; the recycling of all actinides for fast-breeder reactors, closing the nuclear fuel cycle, greatly increasing the energy extracted from natural uranium; the recycling of plutonium in order to produce mixed oxide fuels for thermal reactors; recovering enriched uranium from spent fuel to be recycled through thermal reactors; to extract expensive isotopes which are of value to medicine, agriculture, and industry. An integral part of this process is the management of the radioactive waste. Currently 40% of all nuclear fuel is obtained by reprocessing. […] The La Hague site is the largest reprocessing site in the world, with over half the global capacity at 1,700 tonnes of spent fuel per year. […] The world’s largest user of nuclear power, the USA, currently does not reprocess its fuel and hence produces [large] quantities of radioactive waste. […] The principal reprocessors of radioactive waste are France and the UK. Both countries receive material from other countries and after reprocessing return the raffinate to the country of origin for final disposition.”

“Nearly 45,000 tonnes of uranium are mined annually. More than half comes from the three largest producers, Canada, Kazakhstan, and Australia.”

“The designs of nuclear installations are required to be passed by national nuclear licensing agencies. These include strict safety and security features. The international standard for the integrity of a nuclear power plant is that it would withstand the crash of a Boeing 747 Jumbo Jet without the release of hazardous radiation beyond the site boundary. […] At Fukushima, the design was to current safety standards, taking into account the possibility of a severe earthquake; what had not been allowed for was the simultaneous tsunami strike.”

“The costing of nuclear power is notoriously controversial. Opponents point to the past large investments made in nuclear research and would like to factor this into the cost. There are always arguments about whether or not decommissioning costs and waste-management costs have been properly accounted for. […] which electricity source is most economical will vary from country to country […]. As with all industrial processes, there can be economies of scale. In the USA, and particularly in the UK, these economies of scale were never fully realized. In the UK, while several Magnox and AGR reactors were built, no two were of exactly the same design, resulting in no economies in construction costs, component manufacture, or staff training programmes. The issue is compounded by the high cost of licensing new designs. […] in France, the Regulatory Commission agreed a standard design for all plants and used a safety engineering process similar to that used for licensing aircraft. Public debate was thereafter restricted to local site issues. Economies of scale were achieved.”

“[C]onstruction costs […] are the largest single factor in the cost of nuclear electricity generation. […] Because the raw fuel is such a small fraction of the cost of nuclear power generation, the cost of electricity is not very sensitive to the cost of uranium, unlike the fossil fuels, for which fuel can represent up to 70% of the cost. Operating costs for nuclear plants have fallen dramatically as the French practice of standardization of design has spread. […] Generation III+ reactors are claimed to be half the size and capable of being built in much shorter times than the traditional PWRs. The 2008 contracted capital cost of building new plants containing two AP1000 reactors in the USA is around $10–$14billion, […] There is considerable experience of decommissioning of nuclear plants. In the USA, the cost of decommissioning a power plant is approximately $350 million. […] In France and Sweden, decommissioning costs are estimated to be 10–15% of construction costs and are included in the price charged for electricity. […] The UK has by far the highest estimates for decommissioning which are set at £1 billion per reactor. This exceptionally high figure is in part due to the much larger reactor core associated with graphite moderated piles. […] It is clear that in many countries nuclear-generated electricity is commercially competitive with fossil fuels despite the need to include the cost of capital and all waste disposal and decommissioning (factors that are not normally included for other fuels). […] At the present time, without the market of taxes and grants, electricity generated from renewable sources is generally more expensive than that from nuclear power or fossil fuels. This leaves the question: if nuclear power is so competitive, why is there not a global rush to build new nuclear power stations? The answer lies in the time taken to recoup investments. Investors in a new gas-fired power station can expect to recover their investment within 15 years. Because of the high capital start-up costs, nuclear power stations yield a slower rate of return, even though over the lifetime of the plant the return may be greater.”
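
The point about the time taken to recoup investments can be illustrated with a deliberately crude cash-flow sketch. All the figures below are invented for illustration (only loosely anchored to the quoted AP1000 capital cost), and the calculation ignores discounting, financing costs, and price variation; the aim is only to show why a capital-heavy plant pays back more slowly than a fuel-heavy one even when its running costs are lower:

```python
def payback_year(capital, annual_revenue, annual_fuel_and_om):
    """First year in which cumulative net cash flow turns non-negative (no discounting)."""
    cumulative = -capital
    year = 0
    while cumulative < 0:
        year += 1
        cumulative += annual_revenue - annual_fuel_and_om
    return year

# Assumed, illustrative figures in billions of dollars:
# nuclear: high capital, low fuel/O&M; gas: low capital, high fuel/O&M.
print("nuclear payback:", payback_year(capital=12.0, annual_revenue=1.05, annual_fuel_and_om=0.45), "years")
print("gas payback:    ", payback_year(capital=1.0, annual_revenue=0.47, annual_fuel_and_om=0.40), "years")
```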

“Throughout the 20th century, the population and GDP growth combined to drive the [global] demand for energy to increase at a rate of 4% per annum […]. The most conservative estimate is that the demand for energy will see global energy requirements double between 2000 and 2050. […] The demand for electricity is growing at twice the rate of the demand for energy. […] More than two-thirds of all electricity is generated by burning fossil fuels. […] The most rapidly growing renewable source of electricity generation is wind power […] wind is an intermittent source of electricity. […] The intermittency of wind power leads to [a] problem. The grid management has to supply a steady flow of electricity. Intermittency requires a heavy overhead on grid management, and there are serious concerns about the ability of national grids to cope with more than a 20% contribution from wind power. […] As for the other renewables, solar and geothermal power, significant electricity generation will be restricted to latitudes 40°S to 40°N and regions of suitable geological structures, respectively. Solar power and geothermal power are expected to increase but will remain a small fraction of the total electricity supply. […] In most industrialized nations, the current electricity supply is via a regional, national, or international grid. The electricity is generated in large (~1GW) power stations. This is a highly efficient means of electricity generation and distribution. If the renewable sources of electricity generation are to become significant, then a major restructuring of the distribution infrastructure will be necessary. While local ‘microgeneration’ can have significant benefits for small communities, it is not practical for the large-scale needs of big industrial cities in which most of the world’s population live.”
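
As a quick consistency check on these growth figures (my own arithmetic): the doubling time at a constant annual growth rate r is ln 2 / ln(1 + r), so 4% a year doubles demand in under 18 years, whereas a doubling between 2000 and 2050 corresponds to an average growth rate of roughly 1.4% a year.

```python
import math

def doubling_time_years(annual_growth_rate):
    """Years needed to double under constant compound growth."""
    return math.log(2) / math.log(1 + annual_growth_rate)

def growth_rate_for_doubling(years):
    """Constant annual growth rate that doubles the quantity over the given number of years."""
    return 2 ** (1 / years) - 1

print(f"4% per annum -> doubling in {doubling_time_years(0.04):.1f} years")               # ~17.7 years
print(f"doubling over 50 years -> {100 * growth_rate_for_doubling(50):.2f}% per annum")   # ~1.40%
```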

“Electricity cannot be stored in large quantities. If the installed generating capacity is designed to meet peak demand, there will be periods when the full capacity is not required. In most industrial countries, the average demand is only about one-third of peak consumption.”

Links:

Nuclear reprocessing. La Hague site. Radioactive waste. Yucca Mountain nuclear waste repository.
Bismuth phosphate process.
Nuclear decommissioning.
Uranium mining. Open-pit mining.
Wigner effect (Wigner heating). Windscale fire. Three Mile Island accident. Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Fail-safe (engineering).
Treaty on the Non-Proliferation of Nuclear Weapons.
Economics of nuclear power plants.
Fusion power. Tokamak. ITER. High Power laser Energy Research facility (HiPER).
Properties of plasma.
Klystron.
World energy consumption by fuel source. Renewable energy.


December 16, 2017 Posted by | Books, Chemistry, Economics, Engineering, Physics | Leave a comment

Occupational Epidemiology (III)

This will be my last post about the book.

Some observations from the final chapters:

“Often there is confusion about the difference between systematic reviews and meta-analyses. A meta-analysis is a quantitative synthesis of two or more studies […] A systematic review is a synthesis of evidence on the effects of an intervention or an exposure which may also include a meta-analysis, but this is not a prerequisite. It may be that the results of the studies which have been included in a systematic review are reported in such a way that it is impossible to synthesize them quantitatively. They can then be reported in a narrative manner.10 However, a meta-analysis always requires a systematic review of the literature. […] There is a long history of debate about the value of meta-analysis for occupational cohort studies or other occupational aetiological studies. In 1994, Shapiro argued that ‘meta-analysis of published non-experimental data should be abandoned’. He reasoned that ‘relative risks of low magnitude (say, less than 2) are virtually beyond the resolving power of the epidemiological microscope because we can seldom demonstrably eliminate all sources of bias’.13 Because the pooling of studies in a meta-analysis increases statistical power, the pooled estimate may easily become significant and thus incorrectly taken as an indication of causality, even though the biases in the included studies may not have been taken into account. Others have argued that the method of meta-analysis is important but should be applied appropriately, taking into account the biases in individual studies.14 […] We believe that the synthesis of aetiological studies should be based on the same general principles as for intervention studies, and the existing methods adapted to the particular challenges of cohort and case-control studies. […] Since 2004, there is a special entity, the Cochrane Occupational Safety and Health Review Group, that is responsible for the preparing and updating of reviews of occupational safety and health interventions […]. There were over 100 systematic reviews on these topics in the Cochrane Library in 2012.”
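
To make the point about pooling and statistical power concrete, here is a minimal fixed-effect, inverse-variance pooling sketch (my own illustration with invented study results; the book does not give this calculation). None of the individual studies below is ‘significant’ on its own, but the pooled estimate is, which is exactly the situation Shapiro warned about if the individual studies carry unaddressed bias:

```python
import math

def pool_fixed_effect(studies):
    """Fixed-effect, inverse-variance pooling of (log relative risk, standard error) pairs."""
    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * log_rr for (log_rr, _), w in zip(studies, weights)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Invented studies: (log relative risk, standard error).
# Each study's own 95% CI crosses RR = 1, i.e. none is 'significant' in isolation.
studies = [(math.log(rr), 0.25) for rr in (1.2, 1.25, 1.3, 1.35, 1.4)]

pooled, se = pool_fixed_effect(studies)
low, high = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"pooled RR = {math.exp(pooled):.2f}, 95% CI {low:.2f} to {high:.2f}")  # the pooled CI excludes 1
```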

“The believability of a systematic review’s results depends largely on the quality of the included studies. Therefore, assessing and reporting on the quality of the included studies is important. For intervention studies, randomized trials are regarded as of higher quality than observational studies, and the conduct of the study (e.g. in terms of response rate or completeness of follow-up) also influences quality. A conclusion derived from a few high-quality studies will be more reliable than when the conclusion is based on even a large number of low-quality studies. Some form of quality assessment is nowadays commonplace in intervention reviews but is still often missing in reviews of aetiological studies. […] It is tempting to use quality scores, such as the Jadad scale for RCTs34 and the Downs and Black scale for non-RCT intervention studies35 but these, in their original format, are insensitive to variation in the importance of risk areas for a given research question. The score system may give the same value to two studies (say, 10 out of 12) when one, for example, lacked blinding and the other did not randomize, thus implying that their quality is equal. This would not be a problem if randomization and blinding were equally important for all questions in all reviews, but this is not the case. For RCTs an important development in this regard has been the Cochrane risk of bias tool.36 This is a checklist of six important domains that have been shown to be important areas of bias in RCTs: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting.”

“[R]isks of bias tools developed for intervention studies cannot be used for reviews of aetiological studies without relevant modification. This is because, unlike interventions, exposures are usually more complicated to assess when we want to attribute the outcome to them alone. These scales do not cover all items that may need assessment in an aetiological study, such as confounding and information bias relating to exposures. […] Surprisingly little methodological work has been done to develop validated tools for aetiological epidemiology and most tools in use are not validated,38 […] Two separate checklists, for observational studies of incidence and prevalence and for risk factor assessment, have been developed and validated recently.40 […] Publication and other reporting bias is probably a much bigger issue for aetiological studies than for intervention studies. This is because, for clinical trials, the introduction of protocol registration, coupled with the regulatory system for new medications, has helped in assessing and preventing publication and reporting bias. No such checks exist for observational studies.”

“Most ill health that arises from occupational exposures can also arise from non-occupational exposures, and the same type of exposure can occur in occupational and non-occupational settings. With the exception of malignant mesothelioma (which is essentially only caused by exposure to asbestos), there is no way to determine which exposure caused a particular disorder, nor where the causative exposure occurred. This means that usually it is not possible to determine the burden just by counting the number of cases. Instead, approaches to estimating this burden have been developed. There are also several ways to define burden and how best to measure it.”

“The population attributable fraction (PAF) is the proportion of cases that would not have occurred in the absence of an occupational exposure. It can be estimated by combining two measures — a risk estimate (usually relative risk (RR) or odds ratio) of the disorder of interest that is associated with exposure to the substance of concern; and an estimate of the proportion of the population exposed to the substance at work (p(E)). This approach has been used in several studies, particularly for estimating cancer burden […] There are several possible equations that can be used to calculate the PAF, depending on the available data […] PAFs cannot in general be combined by summing directly because: (1) summing PAFs for overlapping exposures (i.e. agents to which the same ‘ever exposed’ workers may have been exposed) may give an overall PAF exceeding 100%, and (2) summing disjoint (not concurrently occurring) exposures also introduces upward bias. Strategies to avoid this include partitioning exposed numbers between overlapping exposures […] or estimating only for the ‘dominant’ carcinogen with the highest risk. Where multiple exposures remain, one approach is to assume that the exposures are independent and their joint effects are multiplicative. The PAFs can then be combined to give an overall PAF for that cancer using a product sum. […] Potential sources of bias for PAFs include inappropriate choice of risk estimates, imprecision in the risk estimates and estimates of proportions exposed, inaccurate risk exposure period and latency assumptions, and a lack of separate risk estimates in some cases for women and/or cancer incidence. In addition, a key decision is the choice of which diseases and exposures are to be included.”
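
One common form of the PAF calculation is Levin’s formula, PAF = p(E)(RR − 1) / [1 + p(E)(RR − 1)], and the ‘product sum’ combination mentioned in the quote corresponds to 1 − Π(1 − PAFi) under the assumption of independent exposures with multiplicative joint effects. A small sketch of both steps (my own, with invented prevalences and relative risks):

```python
# Levin's formula: PAF = p(E)*(RR - 1) / (1 + p(E)*(RR - 1)).
def paf(p_exposed, rr):
    return p_exposed * (rr - 1) / (1 + p_exposed * (rr - 1))

# Hypothetical exposures contributing to the same cancer site:
# (proportion of the population ever exposed, relative risk).
exposures = {
    "agent 1": (0.10, 1.5),
    "agent 2": (0.02, 3.0),
    "agent 3": (0.25, 1.1),
}

pafs = {name: paf(p, rr) for name, (p, rr) in exposures.items()}
for name, value in pafs.items():
    print(f"{name}: PAF = {value:.1%}")

# Summing PAFs directly can exceed 100% for overlapping exposures; assuming
# independent exposures with multiplicative joint effects, combine instead as
# PAF_total = 1 - prod(1 - PAF_i).
combined = 1.0
for value in pafs.values():
    combined *= 1 - value
print(f"Combined PAF = {1 - combined:.1%} (naive sum: {sum(pafs.values()):.1%})")
```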

“The British Cancer Burden study is perhaps the most detailed study of occupationally related cancers in that it includes all those relevant carcinogens classified at the end of 2008 […] In the British study the attributable fractions ranged from less than 0.01% to 95% overall, the most important cancer sites for occupational attribution being, for men, mesothelioma (97%), sinonasal (46%), lung (21.1%), bladder (7.1%), and non-melanoma skin cancer (7.1%) and, for women, mesothelioma (83%), sinonasal (20.1%), lung (5.3%), breast (4.6%), and nasopharynx (2.5%). Occupation also contributed 2% or more overall to cancers of the larynx, oesophagus, and stomach, and soft tissue sarcoma with, in addition for men, melanoma of the eye (due to welding), and non-Hodgkin lymphoma. […] The overall results from the occupational risk factors component of the Global Burden of Disease 2010 study illustrate several important aspects of burden studies.14 Of the estimated 850 000 occupationally related deaths worldwide, the top three causes were: (1) injuries (just over a half of all deaths); (2) particulate matter, gases, and fumes leading to COPD; and (3) carcinogens. When DALYs were used as the burden measure, injuries still accounted for the highest proportion (just over one-third), but ergonomic factors leading to low back pain resulted in almost as many DALYs, and both were almost an order of magnitude higher than the DALYs from carcinogens. The difference in relative contributions of the various risk factors between deaths and DALYs arises because of the varying ages of those affected, and the differing chronicity of the resulting conditions. Both measures are valid, but they represent a different aspect of the burden arising from the hazardous exposures […]. Both the British and Global Burden of Disease studies draw attention to the important issues of: (1) multiple occupational carcinogens causing specific types of cancer, for example, the British study evaluated 21 lung carcinogens; and (2) specific carcinogens causing several different cancers, for example, IARC now defines asbestos as a group 1 or 2A carcinogen for seven cancer sites. These issues require careful consideration for burden estimation and for prioritizing risk reduction strategies. […] The long latency of many cancers means that estimates of current burden are based on exposures occurring in the past, often much higher than those existing today. […] long latency [also] means that risk reduction measures taken now will take a considerable time to be reflected in reduced disease incidence.”

“Exposures and effects are linked by dynamic processes occurring across time. These processes can often be usefully decomposed into two distinct biological relationships, each with several components: 1. The exposure-dose relationship […] 2. The dose-effect relationship […] These two component relationships are sometimes represented by two different mathematical models: a toxicokinetic model […], and a disease process model […]. Depending on the information available, these models may be relatively simple or highly complex. […] Often the various steps in the disease process do not occur at the same rate: some of these processes are ‘fast’, such as cell killing, while others are ‘slow’, such as damage repair. Frequently a few slow steps in a process become limiting to the overall rate, which sets the temporal pattern for the entire exposure-response relationship. […] It is not necessary to know the full mechanism of effects to guide selection of an exposure-response model or exposure metric. Because of the strong influence of the rate-limiting steps, often it is only necessary to have observations on the approximate time course of effects. This is true whether the effects appear to be reversible or irreversible, and whether damage progresses proportionately with each unit of exposure (actually dose) or instead occurs suddenly, and seemingly without regard to the amount of exposure, such as an asthma attack.”
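
As an illustration of the exposure-dose half of this, here is a minimal one-compartment toxicokinetic sketch (my own; the half-life and intake pattern are arbitrary) in which body burden builds up towards a steady state during exposure and is then cleared first-order; the slow clearance step is what sets the temporal pattern of the response.

```python
import numpy as np

# One-compartment toxicokinetic model: body burden B(t) with first-order
# clearance, dB/dt = intake(t) - k*B, where k = ln(2)/half-life.
half_life_days = 30.0
k = np.log(2) / half_life_days
dt = 0.1                                  # time step in days
t = np.arange(0, 365, dt)

# Hypothetical exposure pattern: constant intake during the first 180 days,
# then none (work-shift structure ignored for simplicity).
intake = np.where(t < 180, 1.0, 0.0)      # arbitrary dose units per day

burden = np.zeros_like(t)
for i in range(1, len(t)):                # simple explicit Euler integration
    burden[i] = burden[i - 1] + dt * (intake[i - 1] - k * burden[i - 1])

i_end = int(round(180 / dt))              # index of the last exposed day
print(f"Burden at end of exposure (day 180): {burden[i_end]:.2f}")
print(f"Steady-state burden (intake/k):      {1.0 / k:.2f}")
print(f"Burden 90 days after exposure ends:  {burden[int(round(270 / dt))]:.2f}")
```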

“In this chapter, we argue that formal disease process models have the potential to improve the sensitivity of epidemiology for detecting new and emerging occupational and environmental risks where there is limited mechanistic information. […] In our approach, these models are often used to create exposure or dose metrics, which are in turn used in epidemiological models to estimate exposure-disease associations. […] Our goal is a methodology to formulate strong tests of our exposure-disease hypotheses in which a hypothesis is developed in as much biological detail as it can be, expressed in a suitable dynamic (temporal) model, and tested by its fit with a rich data set, so that its flaws and misperceptions of reality are fully displayed. Rejecting such a fully developed biological hypothesis is more informative than either rejecting or failing to reject a generic or vaguely defined hypothesis. For example, the hypothesis ‘truck drivers have more risk of lung cancer than non-drivers’13 is of limited usefulness for prevention […]. Hypothesizing that a particular chemical agent in truck exhaust is associated with lung cancer — whether the hypothesis is refuted or supported by data — is more likely to lead to successful prevention activities. […] we believe that the choice of models against which to compare the data should, so far as possible, be guided by explicit hypotheses about the underlying biological processes. In other words, you can get as much as possible from epidemiology by starting from well-thought-out hypotheses that are formalized as mathematical models into which the data will be placed. The disease process models can serve this purpose.2”

“The basic idea of empirical Bayes (EB) and semi-Bayes (SB) adjustments for multiple associations is that the observed variation of the estimated relative risks around their geometric mean is larger than the variation of the true (but unknown) relative risks. In SB adjustments, an a priori value for the extra variation is chosen which assigns a reasonable range of variation to the true relative risks and this value is then used to adjust the observed relative risks.7 The adjustment consists in shrinking outlying relative risks towards the overall mean (of the relative risks for all the different exposures being considered). The larger the individual variance of the relative risks, the stronger the shrinkage, so that the shrinkage is stronger for less reliable estimates based on small numbers. Typical applications in which SB adjustments are a useful alternative to traditional methods of adjustment for multiple comparisons are in large occupational surveillance studies, where many relative risks are estimated with few or no a priori beliefs about which associations might be causal.7”

“The advantage of [the SB adjustment] approach over classical Bonferroni corrections is that on the average it produces more valid estimates of the odds ratio for each occupation/exposure. If we do a study which involves assessing hundreds of occupations, the problem is not only that we get many ‘false positive’ results by chance. A second problem is that even the ‘true positives’ tend to have odds ratios that are too high. For example, if we have a group of occupations with true odds ratios around 1.5, then the ones that stand out in the analysis are those with the highest odds ratios (e.g. 2.5) which will be elevated partly because of real effects and partly by chance. The Bonferroni correction addresses the first problem (too many chance findings) but not the second, that the strongest odds ratios are probably too high. In contrast, SB adjustment addresses the second problem by correcting for the anticipated regression to the mean that would have occurred if the study had been repeated, and thereby on the average produces more valid odds ratio estimates for each occupation/exposure. […] most epidemiologists write their Methods and Results sections as frequentists and their Introduction and Discussion sections as Bayesians. In their Methods and Results sections, they ‘test’ their findings as if their data are the only data that exist. In the Introduction and Discussion, they discuss their findings with regard to their consistency with previous studies, as well as other issues such as biological plausibility. This creates tensions when a small study has findings which are not statistically significant but which are consistent with prior knowledge, or when a study finds statistically significant findings which are inconsistent with prior knowledge. […] In some (but not all) instances, things can be made clearer if we include Bayesian methods formally in the Methods and Results sections of our papers”.
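
A minimal sketch of the semi-Bayes shrinkage idea (my own, with invented numbers): each log odds ratio is pulled towards the overall mean, and the less precise the estimate, the harder it is pulled. The prior variance τ² plays the role of the a priori ‘reasonable range of variation’ mentioned above.

```python
import math

# Hypothetical surveillance results: odds ratios and the standard errors of
# their logs (small exposed groups give large standard errors).
results = {
    "occupation A": (2.5, 0.45),
    "occupation B": (1.6, 0.20),
    "occupation C": (0.7, 0.50),
    "occupation D": (1.2, 0.10),
}

tau2 = 0.05  # a priori variance of the true log odds ratios (the assumed
             # 'reasonable range of variation' of the true relative risks)

log_ors = {name: math.log(or_) for name, (or_, se) in results.items()}
grand_mean = sum(log_ors.values()) / len(log_ors)

for name, (or_, se) in results.items():
    w = tau2 / (tau2 + se ** 2)          # weight given to the observed estimate
    adjusted = grand_mean + w * (log_ors[name] - grand_mean)
    print(f"{name}: OR {or_:.2f} -> adjusted {math.exp(adjusted):.2f} "
          f"(shrinkage weight {w:.2f})")
```

The imprecise, outlying estimates (like the hypothetical OR of 2.5 with a large standard error) get pulled hardest towards the mean, which is exactly the ‘regression to the mean’ correction described in the quote.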

“In epidemiology, risk is most often quantified in terms of relative risk — i.e. the ratio of the probability of an adverse outcome in someone with a specified exposure to that in someone who is unexposed, or exposed at a different specified level. […] Relative risks can be estimated from a wider range of study designs than individual attributable risks. They have the advantage that they are often stable across different groups of people (e.g. of different ages, smokers, and non-smokers) which makes them easier to estimate and quantify. Moreover, high relative risks are generally unlikely to be explained by unrecognized bias or confounding. […] However, individual attributable risks are a more relevant measure by which to quantify the impact of decisions in risk management on individuals. […] Individual attributable risk is the difference in the probability of an adverse outcome between someone with a specified exposure and someone who is unexposed, or exposed at a different specified level. It is the critical measure when considering the impact of decisions in risk management on individuals. […] Population attributable risk is the difference in the frequency of an adverse outcome between a population with a given distribution of exposures to a hazardous agent, and that in a population with no exposure, or some other specified distribution of exposures. It depends on the prevalence of exposure at different levels within the population, and on the individual attributable risk for each level of exposure. It is a measure of the impact of the agent at a population level, and is relevant to decisions in risk management for populations. […] Population attributable risks are highest when a high proportion of a population is exposed at levels which carry high individual attributable risks. On the other hand, an exposure which carries a high individual attributable risk may produce only a small population attributable risk if the prevalence of such exposure is low.”
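
A small worked example of the three measures (the baseline risk, relative risk, and exposure prevalence are invented):

```python
# Hypothetical numbers: baseline risk of the disease over some fixed period,
# relative risk in the exposed, and proportion of the population exposed.
baseline_risk = 0.010     # 1% risk in the unexposed
relative_risk = 3.0
p_exposed = 0.05          # 5% of the population exposed

risk_exposed = baseline_risk * relative_risk

# Individual attributable risk: the extra risk for an exposed individual.
individual_ar = risk_exposed - baseline_risk

# Population attributable risk: the extra risk averaged over the population.
population_ar = p_exposed * individual_ar

print(f"Risk if exposed:              {risk_exposed:.3f}")     # 0.030
print(f"Individual attributable risk: {individual_ar:.3f}")    # 0.020
print(f"Population attributable risk: {population_ar:.4f}")    # 0.0010
```

Even with a relative risk of 3, the population attributable risk here is small because only 5% of the population is exposed, which is the point made at the end of the quote.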

“Hazard characterization entails quantification of risks in relation to routes, levels, and durations of exposure. […] The findings from individual studies are often used to determine a no observed adverse effect level (NOAEL), lowest observed effect level (LOEL), or benchmark dose lower 95% confidence limit (BMDL) for relevant effects […] [NOAEL] is the highest dose or exposure concentration at which there is no discernible adverse effect. […] [LOEL] is the lowest dose or exposure concentration at which a discernible effect is observed. If comparison with unexposed controls indicates adverse effects at all of the dose levels in an experiment, a NOAEL cannot be derived, but the lowest dose constitutes a LOEL, which might be used as a comparator for estimated exposures or to derive a toxicological reference value […] A BMDL is defined in relation to a specified adverse outcome that is observed in a study. Usually, this is the outcome which occurs at the lowest levels of exposure and which is considered critical to the assessment of risk. Statistical modelling is applied to the experimental data to estimate the dose or exposure concentration which produces a specified small level of effect […]. The BMDL is the lower 95% confidence limit for this estimate. As such, it depends both on the toxicity of the test chemical […], and also on the sample sizes used in the study (other things being equal, larger sample sizes will produce more precise estimates, and therefore higher BMDLs). In addition to accounting for sample size, BMDLs have the merit that they exploit all of the data points in a study, and do not depend so critically on the spacing of doses that is adopted in the experimental design (by definition a NOAEL or LOEL can only be at one of the limited number of dose levels used in the experiment). On the other hand, BMDLs can only be calculated where an adverse effect is observed. Even if there are no clear adverse effects at any dose level, a NOAEL can be derived (it will be the highest dose administered).”
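
Below is a rough sketch of how a benchmark dose might be derived from quantal data (my own illustration, not the book’s procedure): fit a dose-response model by maximum likelihood, solve for the dose giving a specified extra risk over background (here 10%, the BMD10), and take a lower confidence limit for it. I use a crude parametric bootstrap for the lower limit; profile-likelihood limits are the more usual choice in practice.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quantal dose-response data: dose, group size, number responding.
doses  = np.array([0.0, 10.0, 30.0, 100.0, 300.0])
n      = np.array([50, 50, 50, 50, 50])
events = np.array([1, 2, 6, 15, 33])

def prob(params, d):
    """Logistic dose-response model: P(d) = 1 / (1 + exp(-(a + b*d)))."""
    a, b = params
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

def neg_log_lik(params, d, n, y):
    p = np.clip(prob(params, d), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

def bmd(params, bmr=0.10):
    """Dose giving extra risk bmr over background, i.e. the solution of
    (P(d) - P(0)) / (1 - P(0)) = bmr (closed form for the logistic model)."""
    a, b = params
    p0 = 1.0 / (1.0 + np.exp(-a))
    p_target = p0 + bmr * (1 - p0)
    return (np.log(p_target / (1 - p_target)) - a) / b

fit = minimize(neg_log_lik, x0=[-3.0, 0.01], args=(doses, n, events))
print(f"BMD10 estimate: {bmd(fit.x):.1f} dose units")

# Crude parametric bootstrap for the lower bound (the BMDL).
rng = np.random.default_rng(1)
boot_bmds = []
p_hat = prob(fit.x, doses)
for _ in range(500):
    y_sim = rng.binomial(n, p_hat)
    refit = minimize(neg_log_lik, x0=fit.x, args=(doses, n, y_sim))
    boot_bmds.append(bmd(refit.x))
print(f"BMDL10 (5th percentile of bootstrap BMDs): {np.percentile(boot_bmds, 5):.1f}")
```

With larger group sizes the bootstrap distribution tightens and the BMDL moves up towards the BMD, which is the sample-size dependence described in the quote.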

December 8, 2017 Posted by | Books, Cancer/oncology, Epidemiology, Medicine, Statistics | Leave a comment

Nuclear power (I)

I originally gave the book 2 stars, but after finishing this post I changed that rating to 3 stars (not that surprising; when I wrote my goodreads review shortly after reading the book, I was already conflicted about whether it deserved the third star). One thing that kept me from giving the book a higher rating was that the author, in my view, did not spend enough time on ‘the basic concepts’, a problem I also highlighted in my goodreads review. Fortunately I had recently covered some of those concepts in other books in the series, so it wasn’t too hard for me to follow what was going on, but as sometimes happens with authors of books in this series, I think the author was simply trying to cover too much. Even so, this is a nice introductory text on the topic.

I have added some links and quotes related to the first half or so of the book below. I prepared the link list before I started gathering quotes, so there may be more overlap between the topics covered in the quotes and the topics covered in the links than there usually is (I normally reserve the links for topics and concepts covered in the book that I don’t find it necessary to cover in detail in the text – the links are meant to remind me of/indicate which sorts of topics are also covered in the book, beyond those included in the quotes).

“According to Einstein’s mass–energy equation, the mass of any composite stable object has to be less than the sum of the masses of the parts; the difference is the binding energy of the object. […] The general features of the binding energies are simply understood as follows. We have seen that the measured radii of nuclei [increase] with the cube root of the mass number A. This is consistent with a structure of close packed nucleons. If each nucleon could only interact with its closest neighbours, the total binding energy would then itself be proportional to the number of nucleons. However, this would be an overestimate because nucleons at the surface of the nucleus would not have a complete set of nearest neighbours with which to interact […]. The binding energy would be reduced by the number of surface nucleons and this would be proportional to the surface area, itself proportional to A2/3. So far we have considered only the attractive short-range nuclear binding. However, the protons carry an electric charge and hence experience an electrical repulsion between each other. The electrical force between two protons is much weaker than the nuclear force at short distances but dominates at larger distances. Furthermore, the total electrical contribution increases with the number of pairs of protons.”

“The main characteristics of the empirical binding energy of nuclei […] can now be explained. For the very light nuclei, all the nucleons are in the surface, the electrical repulsion is negligible, and the binding energy increases as the volume and number of nucleons increase. Next, the surface effects start to slow the rate of growth of the binding energy, yielding a region of most stable nuclei near charge number Z = 26 (iron). Finally, the electrical repulsion steadily increases until we reach the most massive stable nucleus (lead-208). Between iron and lead, not only does the binding energy [per nucleon] decrease, so also do the proton-to-neutron ratios since the neutrons do not experience the electrical repulsion. […] as the nuclei get heavier the Coulomb repulsion term requires an increasing number of neutrons for stability […] For an explanation of [the] peaks, we must turn to the quantum nature of the problem. […] Filled shells corresponded to particularly stable electronic structures […] In the nuclear case, a shell structure also exists separately for both the neutrons and the protons. […] Closed-shell nuclei are referred to as ‘magic number’ nuclei. […] there is a particular stability for nuclei with equal numbers of protons and neutrons.”
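
The terms described in the two quotes above (volume, surface, Coulomb repulsion, and the neutron-proton asymmetry) are exactly the terms of the semi-empirical ‘liquid-drop’ mass formula. A sketch, using one standard set of textbook coefficients (my choice, not values from the book):

```python
# Semi-empirical (liquid-drop) binding energy in MeV; the coefficients below
# are one common textbook set, not values taken from the book.
a_V, a_S, a_C, a_A, a_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, A):
    N = A - Z
    volume    =  a_V * A                              # each nucleon binds to its neighbours
    surface   = -a_S * A ** (2 / 3)                   # surface nucleons lack neighbours
    coulomb   = -a_C * Z * (Z - 1) / A ** (1 / 3)     # proton-proton repulsion
    asymmetry = -a_A * (N - Z) ** 2 / A               # penalty for unequal N and Z
    if A % 2 == 1:
        pairing = 0.0                                 # odd A
    elif Z % 2 == 0:
        pairing = a_P / A ** 0.5                      # even-even: extra stability
    else:
        pairing = -a_P / A ** 0.5                     # odd-odd
    return volume + surface + coulomb + asymmetry + pairing

for name, Z, A in [("O-16", 8, 16), ("Fe-56", 26, 56),
                   ("Pb-208", 82, 208), ("U-235", 92, 235)]:
    print(f"{name:7s}: B/A = {binding_energy(Z, A) / A:.2f} MeV per nucleon")
```

The output reproduces the qualitative picture in the quotes: the binding energy per nucleon peaks around iron and then falls slowly towards lead and uranium (the formula is known to do poorly for the lightest nuclei and ignores the shell effects responsible for the ‘magic number’ peaks).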

“As we move off the line of stable nuclei, by adding or subtracting neutrons, the isotopes become increasingly less stable indicated by increasing levels of beta radioactivity. Nuclei with a surfeit of neutrons emit an electron, hence converting one of the neutrons into a proton, while isotopes with a neutron deficiency can emit a positron with the conversion of a proton into a neutron. For the heavier nuclei, the neutron to proton ratio can be reduced by emitting an alpha particle. All nuclei heavier than lead are unstable and hence radioactive alpha emitters. […] The fact that almost all the radioactive isotopes heavier than lead follow [a] kind of decay chain and end up as stable isotopes of lead explains this element’s anomalously high natural abundance.”

“When two particles collide, they transfer energy and momentum between themselves. […] If the target is much lighter than the projectile, the projectile sweeps it aside with little loss of energy and momentum. If the target is much heavier than the projectile, the projectile simply bounces off the target with little loss of energy. The maximum transfer of energy occurs when the target and the projectile have the same mass. In trying to slow down the neutrons, we need to pass them through a moderator containing scattering centres of a similar mass. The obvious candidate is hydrogen, in which the single proton of the nucleus is the particle closest in mass to the neutron. At first glance, it would appear that water, with its low cost and high hydrogen content, would be the ideal moderator. There is a problem, however. Slow neutrons can combine with protons to form an isotope of hydrogen, deuterium. This removes neutrons from the chain reaction. To overcome this, the uranium fuel has to be enriched by increasing the proportion of uranium-235; this is expensive and technically difficult. An alternative is to use heavy water, that is, water in which the hydrogen is replaced by deuterium. It is not quite as effective as a moderator but it does not absorb neutrons. Heavy water is more expensive and its production more technically demanding than natural water. Finally, graphite (carbon) has a mass of 12 and hence is less efficient requiring a larger reactor core, but it is inexpensive and easily available.”
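
The mass-matching argument can be made quantitative with the average logarithmic energy loss per elastic collision, ξ (a standard reactor-physics quantity, not given in the book):

```python
import math

def xi(A):
    """Average logarithmic energy loss per elastic collision with a nucleus
    of mass number A."""
    if A == 1:
        return 1.0
    return 1.0 + ((A - 1) ** 2 / (2 * A)) * math.log((A - 1) / (A + 1))

def collisions_to_thermalize(A, e_start=2.0e6, e_end=0.025):
    """Average number of collisions to slow a fission neutron (~2 MeV) down
    to thermal energy (~0.025 eV)."""
    return math.log(e_start / e_end) / xi(A)

def max_energy_transfer(A):
    """Fraction of the neutron's energy lost in a single head-on elastic collision."""
    return 4 * A / (1 + A) ** 2

for name, A in [("hydrogen (light water)", 1),
                ("deuterium (heavy water)", 2),
                ("carbon (graphite)", 12)]:
    print(f"{name:24s}: max transfer {max_energy_transfer(A):6.1%}, "
          f"~{collisions_to_thermalize(A):4.0f} collisions to thermalize")
```

Hydrogen needs roughly 18 collisions to thermalize a fission neutron, deuterium about 25, and carbon over 100, which is why a graphite-moderated core has to be so much larger.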

“[During the Manhattan Project,] Oak Ridge, Tennessee, was chosen as the facility to develop techniques for uranium enrichment (increasing the relative abundance of uranium-235) […] a giant gaseous diffusion facility was developed. Gaseous uranium hexafluoride was forced through a semi-permeable membrane. The lighter isotopes passed through faster, and at each pass through the membrane the uranium hexafluoride became more and more enriched. The technology is very energy-consuming […]. At its peak, Oak Ridge consumed more electricity than New York and Washington DC combined. Almost one-third of all enriched uranium is still produced by this now obsolete technology. The bulk of enriched uranium today is produced in high-speed centrifuges which require much less energy.”
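
The reason the diffusion plants were so enormous and so energy-hungry is that the ideal single-stage separation factor, set by Graham’s law, is tiny. A textbook idealization (my own sketch; real cascades need considerably more stages than these ideal numbers suggest):

```python
import math

# Ideal (Graham's law) separation factor per pass for UF6 gaseous diffusion:
# alpha = sqrt(molar mass of 238-UF6 / molar mass of 235-UF6).
m_heavy = 238 + 6 * 19
m_light = 235 + 6 * 19
alpha = math.sqrt(m_heavy / m_light)
print(f"Ideal separation factor per pass: {alpha:.5f}")   # ~1.0043

def ideal_stages(x_feed, x_product):
    """Minimum number of ideal stages needed to move the 235-U abundance
    ratio from the feed value to the product value."""
    r_feed = x_feed / (1 - x_feed)
    r_prod = x_product / (1 - x_product)
    return math.log(r_prod / r_feed) / math.log(alpha)

print(f"Natural (0.72%) -> reactor grade (4%):  ~{ideal_stages(0.0072, 0.04):.0f} stages")
print(f"Natural (0.72%) -> weapons grade (90%): ~{ideal_stages(0.0072, 0.90):.0f} stages")
```

Each pass enriches by less than half a per cent, so real cascades ran to thousands of stages in series; centrifuges achieve a much larger separation factor per stage, which is a large part of why they need so much less energy.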

“In order to sustain a nuclear chain reaction, it is essential to have a critical mass of fissile material. This mass depends upon the fissile fuel being used and the topology of the structure containing it. […] The chain reaction is maintained by the neutrons and many of these leave the surface without contributing to the reaction chain. Surrounding the fissile material with a blanket of neutron reflecting material, such as beryllium metal, will keep the neutrons in play and reduce the critical mass. Partially enriched uranium will have an increased critical mass and natural uranium (0.7% uranium-235) will not go critical at any mass without a moderator to increase the number of slow neutrons which are the dominant fission triggers. The critical mass can also be decreased by compressing the fissile material.”

“It is now more than 50 years since operations of the first civil nuclear reactor began. In the intervening years, several hundred reactors have been operating, in total amounting to nearly 50 million hours of experience. This cumulative experience has led to significant advances in reactor design. Different reactor types are defined by their choice of fuel, moderator, control rods, and coolant systems. The major advances leading to greater efficiency, increased economy, and improved safety are referred to as ‘generations’. […] [F]irst generation reactors […] had the dual purpose to make electricity for public consumption and plutonium for the Cold War stockpiles of nuclear weapons. Many of the features of the design were incorporated to meet the need for plutonium production. These impacted on the electricity-generating cost and efficiency. The most important of these was the use of unenriched uranium due to the lack of large-scale enrichment plants in the UK, and the high uranium-238 content was helpful in the plutonium production but made the electricity generation less efficient.”

“PWRs, BWRs, and VVERs are known as LWRs (Light Water Reactors). LWRs dominate the world’s nuclear power programme, with the USA operating 69 PWRs and 35 BWRs; Japan operates 63 LWRs, the bulk of which are BWRs; and France has 59 PWRs. Between them, these three countries generate 56% of the world’s nuclear power. […] In the 1990s, a series of advanced versions of the Generation II and III reactors began to receive certification. These included the ACR (Advanced CANDU Reactor), the EPR (European Pressurized Reactor), and Westinghouse AP1000 and APR1400 reactors (all developments of the PWR) and ESBWR (a development of the BWR). […] The ACR uses slightly enriched uranium and a light water coolant, allowing the core to be halved in size for the same power output. […] It would appear that two of the Generation III+ reactors, the EPR […] and AP1000, are set to dominate the world market for the next 20 years. […] the EPR is considerably safer than current reactor designs. […] A major advance is that the Generation III+ reactors produce only about 10% of the waste compared with earlier versions of LWRs. […] China has officially adopted the AP1000 design as a standard for future nuclear plants and has indicated a wish to see 100 nuclear plants under construction or in operation by 2020.”

“All thermal electricity-generating systems are examples of heat engines. A heat engine takes energy from a high-temperature environment to a low-temperature environment and in the process converts some of the energy into mechanical work. […] In general, the efficiency of the thermal cycle increases as the temperature difference between the low-temperature environment and the high-temperature environment increases. In PWRs, and nearly all thermal electricity-generating plants, the efficiency of the thermal cycle is 30–35%. At the much higher operating temperatures of Generation IV reactors, typically 850–1,000°C, it is hoped to increase this to 45–50%.
During the operation of a thermal nuclear reactor, there can be a build-up of fission products known as reactor poisons. These are materials with a large capacity to absorb neutrons and this can slow down the chain reaction; in extremes, it can lead to a complete close-down. Two important poisons are xenon-135 and samarium-149. […] During steady state operation, […] xenon builds up to an equilibrium level in 40–50 hours when a balance is reached between […] production […] and the burn-up of xenon by neutron capture. If the power of the reactor is increased, the amount of xenon increases to a higher equilibrium and the process is reversed if the power is reduced. If the reactor is shut down the burn-up of xenon ceases, but the build-up of xenon continues from the decay of iodine. Restarting the reactor is impeded by the higher level of xenon poisoning. Hence it is desirable to keep reactors running at full capacity as long as possible and to have the capacity to reload fuel while the reactor is on line. […] Nuclear plants operate at highest efficiency when operated continually close to maximum generating capacity. They are thus ideal for provision of base load. If their output is significantly reduced, then the build-up of reactor poisons can impact on their efficiency.”
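
A minimal sketch of the iodine-135/xenon-135 balance described above (my own; the half-lives, fission yields, cross-section, and flux are rough textbook-style values, not taken from the book). It reproduces the qualitative behaviour in the quote: xenon climbs to an equilibrium level over roughly two days of full-power operation, and after a shutdown it first rises to a peak, fed by the decay of the accumulated iodine with no neutron burn-up to remove it, before decaying away.

```python
import numpy as np
from scipy.integrate import solve_ivp

HOUR = 3600.0
# Rough constants (my assumptions, not values from the book):
lam_I = np.log(2) / (6.6 * HOUR)   # iodine-135 decay constant (half-life ~6.6 h)
lam_X = np.log(2) / (9.1 * HOUR)   # xenon-135 decay constant (half-life ~9.1 h)
gamma_I, gamma_X = 0.064, 0.003    # approximate fission yields of I-135 and Xe-135
sigma_X = 2.6e-18                  # Xe-135 absorption cross-section, cm^2 (~2.6e6 barn)
phi_full = 3e13                    # thermal neutron flux at full power, n/cm^2/s
fission_rate = 3e12                # fissions per cm^3 per second at full power

def poisons(t, y, phi, F):
    """Iodine/xenon balance: production from fission and from iodine decay,
    losses by radioactive decay and (for xenon) neutron capture."""
    I, X = y
    dI = gamma_I * F - lam_I * I
    dX = gamma_X * F + lam_I * I - lam_X * X - sigma_X * phi * X
    return [dI, dX]

# 1) Run at full power for 60 hours: xenon builds up to its equilibrium level.
run = solve_ivp(poisons, (0, 60 * HOUR), [0.0, 0.0],
                args=(phi_full, fission_rate), max_step=600)

# 2) Shut the reactor down (no flux, no fissions): burn-up stops, but iodine
#    keeps decaying into xenon, so the poison peaks before it decays away.
shutdown = solve_ivp(poisons, (0, 40 * HOUR), run.y[:, -1],
                     args=(0.0, 0.0), max_step=600)

xe_eq, xe_peak = run.y[1, -1], shutdown.y[1].max()
t_peak = shutdown.t[shutdown.y[1].argmax()] / HOUR
print(f"Equilibrium Xe-135: {xe_eq:.2e} atoms/cm^3")
print(f"Post-shutdown peak: {xe_peak:.2e} atoms/cm^3 "
      f"({xe_peak / xe_eq:.1f}x equilibrium, ~{t_peak:.0f} h after shutdown)")
```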

Links:

Radioactivity. Alpha decay. Beta decay. Gamma decay. Free neutron decay.
Periodic table.
Rutherford scattering.
Isotope.
Neutrino. Positron. Antineutrino.
Binding energy.
Mass–energy equivalence.
Electron shell.
Decay chain.
Heisenberg uncertainty principle.
Otto Hahn. Lise Meitner. Fritz Strassman. Enrico Fermi. Leo Szilárd. Otto Frisch. Rudolf Peierls.
Uranium 238. Uranium 235. Plutonium.
Nuclear fission.
Chicago Pile 1.
Manhattan Project.
Uranium hexafluoride.
Heavy water.
Nuclear reactor coolant. Control rod.
Critical mass. Nuclear chain reaction.
Magnox reactor. UNGG reactor. CANDU reactor.
ZEEP.
Nuclear reactor classifications (a lot of the distinctions included in this article are also included in the book and described in some detail).
USS Nautilus.
Nuclear fuel cycle.
Thorium-based nuclear power.
Heat engine. Thermodynamic cycle. Thermal efficiency.
Reactor poisoning. Xenon 135. Samarium 149.
Base load.

December 7, 2017 Posted by | Books, Chemistry, Engineering, Physics | Leave a comment