On the cryptographic hardness of finding a Nash equilibrium

I found it annoying that you generally can’t really hear the questions posed by the audience (which includes people like Avi Wigderson), especially considering that there are quite a few of them, particularly in the middle section of the lecture. There are intermittent issues with the camera’s focus throughout the talk, but those are all transitory problems that should not keep you from watching the lecture. The sound issue at the beginning of the talk is resolved after 40 seconds.

One important take-away from this talk, if you choose not to watch it: “to date, there is no known efficient algorithm to find Nash equilibrium in games”. In general this paper – coauthored by the lecturer – seems, from a brief skim, to cover many of the topics also included in the lecture. Below I have added some other links to articles and topics covered/mentioned in the lecture.

Nash’s Existence Theorem.
Reducibility Among Equilibrium Problems (Goldberg & Papadimitriou).
Three-Player Games Are Hard (Daskalakis & Papadimitriou).
3-Nash is PPAD-Complete (Chen & Deng).
PPAD (complexity).
On the (Im)possibility of Obfuscating Programs (Barak et al.).
On the Impossibility of Obfuscation with Auxiliary Input (Goldwasser & Kalai).
On Best-Possible Obfuscation (Goldwasser & Rothblum).
Functional Encryption without Obfuscation (Garg et al.).
On the Complexity of the Parity Argument and Other Inefficient Proofs of Existence (Papadimitriou).
Pseudorandom function family.
Revisiting the Cryptographic Hardness of Finding a Nash Equilibrium (Garg, Pandey & Srinivasan).
Constrained Pseudorandom Functions and Their Applications (Boneh & Waters).
Delegatable Pseudorandom Functions and Applications (Kiayias et al.).
Functional Signatures and Pseudorandom Functions (Boyle, Goldwasser & Ivan).
Universal Constructions and Robust Combiners for Indistinguishability Obfuscation and Witness Encryption (Ananth et al.).


April 18, 2018 Posted by | Computer science, Cryptography, Game theory, Lectures, Mathematics, Papers

100 cases in surgery (II)

Below I have added some links and quotes related to the last half of the book’s coverage.

Ischemic rest pain. (“Rest pain indicates inadequate tissue perfusion. *Urgent investigation and treatment is required to salvage the limb. […] The material of choice for bypass grafting is autogenous vein. […] The long-term patency of prosthetic grafts is inferior compared with autogenous vein.”)
Deep vein thrombosis.
Lymphedema. (“In lymphoedema, the vast majority of patients (>90 per cent) are treated conservatively. […] Debulking operations […] are only considered for a selected few patients where the function of the limb is impaired or those with recurrent attacks of severe cellulitis.”)
Varicose veins. Trendelenburg Test. (“Surgery on the superficial venous system should be avoided in patients with an incompetent deep venous system.”)
Testicular Torsion.
Benign Prostatic Hyperplasia.
Acute pyelonephritis. (“In patients with recurrent infection in the urinary system, significant pathology needs excluding such as malignancy, urinary tract stone disease and abnormal urinary tract anatomy.”)
Renal cell carcinoma. Von Hippel-Lindau syndrome. (“Approximately one-quarter to one-third of patients with renal cell carcinomas have metastases at presentation. […] The classic presenting triad of loin pain, a mass and haematuria only occurs in about 10 per cent of patients. More commonly, one of these features appears in isolation.”)
Haematuria. (“When taking the history, it is important to elicit the following: •Visible or non-visible: duration of haematuria •Age: cancers are more common with increasing age •Sex: females more likely to have urinary tract infections •Location: during micturition, was the haematuria always present (indicative of renal, ureteric or bladder pathology) or was it only present initially (suggestive of anterior urethral pathology) or present at the end of the stream (posterior urethra, bladder neck)? •Pain: more often associated with infection/inflammation/calculi, whereas malignancy tends to be painless •Associated lower urinary tract symptoms that will be helpful in determining aetiology •History of trauma •Travel abroad […] •Previous urological surgery/history/recent instrumentation/prostatic biopsy •Medication, e.g. anticoagulants •Family history •Occupational history, e.g. rubber/dye occupational hazards are risk factors for developing transitional carcinoma of the bladder […] •Smoking status: increased risk, particularly of bladder cancer •General status, e.g. weight loss, reduced appetite […] Anticoagulation can often unmask other pathology in the urinary tract. […] Patients on oral anticoagulation who develop haematuria still require investigation.”)
Urinary retention. (“Acute and chronic retention are usually differentiated by the presence or absence of pain. Acute retention is painful, unlike chronic retention, when the bladder accommodates the increase in volume over time.”)
Colles’ fracture/Distal Radius Fractures. (“In all fractures the distal neurological and vascular status should be assessed.”)
Osteoarthritis. (“Radiological evidence of osteoarthritis is common, with 80 per cent of individuals over 80 years demonstrating some evidence of the condition. […] The commonest symptoms are pain, a reduction in mobility, and deformity of the affected joint.”)
Simmonds’ test.
Patella fracture.
Dislocated shoulder.
Femur fracture. (“Fractured neck of the femur is a relatively common injury following a fall in the elderly population. The rate of hip fracture doubles every decade from the age of 50 years. There is a female preponderance of three to one. […] it is important to take a comprehensive history, concentrating on the mechanism of injury. It is incorrect to assume that all falls are mechanical; it is not uncommon to find that the cause of the fall is actually due to a urinary or chest infection or even a silent myocardial infarction.”)
The Ottawa Ankle Rules.
Septic arthritis.
Carpal tunnel syndrome. Tinel’s test. Phalen’s Test. (“It is important, when examining a patient with suspected carpal tunnel syndrome, to carefully examine their neck, shoulder, and axilla. […] the source of the neurological compression may be proximal to the carpal tunnel”)
Acute Compartment Syndrome. (“Within the limbs there are a number of myofascial compartments. These consist of muscles contained within a relatively fixed-volume structure, bounded by fascial layers and bone. After trauma the pressure in the myofascial compartment increases. This pressure may exceed the venous capillary pressure, resulting in a loss of venous outflow from the compartment. The failure to clear metabolites also leads to the accumulation of fluid as a result of osmosis. If left untreated, the pressure will eventually exceed arterial pressure, leading to significant tissue ischaemia. The damage is irreversible after 4–6 h. Tibial fractures are the commonest cause of an acute compartment syndrome, which is thought to complicate up to 20 per cent of these injuries. […] The classical description of ‘pain out of proportion to the injury’ may [unfortunately] be difficult to determine if the clinician is inexperienced.”)
Hemarthrosis. (“Most knee injuries result in swelling which develops over hours rather than minutes. [A] history of immediate knee swelling suggests that there is a haemarthrosis.”)
Sickle cell crisis.
Cervical Spine Fracture. Neurogenic shock. NEXUS Criteria for C-Spine Imaging.
Slipped Capital Femoral Epiphysis. Trethowan sign. (“At any age, a limp in a child should always be taken seriously.”)

ATLS guidelines. (“The ATLS protocol should be followed even in the presence of obvious limb deformity, to ensure a potentially life-threatening injury is not missed.”)
Peritonsillar Abscess.
Epistaxis. Little’s area.
Croup. Acute epiglottitis. (“Acute epiglottitis is an absolute emergency and is usually caused by Haemophilus influenzae. There is significant swelling, and any attempt to examine the throat may result in airway obstruction. […] In adults it tends to cause a supraglottitis. It has a rapid progression and can lead to total airway obstruction. […] Stridor is an ominous sign and needs to be taken seriously.”)
Bell’s palsy.
Subarachnoid hemorrhage. International subarachnoid aneurysm trial.
Chronic subdural hematoma. (“This condition is twice as common in men as women. Risk factors include chronic alcoholism, epilepsy, anticoagulant therapy (including aspirin) and thrombocytopenia. CSDH is more common in elderly patients due to cerebral atrophy. […] Initial misdiagnosis is, unfortunately, quite common. […] a chronic subdural haematoma should be suspected in confused patients with a history of a fall.”)
Extradural Haematoma. Cushing response. (“A direct blow to the temporo-parietal area is the commonest cause of an extradural haematoma. The bleed is normally arterial in origin. In 85 per cent of cases there is an associated skull fracture that causes damage to the middle meningeal artery. […] This situation represents a neurosurgical emergency. Without urgent decompression the patient will die. Unlike the chronic subdural, which can be treated with Burr hole drainage, the more dense acute arterial haematoma requires a craniotomy in order to evacuate it.”)
Cauda equina syndrome. Neurosurgery for Cauda Equina Syndrome.
ASA classification. (“Patients having an operation within 3 months of a myocardial infarction carry a 30 per cent risk of reinfarction or cardiac death. This drops to 5 per cent after 6 months. […] Patients with COPD have difficulty clearing secretions from the lungs during the postoperative period. They also have a higher risk of basal atelectasis and are more prone to chest infections. These factors in combination with postoperative pain (especially in thoracic or abdominal major surgery) make them prone to respiratory complications. […] Patients with diabetes have an increased risk of postoperative complications because of the presence of microvascular and macrovascular disease: •Atherosclerosis: ischaemic heart disease/peripheral vascular disease/cerebrovascular disease •Nephropathy: renal insufficiency […] •Autonomic neuropathy: gastroparesis, decreased bladder tone •Peripheral neuropathy: lower-extremity ulceration, infection, gangrene •Poor wound healing •Increased risk of infection. Tight glycaemic control (6–10 mmol/L) and the prevention of hypoglycaemia are critical in preventing perioperative and postoperative complications. The patient with diabetes should be placed first on the operating list to avoid prolonged fasting.”)
Malnutrition. Hartmann’s procedure. (“Malnutrition leads to delayed wound healing, reduced ventilatory capacity, reduced immunity and an increased risk of infection. […] The two main methods of feeding are either by the enteral route or the parenteral route. Enteral feeding is via the gastrointestinal tract. It is less expensive and is associated with fewer complications than feeding by the parenteral route. […] The parenteral route should only be used if there is an inability to ingest, digest, absorb or propulse nutrients through the gastrointestinal tract. It can be administered by either a peripheral or central line. Peripheral parenteral nutrition can cause thrombophlebitis […] Sepsis is the most frequent and serious complication of centrally administered parenteral nutrition.”)
Acute Kidney Injury. (“The aetiology of acute renal failure can be thought of in three main categories: •Pre-renal: the glomerular filtration is reduced because of poor renal perfusion. This is usually caused by hypovolaemia as a result of acute blood loss, fluid depletion or hypotension. […] • Renal: this is the result of damage directly to the glomerulus or tubule. The use of drugs such as NSAIDs, contrast agents or aminoglycosides all have direct nephrotoxic effects. Acute tubular necrosis can occur as a result of prolonged hypoperfusion […]. Pre-existing renal disease such as diabetic nephropathy or glomerulonephritis makes patients more susceptible to further renal injury. •Post-renal: this can be simply the result of a blocked catheter. […] Calculi, blood clots, ureteric ligation and prostatic hypertrophy can also all lead to obstruction of urinary flow.”)
Post-operative ileus.

Pulmonary embolism.

April 18, 2018 Posted by | Books, Cancer/oncology, Cardiology, Gastroenterology, Infectious disease, Medicine, Nephrology, Neurology

Medical Statistics (II)

In this post I’ll include some links and quotes related to topics covered in chapters 2 and 3 of the book. Chapter 2 is about ‘Collecting data’ and chapter 3 is about ‘Handling data: what steps are important?’

“Data collection is a key part of the research process, and the collection method will impact on later statistical analysis of the data. […] Think about the anticipated data analysis [in advance] so that data are collected in the appropriate format, e.g. if a mean will be needed for the analysis, then don’t record the data in categories, record the actual value. […] *It is useful to pilot the data collection process in a range of circumstances to make sure it will work in practice. *This usually involves trialling the data collection form on a smaller sample than intended for the study and enables problems with the data collection form to be identified and resolved prior to main data collection […] In general don’t expect the person filling out the form to do calculations as this may lead to errors, e.g. calculating a length of time between two dates. Instead, record each piece of information to allow computation of the particular value later […] The coding scheme should be designed at the same time as the form so that it can be built into the form. […] It may be important to distinguish between data that are simply missing from the original source and data that the data extractor failed to record. This can be achieved using different codes […] The use of numerical codes for non-numerical data may give the false impression that these data can be treated as if they were numerical data in the statistical analysis. This is not so.”

“It is critical that data quality is monitored and that this happens as the study progresses. It may be too late if problems are only discovered at the analysis stage. If checks are made during the data collection then problems can be corrected. More frequent checks may be worthwhile at the beginning of data collection when processes may be new and staff may be less experienced. […] The layout […] affects questionnaire completion rates and therefore impacts on the overall quality of the data collected.”

“Sometimes researchers need to develop a new measurement or questionnaire scale […] To do this rigorously requires a thorough process. We will outline the main steps here and note the most common statistical measures used in the process. […] Face validity *Is the scale measuring what it sets out to measure? […] Content validity *Does the scale cover all the relevant areas? […] *Between-observers consistency: is there agreement between different observers assessing the same individuals? *Within-observers consistency: is there agreement between assessments on the same individuals by the same observer on two different occasions? *Test-retest consistency: are assessments made on two separate occasions on the same individual similar? […] If a scale has several questions or items which all address the same issue then we usually expect each individual to get similar scores for those questions, i.e. we expect their responses to be internally consistent. […] Cronbach’s alpha […] is often used to assess the degree of internal consistency. [It] is calculated as an average of all correlations among the different questions on the scale. […] *Values are usually expected to be above 0.7 and below 0.9 *Alpha below 0.7 broadly indicates poor internal consistency *Alpha above 0.9 suggests that the items are very similar and perhaps fewer items could be used to obtain the same overall information”.
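The variance form of Cronbach’s alpha is easy to compute directly; below is a minimal Python sketch with hypothetical questionnaire scores. (Note that it coincides with the ‘average of all correlations’ description above only when the items are standardized; the variance form is the one usually computed in practice.)

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a (subjects x items) table of scores.

    Uses the usual variance form:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(rows[0])
    items = list(zip(*rows))                      # one tuple of scores per question
    item_var_sum = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical data: 5 subjects answering 3 similar questions on a 1-5 scale
scores = [[4, 5, 4],
          [2, 2, 3],
          [5, 4, 5],
          [3, 3, 3],
          [1, 2, 1]]
print(round(cronbach_alpha(scores), 2))           # → 0.95
```

Here alpha comes out above 0.9, which per the quoted rule of thumb would suggest the three questions are so similar that fewer items might convey the same information.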

Bland–Altman plot.
Coefficient of variation.
Intraclass correlation.
Cohen’s kappa.
Likert scale. (“The key characteristic of Likert scales is that the scale is symmetrical. […] Care is needed when analyzing Likert scale data even though a numerical code is assigned to the responses, since the data are ordinal and discrete. Hence an average may be misleading […] It is quite common to collapse Likert scales into two or three categories such as agree versus disagree, but this has the disadvantage that data are discarded.”)
Visual analogue scale. (“VAS scores can be treated like continuous data […] Where it is feasible to use a VAS, it is preferable as it provides greater statistical power than a categorical scale”)

“Correct handling of data is essential to produce valid and reliable statistics. […] Data from research studies need to be coded […] It is important to document the coding scheme for categorical variables such as sex where it will not be obviously [sic, US] what the values mean […] It is strongly recommended that a unique numerical identifier is given to each subject, even if the research is conducted anonymously. […] Computerized datasets are often stored in a spreadsheet format with rows and columns of data. For most statistical analyses it is best to enter the data so that each row represents a different subject and each column a different variable. […] Prefixes or suffixes can be used to denote […] repeated measurements. If there are several repeated variables, use the same ‘scheme’ for all to avoid confusion. […] Try to avoid mixing suffixes and prefixes as it can cause confusion.”

“When data are entered onto a computer at different times it may be necessary to join datasets together. […] It is important to avoid over-writing a current dataset with a new updated version without keeping the old version as a separate file […] the two datasets must use exactly the same variable names for the same variables and the same coding. Any spelling mistakes will prevent a successful joining. […] It is worth checking that the joining has worked as expected by checking that the total number of observations in the updated file is the sum of the two previous files, and that the total number of variables is unchanged. […] When new data are collected on the same individuals at a later stage […], it may [again] be necessary to merge datasets. In order to do this the unique subject identifier must be used to identify the records that must be matched. For the merge to work, all variable names in the two datasets must be different except for the unique identifier. […] Spreadsheets are useful for entering and storing data. However, care should be taken when cutting and pasting different datasets to avoid misalignment of data. […] it is best not to join or sort datasets using a spreadsheet […in some research contexts, I’d add, this is also just plain impossible to even try, due to the amount of data involved – US…] […] It is important to ensure that a unique copy of the current file, the ‘master copy’, is stored at all times. Where the study involves more than one investigator, everyone needs to know who has responsibility for this. It is also important to avoid having two people revising the same file at the same time. […] It is important to keep a record of any changes that are made to the dataset and keep dated copies of datasets as changes are made […] Don’t overwrite datasets with edited versions as older versions may be needed later on.”
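As a toy illustration of the merge logic described above (all dataset and variable names here are hypothetical), matching two datasets on a unique subject identifier might look like this in Python:

```python
# Two hypothetical datasets sharing only the unique identifier 'id';
# all other variable names differ, as the merge requires.
baseline = [
    {"id": 1, "sex": "F", "age": 34},
    {"id": 2, "sex": "M", "age": 58},
]
followup = [
    {"id": 1, "bp_12m": 118},
    {"id": 2, "bp_12m": 132},
]

followup_by_id = {row["id"]: row for row in followup}

# Match records on the unique identifier and combine their variables
merged = [{**row, **followup_by_id.get(row["id"], {})} for row in baseline]

# Post-merge sanity checks of the kind recommended above
assert len(merged) == len(baseline)               # no observations lost or duplicated
assert all("bp_12m" in row for row in merged)     # follow-up variable attached
```

Statistical packages and spreadsheet alternatives do this matching for you, but the same checks apply: record counts and variable counts after a join or merge should be exactly what the two input files predict.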

“Where possible, it is important to do some [data entry] checks early on to leave time for addressing problems while the study is in progress. […] *Check a random sample of forms for data entry accuracy. If this reveals problems then further checking may be needed. […] If feasible, consider checking data entry forms for key variables, e.g. the primary outcome. […] Range checks: […] tabulate all data to ensure there are no invalid values […] make sure responses are consistent with each other within subjects, e.g. check for any impossible or unlikely combination of responses such as a male with a pregnancy […] Check where feasible that any gaps are true gaps and not missed data entry […] Sometimes finding one error may lead to others being uncovered. For example, if a spreadsheet was used for data entry and one entry was missed, all following entries may be in the wrong columns. Hence, always consider if the discovery of one error may imply that there are others. […] Plots can be useful for checking larger datasets.”
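The range and consistency checks described above are straightforward to automate; a small Python sketch (records, field names and thresholds all made up for illustration):

```python
# Made-up records: id 2 has an impossible age, id 3 an impossible combination
records = [
    {"id": 1, "sex": "M", "age": 45, "pregnant": 0},
    {"id": 2, "sex": "F", "age": 210, "pregnant": 1},
    {"id": 3, "sex": "M", "age": 30, "pregnant": 1},
]

problems = []
for r in records:
    if not 0 <= r["age"] <= 120:                       # range check
        problems.append((r["id"], "age out of range"))
    if r["sex"] == "M" and r["pregnant"]:              # within-subject consistency check
        problems.append((r["id"], "male recorded as pregnant"))

for subject_id, message in problems:
    print(subject_id, message)
```

Running such checks early, as the quote recommends, means a flagged record can still be traced back to the original form and corrected while the study is in progress.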

Data monitoring committee.
Damocles guidelines.
Overview of stopping rules for clinical trials.
Pocock boundary.
Haybittle–Peto boundary.

“Trials are only stopped early when it is considered that the evidence for either benefit or harm is overwhelmingly strong. In such cases, the effect size will inevitably be larger than anticipated at the outset of the trial in order to trigger the early stop. Hence effect estimates from trials stopped early tend to be more extreme than would be the case if these trials had continued to the end, and so estimates of the efficacy or harm of a particular treatment may be exaggerated. This phenomenon has been demonstrated in recent reviews.1,2 […] Sometimes it becomes apparent part way through a trial that the assumptions made in the original sample size calculations are not correct. For example, where the primary outcome is a continuous variable, an estimate of the standard deviation (SD) is needed to calculate the required sample size. When the data are summarized during the trial, it may become apparent that the observed SD is different from that expected. This has implications for the statistical power. If the observed SD is smaller than expected then it may be reasonable to reduce the sample size but if it is bigger then it may be necessary to increase it.”
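To see why a mis-estimated SD matters so much, recall the standard approximate formula for the sample size per group when comparing two means: n ≈ 2(z1−α/2 + z1−β)²σ²/δ², so the requirement scales with the square of the SD. A quick sketch (z-values correspond to two-sided α = 0.05 and 80% power; the numbers are illustrative only):

```python
from math import ceil

def n_per_group(sd, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group for comparing two means:
    n = 2 * (z_alpha + z_beta)^2 * sd^2 / delta^2
    (defaults: two-sided alpha = 0.05, power = 80%)
    """
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# Planned with SD = 10 to detect a mean difference of 5...
print(n_per_group(10, 5))    # → 63 per group
# ...but if the observed SD turns out to be 14, the requirement roughly doubles:
print(n_per_group(14, 5))    # → 123 per group
```

The square-law dependence is why an interim look at the observed SD can force a substantial upward revision of the sample size, while a smaller-than-expected SD may justify a reduction.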

April 16, 2018 Posted by | Books, Medicine, Statistics

100 cases in surgery (I)

We hope this book will give a good introduction to common surgical conditions seen in everyday surgical practice. Each question has been followed up with a brief overview of the condition and its immediate management. The book should act as an essential revision aid for surgical finals and as a basis for practising surgery after qualification.

This book is far from the first book I read in this series, and the format is the same as usual: there are 100 cases included, covering a variety of different organ systems and diagnoses/settings. The first page of a case presents a basic history and some key findings (lab tests, x-rays, results of imaging studies) and asks you a few questions about the case; the second and sometimes third page then provides answers to the questions and some important observations of note. Cases have of course been chosen in order to illustrate a wide variety of different medical scenarios involving many different organ systems and types of complaints. All cases are ‘to some extent’ surgical in nature, but by no means will surgery necessarily be the required/indicated treatment option in every specific context; sometimes non-surgical management will be preferable, sometimes (much too often, in some oncological settings…) tumours are not resectable, some of the cases deal with complications of surgical procedures, etc.

The degree to which I was familiar with the topics covered in the book was highly variable; I’ve never really read any previous medical textbooks (more or less) exclusively devoted to surgical topics, but I have previously in a variety of contexts read about topics such as neurosurgery and cardiovascular surgery, and the recent endocrinology text of course covered surgical topics within this field in some detail; on the other hand my knowledge of (e.g.) otorhinolaryngology is, well, …limited. Part of my motivation for having a go at this book was precisely that my knowledge of the field of surgery felt a bit too fragmented (and, in some cases, non-existent), even if I still didn’t feel like reading, say, an 800-page handbook like this one on these topics. Despite the more modest page-count of this book I would caution against thinking it is a particularly easy/fast read; there are a lot of cases and each of them has something to teach you – and as should also be easily inferred from the quote from the preface included above, this book is probably not readable if you don’t have some medical background of one kind or another (‘reads fluent medical textbook’).

Below I have added some links to topics covered in the first half of the book, as well as a few observations from the coverage.

Abdominal hernias.
Large-bowel obstruction. Small-bowel obstruction.
Perianal abscess.
Malignant melanoma. (“Factors in the history that are suggestive of malignant change in a mole[:] *Change in surface *itching *increase in size/shape/thickness *Change in colour *bleeding/ulceration *brown/pink halo […] *enlarged local lymph nodes”)
Meckel’s diverticulum.
Rectal cancer. Colorectal Cancer. (“Colorectal cancer is the second commonest cancer causing death in the UK […]. Right-sided lesions can present with iron-deficiency anaemia, weight loss or a right iliac fossa mass. Left-sided lesions present with alteration in bowel habit, rectal bleeding, or as an emergency with obstruction or perforation.”)
Sigmoid and cecal volvulus.
Anal fissure.
Diverticular disease.
Crohn Disease Pathology. (“Increasing frequency of stool, anorexia, low-grade fever, abdominal tenderness and anaemia suggest an inflammatory bowel disease. […] The initial management of uncomplicated Crohn’s disease should be medical.”)
Ulcerative colitis. (“Long-standing ulcerative colitis carries an approximate 3 per cent risk of malignant change after 10 years”).
Acute Cholecystitis and Biliary Colic. (“The majority of episodes of acute cholecystitis settle with analgesia and antibiotics.”)
Acute pancreatitis. (“Ranson’s criteria are used to grade the severity of alcoholic pancreatitis […] Each fulfilled criterion scores a point and the total indicates the severity. […] Estimates on mortality are based on the number of points scored: 0–2 = 2 per cent; 3–4 = 15 per cent; 5–6 = 40 per cent; >7 = 100 per cent. […] The aim of treatment is to halt the progression of local inflammation into systemic inflammation, which can result in multi-organ failure. Patients will often require nursing in a high-dependency or intensive care unit. They require prompt fluid resuscitation, a urinary catheter and central venous pressure monitoring. Early enteral feeding is advocated by some specialists. If there is evidence of sepsis, the patient should receive broad-spectrum antibiotics. […] patients should be managed aggressively”)
Ascending cholangitis.
Surgical Treatment of Perforated Peptic Ulcer.
Splenic rupture. Kehr’s sign.
Barrett’s esophagus. Peptic strictures of the esophagus. (“Proton pump inhibitors are effective in reducing stricture recurrence and in the treatment of Barrett’s oesophagus. If frequent dilatations are required despite acid suppression, then surgery should be considered. […] The risk of cancer is increased by up to 30 times in patients with Barrett’s oesophagus. If Barrett’s oesophagus is found at endoscopy, then the patient should be started on lifelong acid suppression. The patient should then have endoscopic surveillance to detect dysplasia before progression to carcinoma.”)
Esophageal Cancer. (“oesophageal carcinoma […] typically affects patients between 60 and 70 years of age and has a higher incidence in males. […] Dysphagia is the most common presenting symptom and is often associated with weight loss. […] Approximately 40 per cent of patients are suitable for surgical resection.”)
Pancreatic cancer. Courvoisier’s law. (“Pancreatic cancer classically presents with painless jaundice from biliary obstruction at the head of the pancreas and is associated with a distended gallbladder. Patients with pancreatic cancer can also present with epigastric pain, radiating through to the back, and vomiting due to duodenal obstruction. Pancreatic cancer occurs in patients between 60 and 80 years of age […] Roughly three-quarters have metastases at presentation […] Only approximately 15 per cent of pancreatic malignancies are surgically resectable.”)
Chronic pancreatitis. (“Chronic pancreatitis is an irreversible inflammation causing pancreatic fibrosis and calcification. Patients usually present with chronic abdominal pain and normal or mildly elevated pancreatic enzyme levels. The pancreas may have lost its endocrine and exocrine function, leading to diabetes mellitus and steatorrhea. […] The mean age of onset is 40 years, with a male preponderance of 4:1. […] thirty per cent of cases of chronic pancreatitis are idiopathic.”)
Gastric cancer. (“Gastric carcinoma is the second commonest cause of cancer worldwide. […] The highest incidence is in Eastern Asia, with a falling incidence in Western Europe. Diet and H. pylori infection are thought to be the two most important environmental factors in the development of gastric cancer. Diets rich in pickled vegetables, salted fish and smoked meats are thought to predispose to gastric cancer. […] Gastric cancer typically presents late and is associated with a poor prognosis. […] Surgical resection is not possible in the majority of patients.”)
Fibroadenomas of the breast. (“On examination, [benign fibroadenomas] tend to be spherical, smooth and sometimes lobulated with a rubbery consistency. The differential diagnosis includes fibrocystic disease (fluctuation in size with menstrual cycle and often associated with mild tenderness), a breast cyst (smooth, well-defined consistency like fibroadenoma but a hard as opposed to a rubbery consistency) or breast carcinoma (irregular, indistinct surface and shape with hard consistency).”)
Graves’ disease. (“Patients often present with many symptoms including palpitations, anxiety, thirst, sweating, weight loss, heat intolerance and increased bowel frequency. Enhanced activity of the adrenergic system also leads to agitation and restlessness. Approximately 25–30 per cent of patients with Graves’ disease have clinical evidence of ophthalmopathy. This almost only occurs in Graves’ disease (very rarely found in hypothyroidism)”)
Ruptured abdominal aortic aneurysm: a surgical emergency with many clinical presentations.
Temporal arteritis.
Transient ischemic attack. (“A stenosis of more than 70 per cent in the internal carotid artery is an indication for carotid endarterectomy in a patient with TIAs […]. The procedure should be carried out as soon as possible and within 2 weeks of the symptoms to prevent a major stroke.”)
Acute Mesenteric Ischemia.
Acute limb ischaemia. (“Signs and symptoms of acute limb ischaemia – the six Ps: •Pain •Pulseless •Pallor •Paraesthesia •Perishingly cold •Paralysis”).
Cervical rib.
Peripheral Arterial Occlusive Disease. (“The disease will only progress in one in four patients with intermittent claudication: therefore, unless the disease is very disabling for the patient, treatment is conservative. […] Investigations should include ankle–brachial pressure index (ABPI): this is typically <0.9 in patients with claudication; however, calcified vessels (typically in patients with diabetes) may result in an erroneously normal or high ABPI. […] Regular exercise has been shown to increase the claudication distance. In the minority of cases that do require intervention (i.e. severe short distance claudication not improving with exercise), angioplasty and bypass surgery are considered.”)
Venous ulcer. Marjolin’s ulcer. (“It is important to distinguish arterial from venous ulceration, as use of compression to treat the former type of ulcer is contraindicated.”)
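As an aside, the Ranson point-to-mortality mapping quoted under acute pancreatitis above is simple to express in code; this Python sketch encodes only the quoted mortality figures, not the underlying clinical criteria (age, white cell count, glucose, etc.), and is purely illustrative:

```python
def ranson_mortality(points):
    """Mortality estimate by total Ranson score, per the figures quoted above."""
    if points <= 2:
        return "2 per cent"
    if points <= 4:
        return "15 per cent"
    if points <= 6:
        return "40 per cent"
    return "100 per cent"

print(ranson_mortality(3))   # → 15 per cent
```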

April 14, 2018 Posted by | Books, Cancer/oncology, Gastroenterology, Medicine

Structural engineering

“The purpose of the book is three-fold. First, I aim to help the general reader appreciate the nature of structure, the role of the structural engineer in man-made structures, and understand better the relationship between architecture and engineering. Second, I provide an overview of how structures work: how they stand up to the various demands made of them. Third, I give students and prospective students in engineering, architecture, and science access to perspectives and qualitative understanding of advanced modern structures — going well beyond the simple statics of most introductory texts. […] Structural engineering is an important part of almost all undergraduate courses in engineering. This book is novel in the use of ‘thought-experiments’ as a straightforward way of explaining some of the important concepts that students often find the most difficult. These include virtual work, strain energy, and maximum and minimum energy principles, all of which are basic to modern computational techniques. The focus is on gaining understanding without the distraction of mathematical detail. The book is therefore particularly relevant for students of civil, mechanical, aeronautical, and aerospace engineering but, of course, it does not cover all of the theoretical detail necessary for completing such courses.”

The above quote is from the book’s preface. I gave the book 2 stars on goodreads, and I must say that I think David Muir Wood’s book in this series on a similar and closely overlapping topic, civil engineering, was just a significantly better book – if you’re planning on reading only one book on these topics, in my opinion you should pick Wood’s book. I have two main complaints about this book: there’s too much stuff about the aesthetic properties of structures and about the history and development of the differences between architecture and engineering; and the author seems to think it’s no problem to cover quite complicated topics with just analogies and thought experiments, without showing you any of the equations. As for the first point, I don’t really have any interest in aesthetics or architectural history; as for the second, I can handle math reasonably well, but I usually have trouble when people insist on hiding the equations from me and talking only ‘in images’. The absence of equations doesn’t mean the coverage is much dumbed down; rather, the author is trying to cover the sort of material we usually use mathematics to talk about – because mathematics is the most efficient language for it – using other kinds of language, and things get lost in the translation. He got rid of the math, but not the complexity. The book does include many illustrations, including illustrations of some quite complicated topics and dynamics, but some of the things he talks about are things you can’t illustrate well with images because you ‘run out of dimensions’ before you’ve handled all the relevant aspects/dynamics – an admission he himself makes in the book.

Anyway, the book is not terrible and there’s some interesting stuff in there. I’ve added a few more quotes and some links related to the book’s coverage below.

“All structures span a gap or a space of some kind and their primary role is to transmit the imposed forces safely. A bridge spans an obstruction like a road or a river. The roof truss of a house spans the rooms of the house. The fuselage of a jumbo jet spans between wheels of its undercarriage on the tarmac of an airport terminal and the self-weight, lift and drag forces in flight. The hull of a ship spans between the variable buoyancy forces caused by the waves of the sea. To be fit for purpose every structure has to cope with specific local conditions and perform inside acceptable boundaries of behaviour—which engineers call ‘limit states’. […] Safety is paramount in two ways. First, the risk of a structure totally losing its structural integrity must be very low—for example a building must not collapse or a ship break up. This maximum level of performance is called an ultimate limit state. If a structure should reach that state for whatever reason then the structural engineer tries to ensure that the collapse or break up is not sudden—that there is some degree of warning—but this is not always possible […] Second, structures must be able to do what they were built for—this is called serviceability or performance limit state. So for example a skyscraper building must not sway so much that it causes discomfort to the occupants, even if the risk of total collapse is still very small.”

“At its simplest force is a pull (tension) or a push (compression). […] There are three ways in which materials are strong in different combinations—pulling (tension), pushing (compression), and sliding (shear). Each is very important […] all intact structures have internal forces that balance the external forces acting on them. These external forces come from simple self-weight, people standing, sitting, walking, travelling across them in cars, trucks, and trains, and from the environment such as wind, water, and earthquakes. In that state of equilibrium it turns out that structures are naturally lazy—the energy stored in them is a minimum for that shape or form of structure. Form-finding structures are a special group of buildings that are allowed to find their own shape—subject to certain constraints. There are two classes—in the first, the form-finding process occurs in a model (which may be physical or theoretical) and the structure is scaled up from the model. In the second, the structure is actually built and then allowed to settle into shape. In both cases the structures are self-adjusting in that they move to a position in which the internal forces are in equilibrium and contain minimum energy. […] there is a big problem in using self-adjusting structures in practice. The movements under changing loads can make the structures unfit for purpose. […] Triangles are important in structural engineering because they are the simplest stable form of structure and you see them in all kinds of structures—whether form-finding or not. […] Other forms of pin jointed structure, such as a rectangle, will deform in shear as a mechanism […] unless it has diagonal bracing—making it triangular. […] bending occurs in part of a structure when the forces acting on it tend to make it turn or rotate—but it is constrained or prevented from turning freely by the way it is connected to the rest of the structure or to its foundations. The turning forces may be internal or external.”

“Energy is the capacity of a force to do work. If you stretch an elastic band it has an internal tension force resisting your pull. If you let go of one end the band will recoil and could inflict a sharp sting on your other hand. The internal force has energy or the capacity to do work because you stretched it. Before you let go the energy was potential; after you let go the energy became kinetic. Potential energy is the capacity to do work because of the position of something—in this case because you pulled the two ends of the band apart. […] A car at the top of a hill has the potential energy to roll down the hill if the brakes are released. The potential energy in the elastic band and in a structure has a specific name—it is called ‘strain energy’. Kinetic energy is due to movement, so when you let go of the band […] the potential energy is converted into kinetic energy. Kinetic energy depends on mass and velocity—so a truck can develop more kinetic energy than a small car. When a structure is loaded by a force then the structure moves in whatever way it can to ‘get out of the way’. If it can move freely it will do—just as if you push a car with the handbrake off it will roll forward. However, if the handbrake is on the car will not move, and an internal force will be set up between the point at which you are pushing and the wheels as they grip the road.”

“[A] rope hanging freely as a catenary has minimum energy and […] it can only resist one kind of force—tension. Engineers say that it has one degree of freedom. […] In brief, degrees of freedom are the independent directions in which a structure or any part of a structure can move or deform […] Movements along degrees of freedom define the shape and location of any object at a given time. Each part, each piece of a physical structure whatever its size is a physical object embedded in and connected to other objects […] similar objects which I will call its neighbours. Whatever its size each has the potential to move unless something stops it. Where it may move freely […] then no internal resisting force is created. […] where it is prevented from moving in any direction a reaction force is created with consequential internal forces in the structure. For example at a support to a bridge, where the whole bridge is normally stopped from moving vertically, then an external vertical reaction force develops which must be resisted by a set of internal forces that will depend on the form of the bridge. So inside the bridge structure each piece, however small or large, will move—but not freely. The neighbouring objects will get in the way […]. When this happens internal forces are created as the objects bump up against each other and we represent or model those forces along the pathways which are the local degrees of freedom. The structure has to be strong enough to resist these internal forces along these pathways.”

“The next question is ‘How do we find out how big the forces and movements are?’ It turns out that there is a whole class of structures where this is reasonably straightforward and these are the structures covered in elementary textbooks. Engineers call them ‘statically determinate’ […] For these structures we can find the sizes of the forces just by balancing the internal and external forces to establish equilibrium. […] Unfortunately many real structures can’t be fully explained in this way—they are ‘statically indeterminate’. This is because whilst establishing equilibrium between internal and external forces is necessary it is not sufficient for finding all of the internal forces. […] The four-legged stool is statically indeterminate. You will begin to understand this if you have ever sat at a four-legged wobbly table […] which has one leg shorter than the other three legs. There can be no force in that leg because there is no reaction from the ground. What is more, the opposite leg will have no internal force either because otherwise there would be a net turning moment about the line joining the other two legs. Thus the table is balanced on two legs—which is why it wobbles back and forth. […] each leg has one degree of freedom but we have only three ways of balancing them in the (x,y,z) directions. In mathematical terms, we have four unknown variables (the internal forces) but only three equations (balancing equilibrium in three directions). It follows that there isn’t just one set of forces in equilibrium—indeed, there are many such sets.”
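To make the ‘four unknowns, three equations’ point concrete, here is a minimal numpy sketch of the stool. All the numbers are made up (legs at the corners of a unit square, an 800 N load at the centre): the equilibrium matrix has rank 3, so infinitely many sets of leg forces balance the same load.

```python
import numpy as np

# Hypothetical four-legged stool: legs at the corners of a unit square,
# a load W = 800 N at the centre.  Unknowns: vertical leg reactions R1..R4.
W = 800.0
legs = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # (x, y) of each leg

# Three equilibrium equations: vertical force balance, and moment balance
# about the y and x axes (load acts at x = 0.5, y = 0.5).
A = np.vstack([
    np.ones(4),    # R1 + R2 + R3 + R4 = W
    legs[:, 0],    # sum(Ri * xi) = W * 0.5
    legs[:, 1],    # sum(Ri * yi) = W * 0.5
])
b = np.array([W, W * 0.5, W * 0.5])

# Four unknowns but only rank-3 equations: statically indeterminate.
print(np.linalg.matrix_rank(A))  # 3

# Any multiple of this null-space vector (opposite legs up, the other
# pair down) can be added to a solution and equilibrium still holds:
R_particular, *_ = np.linalg.lstsq(A, b, rcond=None)
null = np.array([1.0, -1.0, 1.0, -1.0])
for t in (0.0, 50.0):
    R = R_particular + t * null
    assert np.allclose(A @ R, b)  # both are valid equilibrium states
    print(R)
```

Which of the many equilibrium sets the real stool ends up in depends on the stiffness of the legs and the floor, which is exactly why equilibrium alone is not enough here.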

“[W]hen a structure is in equilibrium it has minimum strain energy. […] Strictly speaking, minimum strain energy as a criterion for equilibrium is [however] true only in specific circumstances. To understand this we need to look at the constitutive relations between forces and deformations or displacements. Strain energy is stored potential energy and that energy is the capacity to do work. The strain energy in a body is there because work has been done on it—a force moved through a distance. Hence in order to know the energy we must know how much displacement is caused by a given force. This is called a ‘constitutive relation’ and has the form ‘force equals a constitutive factor times a displacement’. The most common of these relationships is called ‘linear elastic’ where the force equals a simple numerical factor—called the stiffness—times the displacement […] The inverse of the stiffness is called flexibility”.
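The linear-elastic case is easy to check numerically. In this sketch (stiffness and force values assumed) the total potential energy, strain energy ½kx² minus the work Fx done by the load, is smallest exactly at the equilibrium displacement x = F/k, i.e. flexibility times force.

```python
import numpy as np

# Linear-elastic spring: force = stiffness * displacement.
k = 2000.0   # stiffness (N/m), assumed value
F = 100.0    # applied force (N), assumed value

# Total potential energy: strain energy 0.5*k*x^2 minus work done F*x.
x = np.linspace(0.0, 0.1, 100001)
energy = 0.5 * k * x**2 - F * x

# The minimum of the total potential energy sits at the equilibrium
# displacement x = F/k (the flexibility 1/k times the force).
x_eq = x[np.argmin(energy)]
print(x_eq, F / k)  # both ≈ 0.05 m
```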

“Aeroplanes take off or ascend because the lift forces due to the forward motion of the plane exceed the weight […] In level flight or cruise the plane is neutrally buoyant and flies at a steady altitude. […] The structure of an aircraft consists of four sets of tubes: the fuselage, the wings, the tail, and the fin. For obvious reasons their weight needs to be as small as possible. […] Modern aircraft structures are semi-monocoque—meaning stressed skin but with a supporting frame. In other words the skin covering, which may be only a few millimetres thick, becomes part of the structure. […] In an overall sense, the lift and drag forces effectively act on the wings through centres of pressure. The wings also carry the weight of engines and fuel. During a typical flight, the positions of these centres of force vary along the wing—for example as fuel is used. The wings are balanced cantilevers fixed to the fuselage. Longer wings (compared to their width) produce greater lift but are also necessarily heavier—so a compromise is required.”

“When structures move quickly, in particular if they accelerate or decelerate, we have to consider […] the inertia force and the damping force. They occur, for example, as an aeroplane takes off and picks up speed. They occur in bridges and buildings that oscillate in the wind. As these structures move the various bits of the structure remain attached—perhaps vibrating in very complex patterns, but they remain joined together in a state of dynamic equilibrium. An inertia force results from an acceleration or deceleration of an object and is directly proportional to the weight of that object. […] Newton’s 2nd Law tells us that the magnitudes of these [inertial] forces are proportional to the rates of change of momentum. […] Damping arises from friction or ‘looseness’ between components. As a consequence, energy is dissipated into other forms such as heat and sound, and the vibrations get smaller. […] The kinetic energy of a structure in static equilibrium is zero, but as the structure moves its potential energy is converted into kinetic energy. This is because the total energy remains constant by the principle of the conservation of energy (the first law of thermodynamics). The changing forces and displacements along the degree of freedom pathways travel as a wave […]. The amplitude of the wave depends on the nature of the material and the connections between components.”
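A minimal sketch of damped free vibration, with all parameter values assumed: dynamic equilibrium m·a + c·v + k·x = 0 is stepped forward in time, and damping dissipates energy so that each successive displacement peak is smaller than the last.

```python
# Mass-spring-damper released from rest (all values assumed);
# dynamic equilibrium at every instant: m*a + c*v + k*x = 0.
m, c, k = 1.0, 0.4, 100.0     # mass (kg), damping (N*s/m), stiffness (N/m)
x, v, dt = 0.05, 0.0, 1e-4    # released from 5 cm; semi-implicit Euler step

peaks = []
prev_v = v
for _ in range(int(5.0 / dt)):
    a = -(c * v + k * x) / m  # acceleration from dynamic equilibrium
    v += a * dt
    x += v * dt
    if prev_v > 0 >= v:       # velocity changes sign: a displacement peak
        peaks.append(x)
    prev_v = v

# Damping dissipates energy, so successive peaks shrink.
print(peaks[:3])
assert all(p2 < p1 for p1, p2 in zip(peaks, peaks[1:]))
```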

“For [a] structure to be safe the materials must be strong enough to resist the tension, the compression, and the shear. The strength of materials in tension is reasonably straightforward. We just need to know the limiting forces the material can resist. This is usually specified as a set of stresses. A stress is a force divided by a cross sectional area and represents a localized force over a small area of the material. Typical limiting tensile stresses are called the yield stress […] and the rupture stress—so we just need to know their numerical values from tests. Yield occurs when the material cannot regain its original state, and permanent displacements or strains occur. Rupture is when the material breaks or fractures. […] Limiting average shear stresses and maximum allowable stress are known for various materials. […] Strength in compression is much more difficult […] Modern practice using the finite element method enables us to make theoretical estimates […] but it is still approximate because of the simplifications necessary to do the computer analysis […]. One of the challenges to engineers who rely on finite element analysis is to make sure they understand the implications of the simplifications used.”
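The tension check described here really is just a division and a comparison; a toy example with assumed numbers for a steel tie:

```python
# Minimal sketch (assumed numbers): a steel tie in tension.
# Stress = force / cross-sectional area; compare to the yield stress.
force = 50_000.0          # N
area = 4e-4               # m^2 (a 20 mm x 20 mm bar)
yield_stress = 250e6      # Pa, typical mild steel

stress = force / area     # 125 MPa
print(stress / 1e6, "MPa")
assert stress < yield_stress  # stays elastic: no permanent strain
```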

“Dynamic loads cause vibrations. One particularly dangerous form of vibration is called resonance […]. All structures have a natural frequency of free vibration. […] Resonance occurs if the frequency of an external vibrating force coincides with the natural frequency of the structure. The consequence is a rapid build up of vibrations that can become seriously damaging. […] Wind is a major source of vibrations. As it flows around a bluff body the air breaks away from the surface and moves in a circular motion like a whirlpool or whirlwind as eddies or vortices. Under certain conditions these vortices may break away on alternate sides, and as they are shed from the body they create pressure differences that cause the body to oscillate. […] a structure is in stable equilibrium when a small perturbation does not result in large displacements. A structure in dynamic equilibrium may oscillate about a stable equilibrium position. […] Flutter is dynamic and a form of wind-excited self-reinforcing oscillation. It occurs, as in the P-delta effect, because of changes in geometry. Forces that are no longer in line because of large displacements tend to modify those displacements of the structure, and these, in turn, modify the forces, and so on. In this process the energy input during a cycle of vibration may be greater than that lost by damping and so the amplitude increases in each cycle until destruction. It is a positive feed-back mechanism that amplifies the initial deformations, causes non-linearity, material plasticity and decreased stiffness, and reduced natural frequency. […] Regular pulsating loads, even very small ones, can cause other problems too through a phenomenon known as fatigue. The word is descriptive—under certain conditions the materials just get tired and crack. A normally ductile material like steel becomes brittle. Fatigue occurs under very small loads repeated many millions of times. All materials in all types of structures have a fatigue limit. 
[…] Fatigue damage occurs deep in the material as microscopic bonds are broken. The problem is particularly acute in the heat affected zones of welded structures.”
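Resonance is easy to see from the standard steady-state amplitude formula for a damped oscillator under a sinusoidal force F₀·sin(ωt); the parameter values below are assumed for illustration.

```python
import numpy as np

# Standard steady-state amplitude of a forced, damped oscillator:
#   X(w) = F0 / sqrt((k - m*w^2)^2 + (c*w)^2)
m, c, k, F0 = 1.0, 0.5, 100.0, 10.0       # assumed values
w_n = np.sqrt(k / m)                      # natural frequency: 10 rad/s here

w = np.linspace(0.1, 30.0, 3000)
X = F0 / np.sqrt((k - m * w**2)**2 + (c * w)**2)

# The amplitude peaks when the forcing frequency is close to w_n: resonance.
w_peak = w[np.argmax(X)]
print(w_n, w_peak)  # for light damping the peak sits just below w_n
```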

“Resilience is the ability of a system to recover quickly from difficult conditions. […] One way of delivering a degree of resilience is to make a structure fail-safe—to mitigate failure if it happens. A household electrical fuse is an everyday example. The fuse does not prevent failure, but it does prevent extreme consequences such as an electrical fire. Damage-tolerance is a similar concept. Damage is any physical harm that reduces the value of something. A damage-tolerant structure is one in which any damage can be accommodated at least for a short time until it can be dealt with. […] human factors in failure are not just a matter of individuals’ slips, lapses, or mistakes but are also the result of organizational and cultural situations which are not easy to identify in advance or even at the time. Indeed, they may only become apparent in hindsight. It follows that another major part of safety is to design a structure so that it can be inspected, repaired, and maintained. Indeed all of the processes of creating a structure, whether conceiving, designing, making, or monitoring performance, have to be designed with sufficient resilience to accommodate unexpected events. In other words, safety is not something a system has (a property), rather it is something a system does (a performance). Providing resilience is a form of control—a way of managing uncertainties and risks.”

Antoni Gaudí. Heinz Isler. Frei Otto.
Eden Project.
Bending moment.
Shear and moment diagram.
Pyramid at Meidum.
Master builder.
John Smeaton.
Puddling (metallurgy).
Cast iron.
Isambard Kingdom Brunel.
Henry Bessemer. Bessemer process.
Institution of Structural Engineers.
Graphic statics (wiki doesn’t have an article on this topic under this name and there isn’t much here, but it looks like google has a lot if you’re interested).
Constitutive equation.
Deformation (mechanics).
Compatibility (mechanics).
Principle of Minimum Complementary Energy.
Direct stiffness method. Finite element method.
Hogging and sagging.
Centre of buoyancy. Metacentre (fluid mechanics). Angle of attack.
Box girder bridge.
D’Alembert’s principle.
S-N diagram.

April 11, 2018 Posted by | Books, Engineering, Physics | Leave a comment

Medical Statistics (I)

I was more than a little critical of the book in my review on goodreads, and the review is sufficiently detailed that I thought it would be worth including it in this post. Here’s what I wrote on goodreads (slightly edited to take full advantage of the better editing options on wordpress):

“The coverage is excessively focused on significance testing. The book also provides very poor coverage of model selection topics, where the authors not once but repeatedly recommend employing statistically invalid approaches to model selection (the authors recommend using hypothesis testing mechanisms to guide model selection, as well as using adjusted R-squared for model selection decisions – both of which are frankly awful ideas, for reasons which are obvious to people familiar with the field of model selection. “Generally, hypothesis testing is a very poor basis for model selection […] There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection.” “While adjusted R2 is useful as a descriptive statistic, it is not useful in model selection” – quotes taken directly from Burnham & Anderson’s book Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach).

The authors do not at any point in the coverage even mention the option of using statistical information criteria to guide model selection decisions, and frankly repeatedly recommend doing things which are known to be deeply problematic. The authors also cover material from Borenstein and Hedges’ meta-analysis text in the book, yet still somehow manage to give poor advice in the context of meta-analysis along similar lines (implicitly advising people to base model decisions within the context of whether to use fixed effects or random effects on the results of heterogeneity tests, despite this approach being criticized as problematic in the formerly mentioned text).

Basic and not terrible, but there are quite a few problems with this text.”

I’ll add a few more details about the above-mentioned problems before moving on to the main coverage. As for the model selection topic I refer specifically to my coverage of Burnham and Anderson’s book here and here – these guys spent a lot of pages talking about why you shouldn’t do what the authors of this book recommend, and I’m sort of flabbergasted medical statisticians don’t know this kind of stuff by now. To people who’ve read both these books, it’s not really in question who’s in the right here.
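For illustration, here is a quick numpy sketch of the information-criterion approach Burnham & Anderson advocate: scoring candidate regression models by AIC rather than by hypothesis tests. For least squares, AIC = n·ln(RSS/n) + 2K, where K counts the parameters (including the error variance). The data are simulated and the model names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)   # the true model is linear

def aic_ls(X, y):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2*K."""
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    K = X.shape[1] + 1                    # + 1 for the error variance
    return len(y) * np.log(rss[0] / len(y)) + 2 * K

candidates = {
    "intercept only": np.column_stack([np.ones(n)]),
    "linear":         np.column_stack([np.ones(n), x]),
    "quadratic":      np.column_stack([np.ones(n), x, x**2]),
}
scores = {name: aic_ls(X, y) for name, X in candidates.items()}
for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC {s:.1f}")
# The linear model should score at or near the top: the unneeded quadratic
# term is penalized by 2 per parameter even when it slightly reduces RSS.
```

The point is that the comparison is made across all candidate models at once, on the same scale, with an explicit complexity penalty; no fixed-α significance test is involved anywhere.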

I believe part of the reason why I was very annoyed at the authors at times was that they seem to promote exactly the sort of blind, unthinking hypothesis-testing approach that is unfortunately very common – the entire book is saturated with hypothesis testing stuff, which means that many other topics are woefully insufficiently covered. The meta-analysis example is probably quite illustrative; the authors spend multiple pages on study heterogeneity and how to deal with it, but the entire coverage there is centered around the discussion of a most-likely underpowered test, the result of which should, in the best case scenario, merely direct the researcher’s attention to topics he should have been thinking carefully about from the very start of his data analysis. You don’t need to quote many words from Borenstein and Hedges (here’s a relevant link) to get to the heart of the matter here:

“It makes sense to use the fixed-effect model if two conditions are met. First, we believe that all the studies included in the analysis are functionally identical. Second, our goal is to compute the common effect size for the identified population, and not to generalize to other populations. […] this situation is relatively rare. […] By contrast, when the researcher is accumulating data from a series of studies that had been performed by researchers operating independently, it would be unlikely that all the studies were functionally equivalent. Typically, the subjects or interventions in these studies would have differed in ways that would have impacted on the results, and therefore we should not assume a common effect size. Therefore, in these cases the random-effects model is more easily justified than the fixed-effect model.

A report should state the computational model used in the analysis and explain why this model was selected. A common mistake is to use the fixed-effect model on the basis that there is no evidence of heterogeneity. As [already] explained […], the decision to use one model or the other should depend on the nature of the studies, and not on the significance of this test [because the test will often have low power anyway].”

Yet these guys spend their efforts here talking about a test that is unlikely to yield useful information and which if anything probably distracts the reader from the main issues at hand: are the studies functionally equivalent? Do we assume there’s one (‘true’) effect size, or many? What do those coefficients we’re calculating actually mean? The authors do in fact include a lot of cautionary notes about how to interpret the test, but in my view all this means is that they’re devoting critical pages to peripheral issues – and perhaps even reinforcing the view that the test is important, or why else would they spend so much effort on it? – rather than promoting good thinking about the key topics at hand.
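For what it’s worth, the fixed-effect and random-effects computations themselves are simple. Here is a sketch using the standard DerSimonian–Laird estimator of the between-study variance τ², with made-up study effects and within-study variances:

```python
import numpy as np

# Made-up study effect sizes and within-study variances:
y = np.array([0.60, 0.05, 0.55, 0.10, 0.45])
v = np.array([0.02, 0.03, 0.04, 0.02, 0.05])

w = 1.0 / v                                     # fixed-effect weights
mu_fe = np.sum(w * y) / np.sum(w)               # pooled fixed-effect estimate

# Between-study variance tau^2 (DerSimonian-Laird):
Q = np.sum(w * (y - mu_fe) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)                         # random-effects weights
mu_re = np.sum(w_re * y) / np.sum(w_re)

# Random-effects weights are more equal across studies, and the pooled
# estimate now summarizes a distribution of true effects, not one value.
print(mu_fe, tau2, mu_re)
```

Note that nothing in this sketch asks whether Q is ‘significant’; which model to report is the substantive judgement Borenstein and Hedges are talking about, made before looking at any test.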

Anyway, enough of the critical comments. Below a few links related to the first chapter of the book, as well as some quotes.

Declaration of Helsinki.
Randomized controlled trial.
Minimization (clinical trials).
Blocking (statistics).
Informed consent.
Blinding (RCTs). (…related xkcd link).
Parallel study. Crossover trial.
Zelen’s design.
Superiority, equivalence, and non-inferiority trials.
Intention-to-treat concept: A review.
Case-control study. Cohort study. Nested case-control study. Cross-sectional study.
Bradford Hill criteria.
Research protocol.
Type 1 and type 2 errors.
Clinical audit. A few quotes on this topic:

“‘Clinical audit’ is a quality improvement process that seeks to improve the patient care and outcomes through systematic review of care against explicit criteria and the implementation of change. Aspects of the structures, processes and outcomes of care are selected and systematically evaluated against explicit criteria. […] The aim of audit is to monitor clinical practice against agreed best practice standards and to remedy problems. […] the choice of topic is guided by indications of areas where improvement is needed […] Possible topics [include] *Areas where a problem has been identified […] *High volume practice […] *High risk practice […] *High cost […] *Areas of clinical practice where guidelines or firm evidence exists […] The organization carrying out the audit should have the ability to make changes based on their findings. […] In general, the same methods of statistical analysis are used for audit as for research […] The main difference between audit and research is in the aim of the study. A clinical research study aims to determine what practice is best, whereas an audit checks to see that best practice is being followed.”

A few more quotes from the end of the chapter:

“In clinical medicine and in medical research it is fairly common to categorize a biological measure into two groups, either to aid diagnosis or to classify an outcome. […] It is often useful to categorize a measurement in this way to guide decision-making, and/or to summarize the data but doing this leads to a loss of information which in turn has statistical consequences. […] If a continuous variable is used for analysis in a research study, a substantially smaller sample size will be needed than if the same variable is categorized into two groups […] *Categorization of a continuous variable into two groups loses much data and should be avoided whenever possible *Categorization of a continuous variable into several groups is less problematic”
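The information loss from dichotomization is easy to demonstrate by simulation (all data simulated): median-splitting a continuous predictor attenuates its correlation with the outcome, which is exactly why a larger sample is then needed to detect the same association.

```python
import numpy as np

# Simulated data: an outcome linearly related to a continuous predictor.
rng = np.random.default_rng(42)
n = 10_000
x = rng.normal(size=n)
outcome = 0.5 * x + rng.normal(size=n)

# The same predictor, dichotomized at the median ("high" vs "low"):
x_binary = (x > np.median(x)).astype(float)

r_cont = np.corrcoef(x, outcome)[0, 1]
r_bin = np.corrcoef(x_binary, outcome)[0, 1]
print(r_cont, r_bin)   # the dichotomized predictor correlates more weakly
assert abs(r_bin) < abs(r_cont)
```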

“Research studies require certain specific data which must be collected to fulfil the aims of the study, such as the primary and secondary outcomes and main factors related to them. Beyond these data there are often other data that could be collected and it is important to weigh the costs and consequences of not collecting data that will be needed later against the disadvantages of collecting too much data. […] collecting too much data is likely to add to the time and cost to data collection and processing, and may threaten the completeness and/or quality of all of the data so that key data items are threatened. For example if a questionnaire is overly long, respondents may leave some questions out or may refuse to fill it out at all.”

“Stratified samples are used when fixed numbers are needed from particular sections or strata of the population in order to achieve balance across certain important factors. For example a study designed to estimate the prevalence of diabetes in different ethnic groups may choose a random sample with equal numbers of subjects in each ethnic group to provide a set of estimates with equal precision for each group. If a simple random sample is used rather than a stratified sample, then estimates for minority ethnic groups may be based on small numbers and have poor precision. […] Cluster samples may be chosen where individuals fall naturally into groups or clusters. For example, patients on a hospital ward or patients in a GP practice. If a sample is needed of these patients, it may be easier to list the clusters and then to choose a random sample of clusters, rather than to choose a random sample of the whole population. […] Cluster sampling is less efficient statistically than simple random sampling […] the ICC summarizes the extent of the ‘clustering effect’. When individuals in the same cluster are much more alike than individuals in different clusters with respect to an outcome, then the clustering effect is greater and the impact on the required sample size is correspondingly greater. In practice there can be a substantial effect on the sample size even when the ICC is quite small. […] As well as considering how representative a sample is, it is important […] to consider the size of the sample. A sample may be unbiased and therefore representative, but too small to give reliable estimates. […] Prevalence estimates from small samples will be imprecise and therefore may be misleading. […] The greater the variability of a measure, the greater the number of subjects needed in the sample to estimate it precisely. […] the power of a study is the ability of the study to detect a difference if one exists.”
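The clustering effect the authors describe is usually quantified via the design effect, which for equal cluster sizes m is 1 + (m − 1)·ICC. A small sketch with assumed numbers shows how even a small ICC inflates the required sample size:

```python
# Design effect (variance inflation) for cluster samples of equal size m:
#   deff = 1 + (m - 1) * ICC
def design_effect(cluster_size: int, icc: float) -> float:
    return 1.0 + (cluster_size - 1) * icc

n_srs = 400                  # n needed under simple random sampling (assumed)
for icc in (0.01, 0.05):
    deff = design_effect(30, icc)          # clusters of 30 patients
    print(icc, deff, round(n_srs * deff))  # inflated required sample size
```

With clusters of 30 patients, an ICC of just 0.01 already inflates the required sample by about 29 per cent, and an ICC of 0.05 more than doubles it.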

April 9, 2018 Posted by | Books, Epidemiology, Medicine, Statistics | Leave a comment


Most of the words below are words which I encountered while reading the books The Fortune of War, The Surgeon’s Mate, In Your Dreams, and Who’s Afraid of Beowulf.

Pervenche. Intromit. Subfusc. Inspissated. Supple. Ukase. Commensal. Croft. Scantling. Compendious. Nympholept. Forfantery (an unusual – but very useful – link, for an unusual word). Trunnion. Hominy. Slubberdegullion. Lickerish. Brail. Grapnel. Swingle. Altumal.

Éclaircissement. Costiveness. Vang. Heady. Mort. Cingulum. Swingeing. Avifauna. Carminative. Accoucheur. Peccavi. Grommet. Woolding. Scow. Gibbous. Tierce. Burgoo. Tye. Inclement. Lobscouse.

Irrefragable. Gurnard. Bilaterian. Malmsey. Corbel. Jakes. Bonnet. Doddle. Rock dash. Purlin. Pillock. Graunch. Chirrup. Skive. Pelmet. Feckless. Pedalo. Howe. Tannin. Garnet.

Delate. Derisory. Saveloy. Flan. Quillon. Corvid. Hierophant. Thane. Laconic. Chthonic. Cowrie. Repique. Broch. Cheep. Carborundum. Shieling. Bothy. Meronymy. Meronomy. Mereology.


April 5, 2018 Posted by | Books, Language | Leave a comment


I actually think this was a really nice book, considering the format – I gave it four stars on goodreads. One of the things I noticed people didn’t like about it in the reviews is that it ‘jumps’ a bit in terms of topic coverage; it covers a wide variety of applications and analytical settings. I mostly don’t consider this a weakness of the book – even if occasionally it does get a bit excessive – and I can definitely understand the authors’ choice of approach; it’s sort of hard to illustrate the potential of the analytical techniques described in this book if you’re not allowed to talk about all the areas in which they have been – or could be gainfully – applied. A related point is that many people who read the book might be familiar with the application of these tools in specific contexts but have perhaps not thought about the fact that similar methods are applied in many other areas (and all of them might be a bit annoyed that the authors don’t talk more about computer science applications, or foodweb analyses, or infectious disease applications, or perhaps sociometry…). Most of the book is about graph-theory-related stuff, but a very decent amount of the coverage deals with applications, in a broad sense of the word at least, not theory. The discussion of theoretical constructs in the book always felt to me to be driven largely by their usefulness in specific contexts.

I have covered related topics before here on the blog, also quite recently – e.g. there’s at least some overlap between this book and Holland’s book about complexity theory in the same series (I incidentally think these books probably go well together). As I found the book slightly difficult to blog, I decided against covering it in as much detail as I sometimes do when covering these texts – this means that I have left out the links I usually include in posts like these.

Below some quotes from the book.

“The network approach focuses all the attention on the global structure of the interactions within a system. The detailed properties of each element on its own are simply ignored. Consequently, systems as different as a computer network, an ecosystem, or a social group are all described by the same tool: a graph, that is, a bare architecture of nodes bounded by connections. […] Representing widely different systems with the same tool can only be done by a high level of abstraction. What is lost in the specific description of the details is gained in the form of universality – that is, thinking about very different systems as if they were different realizations of the same theoretical structure. […] This line of reasoning provides many insights. […] The network approach also sheds light on another important feature: the fact that certain systems that grow without external control are still capable of spontaneously developing an internal order. […] Network models are able to describe in a clear and natural way how self-organization arises in many systems. […] In the study of complex, emergent, and self-organized systems (the modern science of complexity), networks are becoming increasingly important as a universal mathematical framework, especially when massive amounts of data are involved. […] networks are crucial instruments to sort out and organize these data, connecting individuals, products, news, etc. to each other. […] While the network approach eliminates many of the individual features of the phenomenon considered, it still maintains some of its specific features. Namely, it does not alter the size of the system — i.e. the number of its elements — or the pattern of interaction — i.e. the specific set of connections between elements. Such a simplified model is nevertheless enough to capture the properties of the system. 
[…] The network approach [lies] somewhere between the description by individual elements and the description by big groups, bridging the two of them. In a certain sense, networks try to explain how a set of isolated elements are transformed, through a pattern of interactions, into groups and communities.”

“[T]he random graph model is very important because it quantifies the properties of a totally random network. Random graphs can be used as a benchmark, or null case, for any real network. This means that a random graph can be used in comparison to a real-world network, to understand how much chance has shaped the latter, and to what extent other criteria have played a role. The simplest recipe for building a random graph is the following. We take all the possible pairs of vertices. For each pair, we toss a coin: if the result is heads, we draw a link; otherwise we pass to the next pair, until all the pairs are finished (this means drawing the link with a probability p = ½, but we may use whatever value of p). […] Nowadays [the random graph model] is a benchmark of comparison for all networks, since any deviation from this model suggests the presence of some kind of structure, order, regularity, and non-randomness in many real-world networks.”
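The coin-toss recipe in the quote is simple enough to sketch in a few lines of code. Below a toy Python version (mine, not the book’s – the function name and defaults are just for illustration):

```python
import random

def random_graph(n, p, seed=None):
    """Erdős–Rényi recipe: for every pair of vertices, 'toss a coin'
    that comes up heads with probability p, and draw the edge on heads."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges.add((i, j))
    return edges

# With p = 1/2, roughly half of all n*(n-1)/2 possible pairs get an edge.
g = random_graph(100, 0.5, seed=1)
print(len(g), "edges out of", 100 * 99 // 2, "possible")
```

Any value of p works, exactly as the quote says; p = ½ just makes the coin fair.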

“…in networks, topology is more important than metrics. […] In the network representation, the connections between the elements of a system are much more important than their specific positions in space and their relative distances. The focus on topology is one of the biggest strengths of the network approach, useful whenever topology is more relevant than metrics. […] In social networks, the relevance of topology means that social structure matters. […] Sociology has classified a broad range of possible links between individuals […]. The tendency to have several kinds of relationships in social networks is called multiplexity. But this phenomenon appears in many other networks: for example, two species can be connected by different strategies of predation, two computers by different cables or wireless connections, etc. We can modify a basic graph to take into account this multiplexity, e.g. by attaching specific tags to edges. […] Graph theory [also] allows us to encode in edges more complicated relationships, as when connections are not reciprocal. […] If a direction is attached to the edges, the resulting structure is a directed graph […] In these networks we have both in-degree and out-degree, measuring the number of inbound and outbound links of a node, respectively. […] in most cases, relations display a broad variation or intensity [i.e. they are not binary/dichotomous]. […] Weighted networks may arise, for example, as a result of different frequencies of interactions between individuals or entities.”

“An organism is […] the outcome of several layered networks and not only the deterministic result of the simple sequence of genes. Genomics has been joined by epigenomics, transcriptomics, proteomics, metabolomics, etc., the disciplines that study these layers, in what is commonly called the omics revolution. Networks are at the heart of this revolution. […] The brain is full of networks where various web-like structures provide the integration between specialized areas. In the cerebellum, neurons form modules that are repeated again and again: the interaction between modules is restricted to neighbours, similarly to what happens in a lattice. In other areas of the brain, we find random connections, with a more or less equal probability of connecting local, intermediate, or distant neurons. Finally, the neocortex — the region involved in many of the higher functions of mammals — combines local structures with more random, long-range connections. […] typically, food chains are not isolated, but interwoven in intricate patterns, where a species belongs to several chains at the same time. For example, a specialized species may predate on only one prey […]. If the prey becomes extinct, the population of the specialized species collapses, giving rise to a set of co-extinctions. An even more complicated case is where an omnivore species predates a certain herbivore, and both eat a certain plant. A decrease in the omnivore’s population does not imply that the plant thrives, because the herbivore would benefit from the decrease and consume even more plants. As more species are taken into account, the population dynamics can become more and more complicated. This is why a more appropriate description than ‘foodchains’ for ecosystems is the term foodwebs […]. These are networks in which nodes are species and links represent relations of predation. Links are usually directed (big fishes eat smaller ones, not the other way round). 
These networks provide the interchange of food, energy, and matter between species, and thus constitute the circulatory system of the biosphere.”

“In the cell, some groups of chemicals interact only with each other and with nothing else. In ecosystems, certain groups of species establish small foodwebs, without any connection to external species. In social systems, certain human groups may be totally separated from others. However, such disconnected groups, or components, are a strikingly small minority. In all networks, almost all the elements of the systems take part in one large connected structure, called a giant connected component. […] In general, the giant connected component includes not less than 90 to 95 per cent of the system in almost all networks. […] In a directed network, the existence of a path from one node to another does not guarantee that the journey can be made in the opposite direction. Wolves eat sheep, and sheep eat grass, but grass does not eat sheep, nor do sheep eat wolves. This restriction creates a complicated architecture within the giant connected component […] according to an estimate made in 1999, more than 90 per cent of the WWW is composed of pages connected to each other, if the direction of edges is ignored. However, if we take direction into account, the proportion of nodes mutually reachable is only 24 per cent, the giant strongly connected component. […] most networks are sparse, i.e. they tend to be quite frugal in connections. Take, for example, the airport network: the personal experience of every frequent traveller shows that direct flights are not that common, and intermediate stops are necessary to reach several destinations; thousands of airports are active, but each city is connected to less than 20 other cities, on average. The same happens in most networks. A measure of this is given by the mean number of connections of their nodes, that is, their average degree.”
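The ‘sparse but almost fully connected’ point is easy to check on a toy example. Below a small Python sketch (my own illustration, not from the book) that builds a sparse random graph with an average degree of about 6 and measures the share of nodes sitting in the largest connected component:

```python
import random
from collections import deque

def giant_component_fraction(n, edges):
    """Fraction of the n nodes in the largest connected component (BFS)."""
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, best = set(), 0
    for start in range(n):
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best / n

# A sparse random graph: 1,000 nodes, average degree ~6
rng = random.Random(42)
n, k = 1000, 6
edges = set()
while len(edges) < n * k // 2:
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b:
        edges.add((min(a, b), max(a, b)))
print(giant_component_fraction(n, edges))  # well above 0.9
```

Even though each node touches only a handful of the 999 others, almost every node ends up in one giant component – the 90-to-95-per-cent figure from the quote.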

“[A] puzzling contradiction — a sparse network can still be very well connected — […] attracted the attention of the Hungarian mathematicians […] Paul Erdős and Alfréd Rényi. They tackled it by producing different realizations of their random graph. In each of them, they changed the density of edges. They started with a very low density: less than one edge per node. It is natural to expect that, as the density increases, more and more nodes will be connected to each other. But what Erdős and Rényi found instead was a quite abrupt transition: several disconnected components coalesced suddenly into a large one, encompassing almost all the nodes. The sudden change happened at one specific critical density: when the average number of links per node (i.e. the average degree) was greater than one, then the giant connected component suddenly appeared. This result implies that networks display a very special kind of economy, intrinsic to their disordered structure: a small number of edges, even randomly distributed between nodes, is enough to generate a large structure that absorbs almost all the elements. […] Social systems seem to be very tightly connected: in a large enough group of strangers, it is not unlikely to find pairs of people with quite short chains of relations connecting them. […] The small-world property consists of the fact that the average distance between any two nodes (measured as the shortest path that connects them) is very small. Given a node in a network […], few nodes are very close to it […] and few are far from it […]: the majority are at the average — and very short — distance. This holds for all networks: starting from one specific node, almost all the nodes are at very few steps from it; the number of nodes within a certain distance increases exponentially fast with the distance. 
Another way of explaining the same phenomenon […] is the following: even if we add many nodes to a network, the average distance will not increase much; one has to increase the size of a network by several orders of magnitude to notice that the paths to new nodes are (just a little) longer. The small-world property is crucial to many network phenomena. […] The small-world property is something intrinsic to networks. Even the completely random Erdős–Rényi graphs show this feature. By contrast, regular grids do not display it. If the Internet was a chessboard-like lattice, the average distance between two routers would be of the order of 1,000 jumps, and the Net would be much slower [the authors note elsewhere that “The Internet is composed of hundreds of thousands of routers, but just about ten ‘jumps’ are enough to bring an information packet from one of them to any other.”] […] The key ingredient that transforms a structure of connections into a small world is the presence of a little disorder. No real network is an ordered array of elements. On the contrary, there are always connections ‘out of place’. It is precisely thanks to these connections that networks are small worlds. […] Shortcuts are responsible for the small-world property in many […] situations.”

“Body size, IQ, road speed, and other magnitudes have a characteristic scale: that is, an average value that in the large majority of cases is a rough predictor of the actual value that one will find. […] While height is a homogeneous magnitude, the number of social connection[s] is a heterogeneous one. […] A system with this feature is said to be scale-free or scale-invariant, in the sense that it does not have a characteristic scale. This can be rephrased by saying that the individual fluctuations with respect to the average are too large for us to make a correct prediction. […] In general, a network with heterogeneous connectivity has a set of clear hubs. When a graph is small, it is easy to find whether its connectivity is homogeneous or heterogeneous […]. In the first case, all the nodes have more or less the same connectivity, while in the latter it is easy to spot a few hubs. But when the network to be studied is very big […] things are not so easy. […] the distribution of the connectivity of the nodes of the […] network […] is the degree distribution of the graph. […] In homogeneous networks, the degree distribution is a bell curve […] while in heterogeneous networks, it is a power law […]. The power law implies that there are many more hubs (and much more connected) in heterogeneous networks than in homogeneous ones. Moreover, hubs are not isolated exceptions: there is a full hierarchy of nodes, each of them being a hub compared with the less connected ones.”

“Looking at the degree distribution is the best way to check if a network is heterogeneous or not: if the distribution is fat tailed, then the network will have hubs and heterogeneity. A mathematically perfect power law is never found, because this would imply the existence of hubs with an infinite number of connections. […] Nonetheless, a strongly skewed, fat-tailed distribution is a clear signal of heterogeneity, even if it is never a perfect power law. […] While the small-world property is something intrinsic to networked structures, hubs are not present in all kinds of networks. For example, power grids usually have very few of them. […] hubs are not present in random networks. A consequence of this is that, while random networks are small worlds, heterogeneous ones are ultra-small worlds. That is, the distance between their vertices is relatively smaller than in their random counterparts. […] Heterogeneity is not equivalent to randomness. On the contrary, it can be the signature of a hidden order, not imposed by a top-down project, but generated by the elements of the system. The presence of this feature in widely different networks suggests that some common underlying mechanism may be at work in many of them. […] the Barabási–Albert model gives an important take-home message. A simple, local behaviour, iterated through many interactions, can give rise to complex structures. This arises without any overall blueprint”.
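The ‘simple, local behaviour’ of the Barabási–Albert model – new nodes preferring to attach to already well-connected nodes – fits in a short sketch. Below a toy version (my own simplification, not the authors’ code); the trick of drawing targets from a list holding each node once per degree unit is what makes the choice ‘proportional to degree’:

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=0):
    """Toy preferential attachment: each new node adds m edges to
    existing nodes picked with probability proportional to degree."""
    rng = random.Random(seed)
    edges, endpoint_pool = [], []
    for v in range(m, n):
        targets = set()
        while len(targets) < m:
            if endpoint_pool:
                targets.add(rng.choice(endpoint_pool))  # degree-biased pick
            else:
                targets.add(rng.randrange(m))           # seed core of m nodes
        for t in targets:
            edges.append((v, t))
            endpoint_pool += [v, t]
    return edges

edges = barabasi_albert(2000, 2)
deg = Counter()
for a, b in edges:
    deg[a] += 1
    deg[b] += 1
avg = sum(deg.values()) / len(deg)
print(round(avg, 1), max(deg.values()))  # average ~4, hubs far above it
```

The degree distribution that comes out is strongly skewed: most nodes sit near the average, while a few early nodes accumulate connections far beyond it – the hubs the quote describes.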

“Homogamy, the tendency of like to marry like, is very strong […] Homogamy is a specific instance of homophily: this consists of a general trend of like to link to like, and is a powerful force in shaping social networks […] assortative mixing [is] a special form of homophily, in which nodes tend to connect with others that are similar to them in the number of connections. By contrast [when] high- and low-degree nodes are more connected to each other [it] is called disassortative mixing. Both cases display a form of correlation in the degrees of neighbouring nodes. When the degrees of neighbours are positively correlated, then the mixing is assortative; when negatively, it is disassortative. […] In random graphs, the neighbours of a given node are chosen completely at random: as a result, there is no clear correlation between the degrees of neighbouring nodes […]. On the contrary, correlations are present in most real-world networks. Although there is no general rule, most natural and technological networks tend to be disassortative, while social networks tend to be assortative. […] Degree assortativity and disassortativity are just an example of the broad range of possible correlations that bias how nodes tie to each other.”
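The degree correlation mentioned in the quote can be computed directly: take every edge, look at the degrees at its two ends, and measure the Pearson correlation. Below a compact sketch (mine; this is one standard way of computing the coefficient, usually credited to Newman):

```python
from collections import Counter

def degree_assortativity(edges):
    """Pearson correlation between the degrees at the two ends of
    each edge, with each edge counted in both directions."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    xs, ys = [], []
    for a, b in edges:
        xs += [deg[a], deg[b]]
        ys += [deg[b], deg[a]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# A star (one hub tied only to leaves) is maximally disassortative:
# every edge joins the highest-degree node to the lowest-degree ones.
star = [(0, i) for i in range(1, 6)]
print(degree_assortativity(star))  # -1.0
```

Positive values mean assortative mixing (typical of social networks, per the quote), negative values disassortative mixing (typical of natural and technological ones).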

“[N]etworks (neither ordered lattices nor random graphs), can have both large clustering and small average distance at the same time. […] in almost all networks, the clustering of a node depends on the degree of that node. Often, the larger the degree, the smaller the clustering coefficient. Small-degree nodes tend to belong to well-interconnected local communities. Similarly, hubs connect with many nodes that are not directly interconnected. […] Central nodes usually act as bridges or bottlenecks […]. For this reason, centrality is an estimate of the load handled by a node of a network, assuming that most of the traffic passes through the shortest paths (this is not always the case, but it is a good approximation). For the same reason, damaging central nodes […] can impair radically the flow of a network. Depending on the process one wants to study, other definitions of centrality can be introduced. For example, closeness centrality computes the distance of a node to all others, and reach centrality factors in the portion of all nodes that can be reached in one step, two steps, three steps, and so on.”
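Closeness centrality, mentioned at the end of the quote, is straightforward to compute with breadth-first search. Below a minimal sketch (my own illustration; the graph and node names are made up):

```python
from collections import deque

def closeness_centrality(adj, node):
    """1 / (average shortest-path distance from `node` to every
    reachable other node), computed via breadth-first search."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    others = [d for v, d in dist.items() if v != node]
    return len(others) / sum(others) if others else 0.0

# A path a-b-c-d-e: the middle node is closest to everyone on average
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "e"], "e": ["d"]}
print(closeness_centrality(path, "c"))  # 4/6 ≈ 0.667
print(closeness_centrality(path, "a"))  # 4/10 = 0.4
```

The middle of the path scores highest, matching the intuition that central nodes act as bridges for traffic along shortest paths.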

“Domino effects are not uncommon in foodwebs. Networks in general provide the backdrop for large-scale, sudden, and surprising dynamics. […] most of the real-world networks show a double-edged kind of robustness. They are able to function normally even when a large fraction of the network is damaged, but suddenly certain small failures, or targeted attacks, bring them down completely. […] networks are very different from engineered systems. In an airplane, damaging one element is enough to stop the whole machine. In order to make it more resilient, we have to use strategies such as duplicating certain pieces of the plane: this makes it almost 100 per cent safe. In contrast, networks, which are mostly not blueprinted, display a natural resilience to a broad range of errors, but when certain elements fail, they collapse. […] A random graph of the size of most real-world networks is destroyed after the removal of half of the nodes. On the other hand, when the same procedure is performed on a heterogeneous network (either a map of a real network or a scale-free model of a similar size), the giant connected component resists even after removing more than 80 per cent of the nodes, and the distance within it is practically the same as at the beginning. The scene is different when researchers simulate a targeted attack […] In this situation the collapse happens much faster […]. However, now the most vulnerable is the second: while in the homogeneous network it is necessary to remove about one-fifth of its more connected nodes to destroy it, in the heterogeneous one this happens after removing the first few hubs. Highly connected nodes seem to play a crucial role, in both errors and attacks. […] hubs are mainly responsible for the overall cohesion of the graph, and removing a few of them is enough to destroy it.”
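The error-versus-attack experiment the quote describes can be run in miniature. Below a toy simulation (mine, not the researchers’ actual setup; the sizes and removal fractions are arbitrary choices): build a hub-rich graph by preferential attachment, then compare what random failures and a targeted attack on the hubs do to the giant component.

```python
import random
from collections import Counter, deque

def giant_fraction(nodes, edges):
    """Share of the surviving `nodes` in the largest connected component."""
    adj = {v: [] for v in nodes}
    for a, b in edges:
        if a in adj and b in adj:   # keep only edges between survivors
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        queue, size = deque([s]), 0
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best / len(nodes)

# A heterogeneous (hub-rich) toy graph via preferential attachment
rng = random.Random(7)
n, edges, pool = 1000, [], []
for v in range(2, n):
    targets = set()
    while len(targets) < 2:
        targets.add(rng.choice(pool) if pool else rng.randrange(2))
    for t in targets:
        edges.append((v, t))
        pool += [v, t]

deg = Counter()
for a, b in edges:
    deg[a] += 1
    deg[b] += 1

everyone = set(range(n))
random_failure = everyone - set(rng.sample(range(n), 200))        # 20% at random
targeted_attack = everyone - {v for v, _ in deg.most_common(50)}  # top 5% hubs

print(round(giant_fraction(random_failure, edges), 2))   # barely dented
print(round(giant_fraction(targeted_attack, edges), 2))  # fragments far more
```

Removing a fifth of the nodes at random barely dents the giant component, while removing a much smaller number of hubs fragments it – the double-edged robustness from the quote.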

“Studies of errors and attacks have shown that hubs keep different parts of a network connected. This implies that they also act as bridges for spreading diseases. Their numerous ties put them in contact with both infected and healthy individuals: so hubs become easily infected, and they infect other nodes easily. […] The vulnerability of heterogeneous networks to epidemics is bad news, but understanding it can provide good ideas for containing diseases. […] if we can immunize just a fraction, it is not a good idea to choose people at random. Most of the times, choosing at random implies selecting individuals with a relatively low number of connections. Even if they block the disease from spreading in their surroundings, hubs will always be there to put it back into circulation. A much better strategy would be to target hubs. Immunizing hubs is like deleting them from the network, and the studies on targeted attacks show that eliminating a small fraction of hubs fragments the network: thus, the disease will be confined to a few isolated components. […] in the epidemic spread of sexually transmitted diseases the timing of the links is crucial. Establishing an unprotected link with a person before they establish an unprotected link with another person who is infected is not the same as doing so afterwards.”

April 3, 2018 Posted by | Biology, Books, Ecology, Engineering, Epidemiology, Genetics, Mathematics, Statistics | Leave a comment

Promoting the unknown…


March 31, 2018 Posted by | Music | Leave a comment

The Internet of Things


Some links to stuff he talks about in the lecture:

The Internet of Things: making the most of the Second Digital Revolution – A report by the UK Government Chief Scientific Adviser.
South–North Water Transfer Project.
FDA approves first smart pill that tracks drug regimen compliance from the inside.
The Internet of Things (IoT)* units installed base by category from 2014 to 2020.
Share of the IoT market by sub-sector worldwide in 2017.
San Diego to Cover Half the City with Intelligent Streetlights.
IPv4 and IPv6 (specifically, he talks a little about the IPv4 address space problem).
General Data Protection Regulation (GDPR).
Shodan (website).
Mirai botnet.
Gait analysis.
Website reveals 73,000 unprotected security cameras with default passwords. (This was just an example link – it’s unclear if the site he used to illustrate his point in that part of the lecture was actually Insecam, but he does talk about the widespread use of default passwords and related security implications during the lecture).
Strava’s fitness heatmaps are a ‘potential catastrophe’.
‘Secure by Design’ (a very recently published proposed UK IoT code of practice).

March 26, 2018 Posted by | Computer science, Engineering, Lectures | Leave a comment


i. “La vérité ne se possède pas, elle se cherche.” (‘You cannot possess the truth, you can only search for it.’ – Albert Jacquard)

ii. “Some physicist might believe that ultimately, we will be able to explain everything. To me, that is utterly stupid […] It seems to me that, if you accept evolution, you can still not expect your dog to get up and start talking German. And that’s because your dog is not genetically programmed to do that. We are human animals, and we are equally bound. There are whole realms of discourse out there that we cannot reach, by definition. There are always going to be limits beyond which we cannot go. Knowing that they are there, you can always hope to move a little closer – but that’s all.” (James M. Buchanan)

iii. “Physics is a wrong tool to describe living systems.” (Donald A. Glaser)

iv. “In the seventeenth century Cartesians refused to accept Newton’s attraction because they could not accept a force that was not transmitted by a medium. Even now many physicists have not yet learned that they should adjust their ideas to the observed reality rather than the other way round.” (Nico van Kampen)

v. “…the human brain is itself a part of nature, fanned into existence by billions of years of sunshine acting on the molecules of the Earth. It is not perfectible in the immediate future, even if biologists should wish to alter the brain […]. What men make of the universe at large is a product of what they can see of it and of their own human nature.” (Nigel Calder)

vi. “If you torture the data enough, nature will always confess.” (Ronald Coase)

vii. “If economists wished to study the horse, they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, “what would I do if I were a horse?”” (-ll-)

viii. “Nothing is as simple as we hope it will be.” (Jim Horning)

ix. “There’s an old saying in politics: anyone dumb enough to run for the job probably is too stupid to have it.” (Ralph Klein)

x. “I never felt the need to do what everyone else did. And I wasn’t troubled by the fact that other people were doing other things.” (Saul Leiter)

xi. “Think wrongly, if you please, but in all cases think for yourself.” (Doris Lessing)

xii. “All political movements are like this — we are in the right, everyone else is in the wrong. The people on our own side who disagree with us are heretics, and they start becoming enemies. With it comes an absolute conviction of your own moral superiority.” (-ll-)

xiii. “An ideological movement is a collection of people many of whom could hardly bake a cake, fix a car, sustain a friendship or a marriage, or even do a quadratic equation, yet they believe they know how to rule the world.” (Kenneth Minogue)

xiv. “The natural order of organisms is a divergent inclusive hierarchy and that hierarchy is recognized by taxic homology.” (Alec Panchen)

xv. “Don Kayman was too good a scientist to confuse his hopes with observations. He would report what he found. But he knew what he wanted to find.” (Frederik Pohl)

xvi. “A barbarian is not aware that he is a barbarian.” (Jack Vance)

xvii. “I do not care to listen; obloquy injures my self-esteem and I am skeptical of praise.” (-ll-)

xviii. “People can be deceived by appeals intended to destroy democracy in the name of democracy.” (Robert A. Dahl)

xix. “If we gather more and more data and establish more and more associations, […] we will not finally find that we know something. We will simply end up having more and more data and larger sets of correlations.” (-ll-)

xx. “Thoughts convince thinkers; for this reason, thoughts convince seldom.” (Karlheinz Deschner)

March 24, 2018 Posted by | Quotes/aphorisms | Leave a comment

The Computer

Below some quotes and links related to the book’s coverage:

“At the heart of every computer is one or more hardware units known as processors. A processor controls what the computer does. For example, it will process what you type in on your computer’s keyboard, display results on its screen, fetch web pages from the Internet, and carry out calculations such as adding two numbers together. It does this by ‘executing’ a computer program that details what the computer should do […] Data and programs are stored in two storage areas. The first is known as main memory and has the property that whatever is stored there can be retrieved very quickly. Main memory is used for transient data – for example, the result of a calculation which is an intermediate result in a much bigger calculation – and is also used to store computer programs while they are being executed. Data in main memory is transient – it will disappear when the computer is switched off. Hard disk memory, also known as file storage or backing storage, contains data that are required over a period of time. Typical entities that are stored in this memory include files of numerical data, word-processed documents, and spreadsheet tables. Computer programs are also stored here while they are not being executed. […] There are a number of differences between main memory and hard disk memory. The first is the retrieval time. With main memory, an item of data can be retrieved by the processor in fractions of microseconds. With file-based memory, the retrieval time is much greater: of the order of milliseconds. The reason for this is that main memory is silicon-based […] hard disk memory is usually mechanical and is stored on the metallic surface of a disk, with a mechanical arm retrieving the data. […] main memory is more expensive than file-based memory”.

“The Internet is a network of computers – strictly, it is a network that joins up a number of networks. It carries out a number of functions. First, it transfers data from one computer to another computer […] The second function of the Internet is to enforce reliability. That is, to ensure that when errors occur then some form of recovery process happens; for example, if an intermediate computer fails then the software of the Internet will discover this and resend any malfunctioning data via other computers. A major component of the Internet is the World Wide Web […] The web […] uses the data-transmission facilities of the Internet in a specific way: to store and distribute web pages. The web consists of a number of computers known as web servers and a very large number of computers known as clients (your home PC is a client). Web servers are usually computers that are more powerful than the PCs that are normally found in homes or those used as office computers. They will be maintained by some enterprise and will contain individual web pages relevant to that enterprise; for example, an online book store such as Amazon will maintain web pages for each item it sells. The program that allows users to access the web is known as a browser. […] A part of the Internet known as the Domain Name System (usually referred to as DNS) will figure out where the page is held and route the request to the web server holding the page. The web server will then send the page back to your browser which will then display it on your computer. Whenever you want another page you would normally click on a link displayed on that page and the process is repeated. Conceptually, what happens is simple. However, it hides a huge amount of detail involving the web discovering where pages are stored, the pages being located, their being sent, the browser reading the pages and interpreting how they should be displayed, and eventually the browser displaying the pages.
[…] without one particular hardware advance the Internet would be a shadow of itself: this is broadband. This technology has provided communication speeds that we could not have dreamed of 15 years ago. […] Typical broadband speeds range from one megabit per second to 24 megabits per second, the lower rate being about 20 times faster than dial-up rates.”

“A major idea I hope to convey […] is that regarding the computer as just the box that sits on your desk, or as a chunk of silicon that is embedded within some device such as a microwave, is only a partial view. The Internet – or rather broadband access to the Internet – has created a gigantic computer that has unlimited access to both computer power and storage to the point where even applications that we all thought would never migrate from the personal computer are doing just that. […] the Internet functions as a series of computers – or more accurately computer processors – carrying out some task […]. Conceptually, there is little difference between these computers and [a] supercomputer, the only difference is in the details: for a supercomputer the communication between processors is via some internal electronic circuit, while for a collection of computers working together on the Internet the communication is via external circuits used for that network.”

“A computer will consist of a number of electronic circuits. The most important is the processor: this carries out the instructions that are contained in a computer program. […] There are a number of individual circuit elements that make up the computer. Thousands of these elements are combined together to construct the computer processor and other circuits. One basic element is known as an And gate […]. This is an electrical circuit that has two binary inputs A and B and a single binary output X. The output will be one if both the inputs are one and zero otherwise. […] the And gate is only one example – when some action is required, for example adding two numbers together, [the different circuits] interact with each other to carry out that action. In the case of addition, the two binary numbers are processed bit by bit to carry out the addition. […] Whatever actions are taken by a program […] the cycle is the same; an instruction is read into the processor, the processor decodes the instruction, acts on it, and then brings in the next instruction. So, at the heart of a computer is a series of circuits and storage elements that fetch and execute instructions and store data and programs.”
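The addition example from the quote – two binary numbers processed bit by bit by interacting gates – can be mimicked in software. Below a toy sketch (mine, not the book’s) that builds a one-bit full adder out of And/Or/Xor gates and chains them into a ripple-carry adder:

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One bit of addition, built only from gates: (sum, carry-out)."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def add(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists (LSB first):
    each bit's carry-out feeds the next bit's carry-in."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 6 + 3 = 9, least significant bit first: 011 + 110 -> 1001
print(add([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1]
```

In a real processor these gates are electrical circuits rather than functions, but the interaction pattern – outputs of some gates feeding inputs of others – is the same.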

“In essence, a hard disk unit consists of one or more circular metallic disks which can be magnetized. Each disk has a very large number of magnetizable areas which can either represent zero or one depending on the magnetization. The disks are rotated at speed. The unit also contains an arm or a number of arms that can move laterally and which can sense the magnetic patterns on the disk. […] When a processor requires some data that is stored on a hard disk […] then it issues an instruction to find the file. The operating system – the software that controls the computer – will know where the file starts and ends and will send a message to the hard disk to read the data. The arm will move laterally until it is over the start position of the file and when the revolving disk passes under the arm the magnetic pattern that represents the data held in the file is read by it. Accessing data on a hard disk is a mechanical process and usually takes a small number of milliseconds to carry out. Compared with the electronic speeds of the computer itself – normally measured in fractions of a microsecond – this is incredibly slow. Because disk access is slow, systems designers try to minimize the amount of access required to files. One technique that has been particularly effective is known as caching. It is, for example, used in web servers. Such servers store pages that are sent to browsers for display. […] Caching involves placing the frequently accessed pages in some fast storage medium such as flash memory and keeping the remainder on a hard disk.”
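The caching technique mentioned at the end of that passage is simple enough to sketch in code. The snippet below is a toy illustration of my own (the page contents and the simulated disk delay are invented): frequently accessed pages end up in fast storage, so the slow mechanical access is only paid once per page.

```python
import time

def read_from_disk(page):
    # stand-in for the slow, mechanical disk access (milliseconds)
    time.sleep(0.005)
    return "<html>contents of " + page + "</html>"

cache = {}  # the fast storage medium, e.g. flash memory

def fetch_page(page):
    if page not in cache:        # cache miss: pay for the slow disk read
        cache[page] = read_from_disk(page)
    return cache[page]           # cache hit: served from fast storage
```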

“The first computers had a single hardware processor that executed individual instructions. It was not too long before researchers started thinking about computers that had more than one processor. The simple theory here was that if a computer had n processors then it would be n times faster. […] it is worth debunking this notion. If you look at many classes of problems […], you see that a strictly linear increase in performance is not achieved. If a problem that is solved by a single computer is solved in 20 minutes, then you will find a dual processor computer solving it in perhaps 11 minutes. A 3-processor computer may solve it in 9 minutes, and a 4-processor computer in 8 minutes. There is a law of diminishing returns; often, there comes a point when adding a processor slows down the computation. What happens is that each processor needs to communicate with the others, for example passing on the result of a computation; this communicational overhead becomes bigger and bigger as you add processors to the point when it dominates the amount of useful work that is done. The sort of problems where [multiple processors] are effective is where a problem can be split up into sub-problems that can be solved almost independently by each processor with little communication.”
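The diminishing returns the author describes can be reproduced with a toy cost model. The constants below are invented for illustration – the point is only the shape of the curve (speedup improves, flattens, then reverses as communication dominates), not the specific numbers:

```python
def run_time(n, work=20.0, comm_per_link=0.1):
    """Minutes to finish when the work is split across n processors."""
    compute = work / n                              # each processor's share
    communicate = comm_per_link * n * (n - 1) / 2   # every pair must talk
    return compute + communicate

# with these (invented) constants, 8 processors is the sweet spot;
# beyond that, communication overhead makes the job slower again
times = {n: run_time(n) for n in (1, 2, 4, 8, 16, 32)}
best_n = min(times, key=times.get)
```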

“Symmetric encryption methods are very efficient and can be used to scramble large files or long messages being sent from one computer to another. Unfortunately, symmetric techniques suffer from a major problem: if there are a number of individuals involved in a data transfer or in reading a file, each has to know the same key. This makes it a security nightmare. […] public key cryptography removed a major problem associated with symmetric cryptography: that of a large number of keys in existence some of which may be stored in an insecure way. However, a major problem with asymmetric cryptography is the fact that it is very inefficient (about 10,000 times slower than symmetric cryptography): while it can be used for short messages such as email texts, it is far too inefficient for sending gigabytes of data. However, […] when it is combined with symmetric cryptography, asymmetric cryptography provides very strong security. […] One very popular security scheme is known as the Secure Sockets Layer – normally shortened to SSL. It is based on the concept of a one-time pad. […] SSL uses public key cryptography to communicate the randomly generated key between the sender and receiver of a message. This key is only used once for the data interchange that occurs and, hence, is an electronic analogue of a one-time pad. When each of the parties to the interchange has received the key, they encrypt and decrypt the data employing symmetric cryptography, with the generated key carrying out these processes. […] There is an impression amongst the public that the main threats to security and to privacy arise from technological attack. However, the threat from more mundane sources is equally high.
Data thefts, damage to software and hardware, and unauthorized access to computer systems can occur in a variety of non-technical ways: by someone finding computer printouts in a waste bin; by a window cleaner using a mobile phone camera to take a picture of a display containing sensitive information; by an office cleaner stealing documents from a desk; by a visitor to a company noting down a password written on a white board; by a disgruntled employee putting a hammer through the main server and the backup server of a company; or by someone dropping an unencrypted memory stick in the street.”
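The hybrid SSL-style scheme described above can be sketched as follows. This is a deliberately toy illustration of my own: the “symmetric cipher” here is a plain XOR against a single-use random key (the one-time-pad analogue the book mentions), and the public-key step that would actually transport the session key is only indicated in a comment, not implemented.

```python
import secrets

def xor_bytes(data, key):
    # toy symmetric cipher: XOR the data against an equally long keystream;
    # with a truly random, never-reused key this is a one-time pad
    return bytes(b ^ k for b, k in zip(data, key))

message = b"imagine gigabytes of data here"

# 1. generate a fresh, random session key for this one data interchange
session_key = secrets.token_bytes(len(message))

# 2. real SSL/TLS would now send session_key to the receiver encrypted
#    under the receiver's *public* key; that step is omitted in this toy

# 3. the bulk data is scrambled with the fast symmetric cipher ...
ciphertext = xor_bytes(message, session_key)

# 4. ... and the receiver, holding the same session key, unscrambles it
recovered = xor_bytes(ciphertext, session_key)
```

The design point the book is making survives even in this toy: the slow asymmetric machinery is only ever asked to move a short key, while the bulk data goes through the fast symmetric path.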

“The basic architecture of the computer has remained unchanged for six decades since IBM developed the first mainframe computers. It consists of a processor that reads software instructions one by one and executes them. Each instruction will result in data being processed, for example by being added together; and data being stored in the main memory of the computer or being stored on some file-storage medium; or being sent to the Internet or to another computer. This is what is known as the von Neumann architecture; it was named after John von Neumann […]. His key idea, which still holds sway today, is that in a computer the data and the program are both stored in the computer’s memory in the same address space. There have been few challenges to the von Neumann architecture.”
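The stored-program idea – data and instructions sharing one address space – can be illustrated with a miniature machine simulator. This is my own sketch, not anything from the book, and the instruction set is invented; note that the program (tuples) and its data (plain numbers) sit in the very same memory list:

```python
# one memory holds *both* the program (tuples) and its data (plain numbers)
memory = [
    ("LOAD", 8),     # 0: acc <- mem[8]
    ("ADD", 9),      # 1: acc <- acc + mem[9]
    ("STORE", 10),   # 2: mem[10] <- acc
    ("HALT", None),  # 3: stop
    None, None, None, None,   # 4-7: unused
    2,               # 8: first operand
    3,               # 9: second operand
    0,               # 10: where the result will be stored
]

def run(memory):
    pc, acc = 0, 0                 # program counter and accumulator
    while True:
        op, addr = memory[pc]      # fetch the next instruction ...
        pc += 1
        if op == "LOAD":           # ... decode it, and act on it
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            return memory

run(memory)   # afterwards the sum 2 + 3 sits in memory[10]
```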

“[A] ‘neural network’ […] consists of an input layer that can sense various signals from some environment […]. In the middle (hidden) layer, there are a large number of processing elements (neurones) which are arranged into sub-layers. Finally, there is an output layer which provides a result […]. It is in the middle layer that the work is done in a neural computer. What happens is that the network is trained by giving it examples of the trend or item that is to be recognized. What the training does is to strengthen or weaken the connections between the processing elements in the middle layer until, when combined, they produce a strong signal when a new case is presented to them that matches the previously trained examples and a weak signal when an item that does not match the examples is encountered. Neural networks have been implemented in hardware, but most of the implementations have been via software where the middle layer has been implemented in chunks of code that carry out the learning process. […] although the initial impetus was to use ideas in neurobiology to develop neural architectures based on a consideration of processes in the brain, there is little resemblance between the internal data and software now used in commercial implementations and the human brain.”
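The training process described – strengthening or weakening connections until trained examples produce a strong signal and non-matching items a weak one – is essentially what the classic perceptron learning rule does. A minimal single-layer sketch of my own (the patterns and learning rate are invented for illustration; real networks have many more elements and layers):

```python
examples = [            # invented toy patterns: label 1 = "matches"
    ([1, 1, 0, 0], 1),
    ([1, 0, 1, 0], 1),
    ([0, 0, 1, 1], 0),
    ([0, 1, 0, 1], 0),
]

weights = [0.0, 0.0, 0.0, 0.0]   # connection strengths, initially neutral
bias = 0.0
rate = 0.1                       # how much each error adjusts a connection

def output(pattern):
    """Strong signal (1) or weak signal (0) for a presented pattern."""
    s = bias + sum(w * x for w, x in zip(weights, pattern))
    return 1 if s > 0 else 0

# training strengthens or weakens each connection after every mistake
for _ in range(100):
    for pattern, target in examples:
        error = target - output(pattern)
        for i, x in enumerate(pattern):
            weights[i] += rate * error * x
        bias += rate * error
```

After training, all four example patterns are classified correctly – the "learning" is nothing more than those repeated small adjustments to the connection weights.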


Byte. Bit.
Moore’s law.
Computer program.
Programming language. High-level programming language. Low-level programming language.
Zombie (computer science).
Cloud computing.
Instructions per second.
Fetch-execute cycle.
Grace Hopper. Software Bug.
Transistor. Integrated circuit. Very-large-scale integration. Wafer (electronics). Photomask.
Read-only memory (ROM). Read-write memory (RWM). Bus (computing). Address bus. Programmable read-only memory (PROM). Erasable programmable read-only memory (EPROM). Electrically erasable programmable read-only memory (EEPROM). Flash memory. Dynamic random-access memory (DRAM). Static random-access memory (static RAM/SRAM).
Hard disc.
Wireless communication.
Radio-frequency identification (RFID).
NP-hardness. Set partition problem. Bin packing problem.
Cray X-MP. Beowulf cluster.
Vector processor.
Denial-of-service attack. Melissa (computer virus). Malware. Firewall (computing). Logic bomb. Fork bomb/rabbit virus. Cryptography. Caesar cipher. Social engineering (information security).
Application programming interface.
Data mining. Machine translation. Machine learning.
Functional programming.
Quantum computing.

March 19, 2018 Posted by | Books, Computer science, Cryptography, Engineering | Leave a comment

Marine Biology (II)

Below some observations and links related to the second half of the book’s coverage:

“[C]oral reefs occupy a very small proportion of the planet’s surface – about 284,000 square kilometres – roughly equivalent to the size of Italy [yet they] are home to an incredible diversity of marine organisms – about a quarter of all marine species […]. Coral reef systems provide food for hundreds of millions of people, with about 10 per cent of all fish consumed globally caught on coral reefs. […] Reef-building corals thrive best at sea temperatures above about 23°C and few exist where sea temperatures fall below 18°C for significant periods of time. Thus coral reefs are absent at tropical latitudes where upwelling of cold seawater occurs, such as the west coasts of South America and Africa. […] they are generally restricted to areas of clear water less than about 50 metres deep. Reef-building corals are very intolerant of any freshening of seawater […] and so do not occur in areas exposed to intermittent influxes of freshwater, such as near the mouths of rivers, or in areas where there are high amounts of rainfall run-off. This is why coral reefs are absent along much of the tropical Atlantic coast of South America, which is exposed to freshwater discharge from the Amazon and Orinoco Rivers. Finally, reef-building corals flourish best in areas with moderate to high wave action, which keeps the seawater well aerated […]. Spectacular and productive coral reef systems have developed in those parts of the Global Ocean where this special combination of physical conditions converges […] Each colony consists of thousands of individual animals called polyps […] all reef-building corals have entered into an intimate relationship with plant cells. The tissues lining the inside of the tentacles and stomach cavity of the polyps are packed with photosynthetic cells called zooxanthellae, which are photosynthetic dinoflagellates […] Depending on the species, corals receive anything from about 50 per cent to 95 per cent of their food from their zooxanthellae.
[…] Healthy coral reefs are very productive marine systems. This is in stark contrast to the nutrient-poor and unproductive tropical waters adjacent to reefs. Coral reefs are, in general, roughly one hundred times more productive than the surrounding environment”.

“Overfishing constitutes a significant threat to coral reefs at this time. About an eighth of the world’s population – roughly 875 million people – live within 100 kilometres of a coral reef. Most of the people live in developing countries and island nations and depend greatly on fish obtained from coral reefs as a food source. […] Some of the fishing practices are very harmful. Once the large fish are removed from a coral reef, it becomes increasingly more difficult to make a living harvesting the more elusive and lower-value smaller fish that remain. Fishers thus resort to more destructive techniques such as dynamiting parts of the reef and scooping up the dead and stunned fish that float to the surface. People capturing fish for the tropical aquarium trade will often poison parts of the reef with sodium cyanide which paralyses the fish, making them easier to catch. An unfortunate side effect of this practice is that the poison kills corals. […] Coral reefs have only been seriously studied since the 1970s, which in most cases was well after human impacts had commenced. This makes it difficult to define what might actually constitute a ‘natural’ and healthy coral reef system, as would have existed prior to extensive human impacts.”

“Mangrove is a collective term applied to a diverse group of trees and shrubs that colonize protected muddy intertidal areas in tropical and subtropical regions, creating mangrove forests […] Mangroves are of great importance from a human perspective. The sheltered waters of a mangrove forest provide important nursery areas for juvenile fish, crabs, and shrimp. Many commercial fisheries depend on the existence of healthy mangrove forests, including blue crab, shrimp, spiny lobster, and mullet fisheries. Mangrove forests also stabilize the foreshore and protect the adjacent land from erosion, particularly from the effects of large storms and tsunamis. They also act as biological filters by removing excess nutrients and trapping sediment from land run-off before it enters the coastal environment, thereby protecting other habitats such as seagrass meadows and coral reefs. […] [However] mangrove forests are disappearing rapidly. In a twenty-year period between 1980 and 2000 the area of mangrove forest globally declined from around 20 million hectares to below 15 million hectares. In some specific regions the rate of mangrove loss is truly alarming. For example, Puerto Rico lost about 89 per cent of its mangrove forests between 1930 and 1985, while the southern part of India lost about 96 per cent of its mangroves between 1911 and 1989.”

“[A]bout 80 per cent of the entire volume of the Global Ocean, or roughly one billion cubic kilometres, consists of seawater with depths greater than 1,000 metres […] The deep ocean is a permanently dark environment devoid of sunlight, the last remnants of which cannot penetrate much beyond 200 metres in most parts of the Global Ocean, and no further than 800 metres or so in even the clearest oceanic waters. The only light present in the deep ocean is of biological origin […] Except in a few very isolated places, the deep ocean is a permanently cold environment, with sea temperatures ranging from about 2° to 4°C. […] Since there is no sunlight, there is no plant life, and thus no primary production of organic matter by photosynthesis. The base of the food chain in the deep ocean consists mostly of a ‘rain’ of small particles of organic material sinking down through the water column from the sunlit surface waters of the ocean. This reasonably constant rain of organic material is supplemented by the bodies of large fish and marine mammals that sink more rapidly to the bottom following death, and which provide sporadic feasts for deep-ocean bottom dwellers. […] Since food is a scarce commodity for deep-ocean fish, full advantage must be taken of every meal encountered. This has resulted in a number of interesting adaptations. Compared to fish in the shallow ocean, many deep-ocean fish have very large mouths capable of opening very wide, and often equipped with numerous long, sharp, inward-pointing teeth. […] These fish can capture and swallow whole prey larger than themselves so as not to pass up a rare meal simply because of its size. These fish also have greatly extensible stomachs to accommodate such meals.”

“In the pelagic environment of the deep ocean, animals must be able to keep themselves within an appropriate depth range without using up energy in their food-poor habitat. This is often achieved by reducing the overall density of the animal to that of seawater so that it is neutrally buoyant. Thus the tissues and bones of deep-sea fish are often rather soft and watery. […] There is evidence that deep-ocean organisms have developed biochemical adaptations to maintain the functionality of their cell membranes under pressure, including adjusting the kinds of lipid molecules present in membranes to retain membrane fluidity under high pressure. High pressures also affect protein molecules, often preventing them from folding up into the correct shapes for them to function as efficient metabolic enzymes. There is evidence that deep-ocean animals have evolved pressure-resistant variants of common enzymes that mitigate this problem. […] The pattern of species diversity of the deep-ocean benthos appears to differ from that of other marine communities, which are typically dominated by a small number of abundant and highly visible species which overshadow the presence of a large number of rarer and less obvious species which are also present. In the deep-ocean benthic community, in contrast, no one group of species tends to dominate, and the community consists of a high number of different species all occurring in low abundance. […] In general, species diversity increases with the size of a habitat – the larger the area of a habitat, the more species that have developed ways to successfully live in that habitat. Since the deep-ocean bottom is the largest single habitat on the planet, it follows that species diversity would be expected to be high.”

“Seamounts represent a special kind of biological hotspot in the deep ocean. […] In contrast to the surrounding flat, soft-bottomed abyssal plains, seamounts provide a complex rocky platform that supports an abundance of organisms that are distinct from the surrounding deep-ocean benthos. […] Seamounts support a great diversity of fish species […] This [has] triggered the creation of new deep-ocean fisheries focused on seamounts. […] [However these species are generally] very slow-growing and long-lived and mature at a late age, and thus have a low reproductive potential. […] Seamount fisheries have often been described as mining operations rather than sustainable fisheries. They typically collapse within a few years of the start of fishing and the trawlers then move on to other unexplored seamounts to maintain the fishery. The recovery of localized fisheries will inevitably be very slow, if achievable at all, because of the low reproductive potential of these deep-ocean fish species. […] Comparisons of ‘fished’ and ‘unfished’ seamounts have clearly shown the extent of habitat damage and loss of species diversity brought about by trawl fishing, with the dense coral habitats reduced to rubble over much of the area investigated. […] Unfortunately, most seamounts exist in areas beyond national jurisdiction, which makes it very difficult to regulate fishing activities on them, although some efforts are underway to establish international treaties to better manage and protect seamount ecosystems.”

“Hydrothermal vents are unstable and ephemeral features of the deep ocean. […] The lifespan of a typical vent is likely in the order of tens of years. Thus the rich communities surrounding vents have a very limited lifespan. Since many vent animals can live only near vents, and the distance between vent systems can be hundreds to thousands of kilometres, it is a puzzle as to how vent animals escape a dying vent and colonize other distant vents or newly created vents. […] Hydrothermal vents are [however] not the only source of chemical-laden fluids supporting unique chemosynthetic-based communities in the deep ocean. Hydrogen sulphide and methane also ooze from the ocean bottom at some locations at temperatures similar to the surrounding seawater. These so-called ‘cold seeps’ are often found along continental margins […] The communities associated with cold seeps are similar to hydrothermal vent communities […] Cold seeps appear to be more permanent sources of fluid compared to the ephemeral nature of hot water vents.”

“Seepage of crude oil into the marine environment occurs naturally from oil-containing geological formations below the seabed. It is estimated that around 600,000 tonnes of crude oil seeps into the marine environment each year, which represents almost half of all the crude oil entering the oceans. […] The human activities associated with exploring for and producing oil result in the release on average of an estimated 38,000 tonnes of crude oil into the oceans each year, which is about 6 per cent of the total anthropogenic input of oil into the oceans worldwide. Although small in comparison to natural seepage, crude oil pollution from this source can cause serious damage to coastal ecosystems because it is released near the coast and sometimes in very large, concentrated amounts. […] The transport of oil and oil products around the globe in tankers results in the release of about 150,000 tonnes of oil worldwide each year on average, or about 22 per cent of the total anthropogenic input. […] About 480,000 tonnes of oil make their way into the marine environment each year worldwide from leakage associated with the consumption of oil-derived products in cars and trucks, and to a lesser extent in boats. Oil lost from the operation of cars and trucks collects on paved urban areas from where it is washed off into streams and rivers, and from there into the oceans. Surprisingly, this represents the most significant source of human-derived oil pollution into the marine environment – about 72 per cent of the total. Because it is a very diffuse source of pollution, it is the most difficult to control.”
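The tonnages and percentages quoted above hang together arithmetically, which is easy to verify; the figures in the snippet below are simply the book's numbers restated, and the computed shares come out at roughly the quoted 6, 22, and 72 per cent, with natural seepage indeed "almost half" of the combined total:

```python
# the book's figures, in tonnes of crude oil per year
exploration_production = 38_000    # quoted as about 6 per cent of anthropogenic input
tanker_transport       = 150_000   # about 22 per cent
consumption_run_off    = 480_000   # about 72 per cent
natural_seepage        = 600_000   # "almost half" of all oil entering the oceans

anthropogenic = exploration_production + tanker_transport + consumption_run_off

def share(tonnes):
    """Rounded percentage of the total anthropogenic input."""
    return round(100 * tonnes / anthropogenic)

natural_fraction = natural_seepage / (natural_seepage + anthropogenic)
```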

“Today it has been estimated that virtually all of the marine food resources in the Mediterranean sea have been reduced to less than 50 per cent of their original abundance […] The greatest impact has been on the larger predatory fish, which were the first to be targeted by fishers. […] It is estimated that, collectively, the European fish stocks of today are just one-tenth of their size in 1900. […] In 1950 the total global catch of marine seafood was just less than twenty million tonnes fresh weight. This increased steadily and rapidly until by the late 1980s more than eighty million tonnes were being taken each year […] Starting in the early 1990s, however, yields began to show signs of levelling off. […] By far the most heavily exploited marine fishery in the world is the Peruvian anchoveta (Engraulis ringens) fishery, which can account for 10 per cent or more of the global marine catch of seafood in any particular year. […] The anchoveta is a very oily fish, which makes it less desirable for direct consumption by humans. However, the high oil content makes it ideal for the production of fish meal and fish oil […] the demand for fish meal and fish oil is huge and about a third of the entire global catch of fish is converted into these products rather than consumed directly by humans. Feeding so much fish protein to livestock comes with a considerable loss of potential food energy (around 25 per cent) compared to if it was eaten directly by humans. This could be viewed as a potential waste of available energy for a rapidly growing human population […] around 90 per cent of the fish used to produce fish meal and oil is presently unpalatable to most people and thus unmarketable in large quantities as a human food”.

“On heavily fished areas of the continental shelves, the same parts of the sea floor can be repeatedly trawled many times per year. Such intensive bottom trawling causes great cumulative damage to seabed habitats. The trawls scrape and pulverize rich and complex bottom habitats built up over centuries by living organisms such as tube worms, cold-water corals, and oysters. These habitats are eventually reduced to uniform stretches of rubble and sand. For all intents and purposes these areas are permanently altered and become occupied by a much changed and much less rich community adapted to frequent disturbance.”

“The eighty million tonnes or so of marine seafood caught each year globally equates to about eleven kilograms of wild-caught marine seafood per person on the planet. […] What is perfectly clear […] on the basis of theory backed up by real data on marine fish catches, is that marine fisheries are now fully exploited and that there is little if any headroom for increasing the amount of wild-caught fish humans can extract from the oceans to feed a burgeoning human population. […] This conclusion is solidly supported by the increasingly precarious state of global marine fishery resources. The most recent information from the Food and Agriculture Organization of the United Nations (The State of World Fisheries and Aquaculture 2010) shows that over half (53 per cent) of all fish stocks are fully exploited – their current catches are at or close to their maximum sustainable levels of production and there is no scope for further expansion. Another 32 per cent are overexploited and in decline. Of the remaining 15 per cent of stocks, 12 per cent are considered moderately exploited and only 3 per cent underexploited. […] in the mid-1970s 40 per cent of all fish stocks were in [the moderately exploited or unexploited] category as opposed to around 15 per cent now. […] the real question is not so much whether we can get more fish from the sea but whether we can sustain the amount of fish we are harvesting at present”.
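A quick arithmetic check of the figures in that passage – the stock categories do sum to 100 per cent, and the catch and per-capita figures imply a world population of roughly 7.3 billion, consistent with when the book was written (the implied population is my own back-calculation, not a number from the book):

```python
# shares of fish stocks (per cent), as reported by the FAO for 2010
fully, overexploited, moderate, under = 53, 32, 12, 3
assert fully + overexploited + moderate + under == 100

catch_tonnes = 80_000_000   # "eighty million tonnes or so" caught per year
kg_per_person = 11          # wild-caught marine seafood per person

# world population implied by the two figures above (~7.3 billion)
implied_population = catch_tonnes * 1000 / kg_per_person
```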


Atoll. Fringing reef. Barrier reef.
Broadcast spawning.
Acanthaster planci.
Coral bleaching. Ocean acidification.
Avicennia germinans. Pneumatophores. Lenticel.
Photophore. Lanternfish. Anglerfish. Black swallower.
Deep scattering layer. Taylor column.
Hydrothermal vent. Black smokers and white smokers. Chemosynthesis. Siboglinidae.
Intertidal zone. Tides. Tidal range.
Barnacle. Mussel.
Clupeidae. Gadidae. Scombridae.

March 16, 2018 Posted by | Biology, Books, Chemistry, Ecology, Evolutionary biology, Geology | Leave a comment

Safety-Critical Systems

Some related links to topics covered in the lecture:

Safety-critical system.
Safety engineering.
Fault tree analysis.
Failure mode and effects analysis.
Value of a statistical life.
ALARP principle.
Hazards and Risk (HSA).
Software system safety.
Aleatoric and epistemic uncertainty.
N-version programming.
An experimental evaluation of the assumption of independence in multiversion programming (Knight & Leveson).
Safety integrity level.
Software for Dependable Systems – Sufficient Evidence? (consensus study report).

March 15, 2018 Posted by | Computer science, Economics, Engineering, Lectures, Statistics | Leave a comment


Almost all the words included in this post are words which I encountered while reading the books The Mauritius Command, Desolation Island and You Don’t Have to Be Evil to Work Here, But it Helps.

Aleatory. Tenesmus. Celerity. Pelisse. Collop. Clem. Aviso. Crapulous. Farinaceous. Parturient. Tormina. Scend. Fascine. Distich. Appetency/appetence. Calipash. Tergiversation. Polypody. Prodigious. Teredo.

Rapacity. Cappabar. Chronometer. Figgy-dowdy. Chamade. Hauteur. Futtock. Obnubilate. Offing. Cleat. Trephine. Promulgate. Hieratic. Cockle. Froward. Aponeurosis. Lixiviate. Cupellation. Plaice. Sharper.

Morosity. Mephitic. Glaucous. Libidinous. Grist. Tilbury. Surplice. Megrim. Cumbrous. Pule. Pintle. Fifer. Roadstead. Quadrumane. Peacoat. Burgher. Cuneate. Tundish. Bung. Fother.

Dégagé. Esculent. Genuflect. Lictor. Drogue. Oakum. Spume. Gudgeon. Firk. Mezzanine. Faff. Manky. Titchy. Sprocket. Conveyancing. Apportionment. Plonker. Flammulated. Cataract. Demersal.

March 15, 2018 Posted by | Books, Language | Leave a comment

Marine Biology (I)

This book was ‘okay’.

Some quotes and links related to the first half of the book below.


“The Global Ocean has come to be divided into five regional oceans – the Pacific, Atlantic, Indian, Arctic, and Southern Oceans […] These oceans are large, seawater-filled basins that share characteristic structural features […] The edge of each basin consists of a shallow, gently sloping extension of the adjacent continental land mass and is termed the continental shelf or continental margin. Continental shelves typically extend off-shore to depths of a couple of hundred metres and vary from several kilometres to hundreds of kilometres in width. […] At the outer edge of the continental shelf, the seafloor drops off abruptly and steeply to form the continental slope, which extends down to depths of 2–3 kilometres. The continental slope then flattens out and gives way to a vast expanse of flat, soft ocean bottom – the abyssal plain – which extends over depths of about 3–5 kilometres and accounts for about 76 per cent of the Global Ocean floor. The abyssal plains are transected by extensive mid-ocean ridges – underwater mountain chains […]. Mid-ocean ridges form a continuous chain of mountains that extend linearly for 65,000 kilometres across the floor of the Global Ocean basins […]. In some places along the edges of the abyssal plains the ocean bottom is cut by narrow, oceanic trenches or canyons which plunge to extraordinary depths – 3–4 kilometres below the surrounding seafloor – and are thousands of kilometres long but only tens of kilometres wide. […] Seamounts are another distinctive and dramatic feature of ocean basins. Seamounts are typically extinct volcanoes that rise 1,000 or more metres above the surrounding ocean but do not reach the surface of the ocean. […] Seamounts generally occur in chains or clusters in association with mid-ocean ridges […] The Global Ocean contains an estimated 100,000 or so seamounts that rise more than 1,000 metres above the surrounding deep-ocean floor.
[…] on a planetary scale, the surface of the Global Ocean is moving in a series of enormous, roughly circular, wind-driven current systems, or gyres […] These gyres transport enormous volumes of water and heat energy from one part of an ocean basin to another”.

“We now know that the oceans are literally teeming with life. Viruses […] are astoundingly abundant – there are around ten million viruses per millilitre of seawater. Bacteria and other microorganisms occur at concentrations of around 1 million per millilitre”

“The water in the oceans is in the form of seawater, a dilute brew of dissolved ions, or salts […] Chloride and sodium ions are the predominant salts in seawater, along with smaller amounts of other ions such as sulphate, magnesium, calcium, and potassium […] The total amount of dissolved salts in seawater is termed its salinity. Seawater typically has a salinity of roughly 35 – equivalent to about 35 grams of salts in one kilogram of seawater. […] Most marine organisms are exposed to seawater that, compared to the temperature extremes characteristic of terrestrial environments, ranges within a reasonably moderate range. Surface waters in tropical parts of ocean basins are consistently warm throughout the year, ranging from about 20–27°C […]. On the other hand, surface seawater in polar parts of ocean basins can get as cold as −1.9°C. Sea temperatures typically decrease with depth, but not in a uniform fashion. A distinct zone of rapid temperature transition is often present that separates warm seawater at the surface from cooler deeper seawater. This zone is called the thermocline layer […]. In tropical ocean waters the thermocline layer is a strong, well-defined and permanent feature. It may start at around 100 metres and be a hundred or so metres thick. Sea temperatures above the thermocline can be a tropical 25°C or more, but only 6–7°C just below the thermocline. From there the temperature drops very gradually with increasing depth. Thermoclines in temperate ocean regions are a more seasonal phenomenon, becoming well established in the summer as the sun heats up the surface waters, and then breaking down in the autumn and winter. Thermoclines are generally absent in the polar regions of the Global Ocean. […] As a rule of thumb, in the clearest ocean waters some light will penetrate to depths of 150-200 metres, with red light being absorbed within the first few metres and green and blue light penetrating the deepest. 
At certain times of the year in temperate coastal seas light may penetrate only a few tens of metres […] In the oceans, pressure increases by an additional atmosphere every 10 metres […] Thus, an organism living at a depth of 100 metres on the continental shelf experiences a pressure ten times greater than an organism living at sea level; a creature living at 5 kilometres depth on an abyssal plain experiences pressures some 500 times greater than at the surface”.
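The pressure rule quoted there – one extra atmosphere for every 10 metres of depth – is easy to put into a formula. Note that counting the atmosphere already present at the surface, the shelf organism at 100 metres experiences 11 atmospheres in total (ten more than at sea level, which is what the passage's "ten times greater" loosely describes), and the abyssal one at 5 kilometres about 500 times the surface pressure:

```python
def pressure_atm(depth_m):
    # one atmosphere at the surface, plus one more per 10 metres of depth
    return 1 + depth_m / 10
```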

“With very few exceptions, dissolved oxygen is reasonably abundant throughout all parts of the Global Ocean. However, the amount of oxygen in seawater is much less than in air — seawater at 20°C contains about 5.4 millilitres of oxygen per litre of seawater, whereas air at this temperature contains about 210 millilitres of oxygen per litre. The colder the seawater, the more oxygen it contains […]. Oxygen is not distributed evenly with depth in the oceans. Oxygen levels are typically high in a thin surface layer 10–20 metres deep. Here oxygen from the atmosphere can freely diffuse into the seawater […] Oxygen concentration then decreases rapidly with depth and reaches very low levels, sometimes close to zero, at depths of around 200–1,000 metres. This region is referred to as the oxygen minimum zone […] This zone is created by the low rates of replenishment of oxygen diffusing down from the surface layer of the ocean, combined with the high rates of depletion of oxygen by decaying particulate organic matter that sinks from the surface and accumulates at these depths. Beneath the oxygen minimum zone, oxygen content increases again with depth such that the deep oceans contain quite high levels of oxygen, though not generally as high as in the surface layer. […] In contrast to oxygen, carbon dioxide (CO2) dissolves readily in seawater. Some of it is then converted into carbonic acid (H2CO3), bicarbonate ion (HCO3⁻), and carbonate ion (CO3²⁻), with all four compounds existing in equilibrium with one another […] The pH of seawater is inversely proportional to the amount of carbon dioxide dissolved in it. […] the warmer the seawater, the less carbon dioxide it can absorb. […] Seawater is naturally slightly alkaline, with a pH ranging from about 7.5 to 8.5, and marine organisms have become well adapted to life within this stable pH range. […] In the oceans, carbon is never a limiting factor to marine plant photosynthesis and growth, as it is for terrestrial plants.”

“Since the beginning of the industrial revolution, the average pH of the Global Ocean has dropped by about 0.1 pH unit, making it 30 per cent more acidic than in pre-industrial times. […] As a result, more and more parts of the oceans are falling below a pH of 7.5 for longer periods of time. This trend, termed ocean acidification, is having profound impacts on marine organisms and the overall functioning of the marine ecosystem. For example, many types of marine organisms such as corals, clams, oysters, sea urchins, and starfish manufacture external shells or internal skeletons containing calcium carbonate. When the pH of seawater drops below about 7.5, calcium carbonate starts to dissolve, and thus the shells and skeletons of these organisms begin to erode and weaken, with obvious impacts on the health of the animal. Also, these organisms produce their calcium carbonate structures by combining calcium dissolved in seawater with carbonate ion. As the pH decreases, more of the carbonate ions in seawater become bound up with the increasing numbers of hydrogen ions, making fewer carbonate ions available to the organisms for shell-forming purposes. It thus becomes more difficult for these organisms to secrete their calcium carbonate structures and grow.”
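The “30 per cent more acidic” figure follows from the logarithmic pH scale: pH is the negative base-10 logarithm of the hydrogen-ion concentration, so a drop of Δ units multiplies that concentration by 10^Δ. A drop of exactly 0.1 gives 10^0.1 ≈ 1.26, i.e. roughly a 26 per cent increase; the commonly quoted 30 per cent corresponds to a slightly larger drop of about 0.11 units. A minimal sketch:

```python
def acidity_increase(delta_ph):
    """Factor by which the hydrogen-ion concentration rises when pH
    drops by delta_ph (pH is -log10 of the H+ concentration)."""
    return 10 ** delta_ph

print(acidity_increase(0.1))   # ~1.26, i.e. about a 26 per cent increase
print(acidity_increase(0.11))  # ~1.29, close to the commonly quoted 30 per cent
```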

“Roughly half of the planet’s primary production — the synthesis of organic compounds by chlorophyll-bearing organisms using energy from the sun — is produced within the Global Ocean. On land the primary producers are large, obvious, and comparatively long-lived — the trees, shrubs, and grasses characteristic of the terrestrial landscape. The situation is quite different in the oceans where, for the most part, the primary producers are minute, short-lived microorganisms suspended in the sunlit surface layer of the oceans. These energy-fixing microorganisms — the oceans’ invisible forest — are responsible for almost all of the primary production in the oceans. […] A large amount, perhaps 30–50 per cent, of marine primary production is produced by bacterioplankton comprising tiny marine photosynthetic bacteria ranging from about 0.5 to 2 μm in size. […] light availability and the strength of vertical mixing are important factors limiting primary production in the oceans. Nutrient availability is the other main factor limiting the growth of primary producers. One important nutrient is nitrogen […] nitrogen is a key component of amino acids, which are the building blocks of proteins. […] Photosynthetic marine organisms also need phosphorus, which is a requirement for many important biological functions, including the synthesis of nucleic acids, a key component of DNA. Phosphorus in the oceans comes naturally from the erosion of rocks and soils on land, and is transported into the oceans by rivers, much of it in the form of dissolved phosphate (PO4³⁻), which can be readily absorbed by marine photosynthetic organisms. […] Inorganic nitrogen and phosphorus compounds are abundant in deep-ocean waters. […] In practice, inorganic nitrogen and phosphorus compounds are not used up at exactly the same rate.
Thus one will be depleted before the other and becomes the limiting nutrient at the time, preventing further photosynthesis and growth of marine primary producers until it is replenished. Nitrogen is often considered to be the rate-limiting nutrient in most oceanic environments, particularly in the open ocean. However, in coastal waters phosphorus is often the rate-limiting nutrient.”

“The overall pattern of primary production in the Global Ocean depends greatly on latitude […] In polar oceans primary production is a boom-and-bust affair driven by light availability. Here the oceans are well mixed throughout the year so nutrients are rarely limiting. However, during the polar winter there is no light, and thus no primary production is taking place. […] Although limited to a short seasonal pulse, the total amount of primary production can be quite high, especially in the polar Southern Ocean […] In tropical open oceans, primary production occurs at a low level throughout the year. Here light is never limiting but the permanent tropical thermocline prevents the mixing of deep, nutrient-rich seawater with the surface waters. […] open-ocean tropical waters are often referred to as ‘marine deserts’, with productivity […] comparable to a terrestrial desert. In temperate open-ocean regions, primary productivity is linked closely to seasonal events. […] Although occurring in a number of pulses, primary productivity in temperate oceans [is] similar to [that of] a temperate forest or grassland. […] Some of the most productive marine environments occur in coastal ocean above the continental shelves. This is the result of a phenomenon known as coastal upwelling which brings deep, cold, nutrient-rich seawater to the ocean surface, creating ideal conditions for primary productivity […], comparable to a terrestrial rainforest or cultivated farmland. These hotspots of marine productivity are created by wind acting in concert with the planet’s rotation. […] Coastal upwelling can occur when prevailing winds move in a direction roughly parallel to the edge of a continent so as to create offshore Ekman transport. Coastal upwelling is particularly prevalent along the west coasts of continents. 
[…] Since coastal upwelling is dependent on favourable winds, it tends to be a seasonal or intermittent phenomenon and the strength of upwelling will depend on the strength of the winds. […] Important coastal upwelling zones around the world include the coasts of California, Oregon, northwest Africa, and western India in the northern hemisphere; and the coasts of Chile, Peru, and southwest Africa in the southern hemisphere. These regions are amongst the most productive marine ecosystems on the planet.”

“Considering the Global Ocean as a whole, it is estimated that total marine primary production is about 50 billion tonnes of carbon per year. In comparison, the total production of land plants, which can also be estimated using satellite data, is estimated at around 52 billion tonnes per year. […] Primary production in the oceans is spread out over a much larger surface area and so the average productivity per unit of surface area is much smaller than on land. […] the energy of primary production in the oceans flows to higher trophic levels through several different pathways of various lengths […]. Some energy is lost along each step of the pathway — on average the efficiency of energy transfer from one trophic level to the next is about 10 per cent. Hence, shorter pathways are more efficient. Via these pathways, energy ultimately gets transferred to large marine consumers such as large fish, marine mammals, marine turtles, and seabirds.”
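The roughly 10 per cent transfer efficiency quoted above compounds multiplicatively, which is why shorter food chains deliver so much more energy to top consumers. A small illustration (the numbers are purely illustrative):

```python
def energy_reaching_consumer(primary_production, transfer_steps, efficiency=0.10):
    """Energy left after a number of trophic transfers, assuming the
    ~10 per cent per-step efficiency mentioned in the text."""
    return primary_production * efficiency ** transfer_steps

# Out of 1,000 units of primary production:
print(energy_reaching_consumer(1000, 2))  # short chain: ~10 units reach the top consumer
print(energy_reaching_consumer(1000, 4))  # longer chain: only ~0.1 unit survives
```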

“…it has been estimated that in the 17th century, somewhere between fifty million and a hundred million green turtles inhabited the Caribbean Sea, but numbers are now down to about 300,000. Since their numbers are now so low, their impact on seagrass communities is currently small, but in the past, green turtles would have been extraordinarily abundant grazers of seagrasses. It appears that in the past, green turtles thinned out seagrass beds, thereby reducing direct competition among different species of seagrass and allowing several species of seagrass to coexist. Without green turtles in the system, seagrass beds are generally overgrown monocultures of one dominant species. […] Seagrasses are of considerable importance to human society. […] It is therefore of great concern that seagrass meadows are in serious decline globally. In 2003 it was estimated that 15 per cent of the planet’s existing seagrass beds had disappeared in the preceding ten years. Much of this is the result of increasing levels of coastal development and dredging of the seabed, activities which release excessive amounts of sediment into coastal waters which smother seagrasses. […] The number of marine dead zones in the Global Ocean has roughly doubled every decade since the 1960s”.

“Sea ice is habitable because, unlike solid freshwater ice, it is a very porous substance. As sea ice forms, tiny spaces between the ice crystals become filled with a highly saline brine solution resistant to freezing. Through this process a three-dimensional network of brine channels and spaces, ranging from microscopic to several centimetres in size, is created within the sea ice. These channels are physically connected to the seawater beneath the ice and become colonized by a great variety of marine organisms. A significant amount of the primary production in the Arctic Ocean, perhaps up to 50 per cent in those areas permanently covered by sea ice, takes place in the ice. […] Large numbers of zooplanktonic organisms […] swarm about on the under surface of the ice, grazing on the ice community at the ice-seawater interface, and sheltering in the brine channels. […] These under-ice organisms provide the link to higher trophic levels in the Arctic food web […] They are an important food source for fish such as Arctic cod and glacial cod that graze along the bottom of the ice. These fish are in turn fed on by squid, seals, and whales.”

“[T]he Antarctic marine system consists of a ring of ocean about 10° of latitude wide – roughly 1,000 km. […] The Arctic and Antarctic marine systems can be considered geographic opposites. In contrast to the largely landlocked Arctic Ocean, the Southern Ocean surrounds the Antarctic continental land mass and is in open contact with the Atlantic, Indian, and Pacific Oceans. Whereas the Arctic Ocean is strongly influenced by river inputs, the Antarctic continent has no rivers, and so hard-bottomed seabed is common in the Southern Ocean, and there is no low-saline surface layer, as in the Arctic Ocean. Also, in contrast to the Arctic Ocean with its shallow, broad continental shelves, the Antarctic continental shelf is very narrow and steep. […] Antarctic waters are extremely nutrient rich, fertilized by a permanent upwelling of seawater that has its origins at the other end of the planet. […] This continuous upwelling of cold, nutrient-rich seawater, in combination with the long Antarctic summer day length, creates ideal conditions for phytoplankton growth, which drives the productivity of the Antarctic marine system. As in the Arctic, a well-developed sea-ice community is present. Antarctic ice algae are even more abundant and productive than in the Arctic Ocean because the sea ice is thinner, and there is thus more available light for photosynthesis. […] Antarctica’s most important marine species [is] the Antarctic krill […] Krill are very adept at surviving many months under starvation conditions — in the laboratory they can endure more than 200 days without food. During the winter months they lower their metabolic rate, shrink in body size, and revert back to a juvenile state. When food once again becomes abundant in the spring, they grow rapidly […] As the sea ice breaks up they leave the ice and begin feeding directly on the huge blooms of free-living diatoms […]. 
With so much food available they grow and reproduce quickly, and start to swarm in large numbers, often at densities in excess of 10,000 individuals per cubic metre — dense enough to colour the seawater a reddish-brown. Krill swarms are patchy and vary greatly in size […] Because the Antarctic marine system covers a large area, krill numbers are enormous, estimated at about 600 billion animals on average, or 500 million tonnes of krill. This makes Antarctic krill one of the most abundant animal species on the planet […] Antarctic krill are the main food source for many of Antarctica’s large marine animals, and a key link in a very short and efficient food chain […]. Krill comprise the staple diet of icefish, squid, baleen whales, leopard seals, fur seals, crabeater seals, penguins, and seabirds, including albatross. Thus, a very simple and efficient three-step food chain is in operation — diatoms eaten by krill in turn eaten by a suite of large consumers — which supports the large numbers of large marine animals living in the Southern Ocean.”


Ocean gyre. North Atlantic Gyre. Thermohaline circulation. North Atlantic Deep Water. Antarctic bottom water.
Cyanobacteria. Diatom. Dinoflagellate. Coccolithophore.
Trophic level.
Nitrogen fixation.
High-nutrient, low-chlorophyll regions.
Light and dark bottle method of measuring primary productivity. Carbon-14 method for estimating primary productivity.
Ekman spiral.
Peruvian anchoveta.
El Niño. El Niño–Southern Oscillation.
Dissolved organic carbon. Particulate organic matter. Microbial loop.
Kelp forest. Macrocystis. Sea urchin. Urchin barren. Sea otter.
Green sea turtle.
Demersal fish.
Eutrophication. Harmful algal bloom.
Comb jelly. Asterias amurensis.
Great Pacific garbage patch.
Eelpout. Sculpin.
Crabeater seal.
Adélie penguin.
Anchor ice mortality.

March 13, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Geology, Zoology | Leave a comment


i. “One ground for suspicion of apparently sincere moral convictions is their link with some special interest of those who hold them. The questions cui bono and cui malo are appropriate questions to raise when we are searching for possible contaminants of conscience. Entrenched privilege, and fear of losing it, distorts one’s moral sense.” (Annette Baier)

ii. “Most people do not listen with the intent to understand; they listen with the intent to reply.” (Stephen Covey)

iii. “Plastic surgery is a way for people to buy themselves a few years before they have to truly confront what ageing is, which of course is not that your looks are falling apart, but that you are falling apart and some-day you will have fallen apart and ceased to exist.” (Nora Ephron)

iv. “Just because you know a thing is true in theory, doesn’t make it true in fact. The barbaric religions of primitive worlds hold not a germ of scientific fact, though they claim to explain all. Yet if one of these savages has all the logical ground for his beliefs taken away — he doesn’t stop believing. He then calls his mistaken beliefs ‘faith’ because he knows they are right. And he knows they are right because he has faith. This is an unbreakable circle of false logic that can’t be touched. In reality, it is plain mental inertia.” (Harry Harrison)

v. “A taste is almost defined as a preference about which you do not argue — de gustibus non est disputandum. A taste about which you argue, with others or yourself, ceases ipso facto being a taste – it turns into a value.” (Albert O. Hirschman)

vi. “I will be ashamed the day I feel I should knuckle under to social-political pressures about issues and research I think are important for the advance of scientific knowledge.” (Arthur Jensen)

vii. “My theory is that we are all idiots. The people who don’t think they’re idiots — they’re the ones that are dangerous.” (Eric Sykes)

viii. “What you get by achieving your goals is not as important as what you become by achieving your goals.” (Zig Ziglar)

ix. “If you go looking for a friend, you’re going to find they’re very scarce. If you go out to be a friend, you’ll find them everywhere.” (-ll-)

x. “The rights of individuals to the use of resources (i.e., property rights) in any society are to be construed as supported by the force of etiquette, social custom, ostracism, and formal legally enacted laws supported by the states’ power of violence of punishment. Many of the constraints on the use of what we call private property involve the force of etiquette and social ostracism. The level of noise, the kind of clothes we wear, our intrusion on other people’s privacy are restricted not merely by laws backed by police force, but by social acceptance, reciprocity, and voluntary social ostracism for violators of accepted codes of conduct.” (Armen Alchian)

xi. “Whenever undiscussables exist, their existence is also undiscussable. Moreover, both are covered up, because rules that make important issues undiscussables violate espoused norms…” (Chris Argyris)

xii. “Experience can be merely the repetition of […the? – US] same error often enough.” (John Azzopardi)

xiii. “Empathize with stupidity and you’re halfway to thinking like an idiot.” (Iain Banks)

xiv. “A man in daily muddy contact with field experiments could not be expected to have much faith in any direct assumption of independently distributed normal errors.” (George E. P. Box)

xv. “There is nothing that makes the mind more elastic and expandable than discovering how the world works.” (Edgar Bronfman, Sr.)

xvi. “I don’t give advice. I can’t tell anybody what to do. Instead I say this is what we know about this problem at this time. And here are the consequences of these actions.” (Joyce Diane Brothers)

xvii. “Don’t fool yourself that you are going to have it all. You are not. Psychologically, having it all is not even a valid concept. The marvelous thing about human beings is that we are perpetually reaching for the stars. The more we have, the more we want. And for this reason, we never have it all.” (-ll-)

xviii. “We control fifty percent of a relationship. We influence one hundred percent of it.” (-ll-)

xix. “Being taken for granted can be a compliment. It means that you’ve become a comfortable, trusted element in another person’s life.” (-ll-)

xx. “The world at large does not judge us by who we are and what we know; it judges us by what we have.” (-ll-)

March 5, 2018 Posted by | Quotes/aphorisms | Leave a comment


The words included in this post are words I encountered while reading Patrick O’Brian’s books Post Captain and HMS Surprise. As with the previous post in this series, I had to include ~100 words, rather than the ~80 I have come to consider ‘the standard’ for these posts, in order to cover all the words of interest I encountered in the books.

Mésalliance. Mansuetude. Wen. Raffish. Stave. Gorse. Lurcher. Improvidence/improvident. Sough. Bowse. Mump. Jib. Tipstaff. Squalid. Strum. Hussif. Dowdy. Cognoscent. Footpad. Quire.

Vacillation. Wantonness. Escritoire/scrutoire. Mantua. Shindy. Vinous. Top-hamper. Holystone. Keelson. Bollard/bitts. Wicket. Paling. Brace (sailing). Coxcomb. Foin. Stern chaser. Galliot. Postillion. Coot. Fanfaronade.

Malversation. Arenaceous. Tope. Shebeen. Lithotomy. Quoin/coign. Mange. Curricle. Cockade. Spout. Bistoury. Embrasure. Acushla. Circumambulation. Glabrous. Impressment. Transpierce. Dilatoriness. Conglobate. Murrain.

Anfractuous/anfractuosity. Conversible. Tunny. Weevil. Posset. Sponging-house. Salmagundi. Hugger-mugger. Euphroe. Jobbery. Dun. Privity. Intension. Shaddock. Catharpin. Peccary. Tarpaulin. Frap. Bombinate. Spirketing.

Glacis. Gymnosophist. Fibula. Dreary. Barouche. Syce. Carmine. Lustration. Rood. Timoneer. Crosstrees. Luff. Mangosteen. Mephitic. Superfetation. Pledget. Innominate. Jibboom. Pilau. Ataraxy.

February 27, 2018 Posted by | Books, Language | Leave a comment

The Ice Age (II)

I really liked the book, recommended if you’re at all interested in this kind of stuff. Below some observations from the book’s second half, and some related links:

“Charles MacLaren, writing in 1842, […] argued that the formation of large ice sheets would result in a fall in sea level as water was taken from the oceans and stored frozen on the land. This insight triggered a new branch of ice age research – sea level change. This topic can get rather complicated because as ice sheets grow, global sea level falls. This is known as eustatic sea level change. As ice sheets increase in size, their weight depresses the crust and relative sea level will rise. This is known as isostatic sea level change. […] It is often quite tricky to differentiate between regional-scale isostatic factors and the global-scale eustatic sea level control.”

“By the late 1870s […] glacial geology had become a serious scholarly pursuit with a rapidly growing literature. […] [In the late 1880s] Carvill Lewis […] put forward the radical suggestion that the [sea] shells at Moel Tryfan and other elevated localities (which provided the most important evidence for the great marine submergence of Britain) were not in situ. Building on the earlier suggestions of Thomas Belt (1832–78) and James Croll, he argued that these materials had been dredged from the sea bed by glacial ice and pushed upslope so that ‘they afford no testimony to the former subsidence of the land’. Together, his recognition of terminal moraines and the reworking of marine shells undermined the key pillars of Lyell’s great marine submergence. This was a crucial step in establishing the primacy of glacial ice over icebergs in the deposition of the drift in Britain. […] By the end of the 1880s, it was the glacial dissenters who formed the eccentric minority. […] In the period leading up to World War One, there was [instead] much debate about whether the ice age involved a single phase of ice sheet growth and freezing climate (the monoglacial theory) or several phases of ice sheet build up and decay separated by warm interglacials (the polyglacial theory).”

“As the Earth rotates about its axis travelling through space in its orbit around the Sun, there are three components that change over time in elegant cycles that are entirely predictable. These are known as eccentricity, precession, and obliquity or ‘stretch, wobble, and roll’ […]. These orbital perturbations are caused by the gravitational pull of the other planets in our Solar System, especially Jupiter. Milankovitch calculated how each of these orbital cycles influenced the amount of solar radiation received at different latitudes over time. These are known as Milankovitch Cycles or Croll–Milankovitch Cycles to reflect the important contribution made by both men. […] The shape of the Earth’s orbit around the Sun is not constant. It changes from an almost circular orbit to one that is mildly elliptical (a slightly stretched circle) […]. This orbital eccentricity operates over a 400,000- and 100,000-year cycle. […] Changes in eccentricity have a relatively minor influence on the total amount of solar radiation reaching the Earth, but they are important for the climate system because they modulate the influence of the precession cycle […]. When eccentricity is high, for example, axial precession has a greater impact on seasonality. […] The Earth is currently tilted at an angle of 23.4° to the plane of its orbit around the Sun. Astronomers refer to this axial tilt as obliquity. This angle is not fixed. It rolls back and forth over a 41,000-year cycle from a tilt of 22.1° to 24.5° and back again […]. Even small changes in tilt can modify the strength of the seasons. With a greater angle of tilt, for example, we can have hotter summers and colder winters. […] Cooler, reduced insolation summers are thought to be a key factor in the initiation of ice sheet growth in the middle and high latitudes because they allow more snow to survive the summer melt season. 
Slightly warmer winters may also favour ice sheet build-up as greater evaporation from a warmer ocean will increase snowfall over the centres of ice sheet growth. […] The Earth’s axis of rotation is not fixed. It wobbles like a spinning top slowing down. This wobble traces a circle on the celestial sphere […]. At present the Earth’s rotational axis points toward Polaris (the current northern pole star) but in 11,000 years it will point towards another star, Vega. This slow circling motion is known as axial precession and it has important impacts on the Earth’s climate by causing the solstices and equinoxes to move around the Earth’s orbit. In other words, the seasons shift over time. Precession operates over a 19,000- and 23,000-year cycle. This cycle is often referred to as the Precession of the Equinoxes.”
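To get a feel for how cycles of different lengths interact, one can superimpose cosines with the periods given above. This toy sketch uses equal, arbitrary amplitudes and ignores the 400,000-year eccentricity component and the 19,000-year precession term, so it illustrates only the qualitative idea of overlapping cycles, not actual insolation:

```python
import math

# Approximate periods (in years) of the orbital cycles described above.
PERIODS = {"eccentricity": 100_000, "obliquity": 41_000, "precession": 23_000}

def toy_orbital_signal(year):
    """Sum of three cosines with equal, arbitrary amplitudes -- a
    qualitative illustration of how cycles of different lengths
    combine into an irregular-looking signal, not a physical
    insolation model."""
    return sum(math.cos(2 * math.pi * year / p) for p in PERIODS.values())

# Sample the combined signal every 25,000 years over 200,000 years.
for t in range(0, 200_001, 25_000):
    print(f"{t:>7} years: {toy_orbital_signal(t):+.3f}")
```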

“The albedo of a surface is a measure of its ability to reflect solar energy. Darker surfaces tend to absorb most of the incoming solar energy and have low albedos. The albedo of the ocean surface in high latitudes is commonly about 10 per cent — in other words, it absorbs 90 per cent of the incoming solar radiation. In contrast, snow, glacial ice, and sea ice have much higher albedos and can reflect between 50 and 90 per cent of incoming solar energy back into the atmosphere. The elevated albedos of bright frozen surfaces are a key feature of the polar radiation budget. Albedo feedback loops are important over a range of spatial and temporal scales. A cooling climate will increase snow cover on land and the extent of sea ice in the oceans. These high albedo surfaces will then reflect more solar radiation to intensify and sustain the cooling trend, resulting in even more snow and sea ice. This positive feedback can play a major role in the expansion of snow and ice cover and in the initiation of a glacial phase. Such positive feedbacks can also work in reverse when a warming phase melts ice and snow to reveal dark and low albedo surfaces such as peaty soil or bedrock.”
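The albedo figures above translate into a large difference in absorbed energy between open ocean and ice cover, which is what drives the feedback loop. A one-line sketch (the 0.80 sea-ice value is one point within the 50–90 per cent reflectance range given in the text):

```python
def absorbed_fraction(albedo):
    """Fraction of incoming solar energy a surface absorbs."""
    return 1 - albedo

ocean, sea_ice = 0.10, 0.80  # representative albedos from the text
print(absorbed_fraction(ocean))    # open ocean absorbs ~90 per cent
print(absorbed_fraction(sea_ice))  # bright sea ice absorbs only ~20 per cent
print(absorbed_fraction(ocean) / absorbed_fraction(sea_ice))  # ocean absorbs ~4.5x as much
```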

“At the end of the Cretaceous, around 65 million years ago (Ma), lush forests thrived in the Polar Regions and ocean temperatures were much warmer than today. This warm phase continued for the next 10 million years, peaking during the Eocene thermal maximum […]. From that time onwards, however, Earth’s climate began a steady cooling that saw the initiation of widespread glacial conditions, first in Antarctica between 40 and 30 Ma, in Greenland between 20 and 15 Ma, and then in the middle latitudes of the northern hemisphere around 2.5 Ma. […] Over the past 55 million years, a succession of processes driven by tectonics combined to cool our planet. It is difficult to isolate their individual contributions or to be sure about the details of cause and effect over this long period, especially when there are uncertainties in dating and when one considers the complexity of the climate system with its web of internal feedbacks.” [Potential causes which have been highlighted include: the uplift of the Himalayas (which increased weathering rates and thus, over geological time, the amount of CO2 sequestered in calcium carbonate deposited on the ocean floor, lowering atmospheric CO2 levels), the isolation of Antarctica, which created the Antarctic Circumpolar Current (leading to a cooling of Antarctica), the dry-out of the Mediterranean Sea ~5 million years ago (which significantly lowered salt concentrations in the World Ocean, meaning that sea water froze at a higher temperature), and the formation of the Isthmus of Panama. – US].

“[F]or most of the last 1 million years, large ice sheets were present in the middle latitudes of the northern hemisphere and sea levels were lower than today. Indeed, ‘average conditions’ for the Quaternary Period involve much more ice than present. The interglacial peaks — such as the present Holocene interglacial, with its ice volume minima and high sea level — are the exception rather than the norm. The sea level maximum of the Last Interglacial (MIS 5) is higher than today. It also shows that cold glacial stages (c.80,000 years duration) are much longer than interglacials (c.15,000 years). […] Arctic willow […], the northernmost woody plant on Earth, is found in central European pollen records from the last glacial stage. […] For most of the Quaternary deciduous forests have been absent from most of Europe. […] the interglacial forests of temperate Europe that are so familiar to us today are, in fact, rather atypical when we consider the long view of Quaternary time. Furthermore, if the last glacial period is representative of earlier ones, for much of the Quaternary terrestrial ecosystems were continuously adjusting to a shifting climate.”

“Greenland ice cores typically have very clear banding […] that corresponds to individual years of snow accumulation. This is because the snow that falls in summer under the permanent Arctic sun differs in texture to the snow that falls in winter. The distinctive paired layers can be counted like tree rings to produce a finely resolved chronology with annual and even seasonal resolution. […] Ice accumulation is generally much slower in Antarctica, so the ice core record takes us much further back in time. […] As layers of snow become compacted into ice, air bubbles recording the composition of the atmosphere are sealed in discrete layers. This fossil air can be recovered to establish the changing concentration of greenhouse gases such as carbon dioxide (CO2) and methane (CH4). The ice core record therefore allows climate scientists to explore the processes involved in climate variability over very long timescales. […] By sampling each layer of ice and measuring its oxygen isotope composition, Dansgaard produced an annual record of air temperature for the last 100,000 years. […] Perhaps the most startling outcome of this work was the demonstration that global climate could change extremely rapidly. Dansgaard showed that dramatic shifts in mean air temperature (>10°C) had taken place in less than a decade. These findings were greeted with scepticism and there was much debate about the integrity of the Greenland record, but subsequent work from other drilling sites vindicated all of Dansgaard’s findings. […] The ice core records from Greenland reveal a remarkable sequence of abrupt warming and cooling cycles within the last glacial stage. These are known as Dansgaard–Oeschger (D–O) cycles. […] [A] series of D–O cycles between 65,000 and 10,000 years ago [caused] mean annual air temperatures on the Greenland ice sheet [to be] shifted by as much as 10°C. Twenty-five of these rapid warming events have been identified during the last glacial period. 
This discovery dispelled the long held notion that glacials were lengthy periods of stable and unremitting cold climate. The ice core record shows very clearly that even the glacial climate flipped back and forth. […] D–O cycles commence with a very rapid warming (between 5 and 10°C) over Greenland followed by a steady cooling […] Deglaciations are rapid because positive feedbacks speed up both the warming trend and ice sheet decay. […] The ice core records heralded a new era in climate science: the study of abrupt climate change. Most sedimentary records of ice age climate change yield relatively low resolution information — a thousand years may be packed into a few centimetres of marine or lake sediment. In contrast, ice cores cover every year. They also retain a greater variety of information about the ice age past than any other archive. We can even detect layers of volcanic ash in the ice and pinpoint the date of ancient eruptions.”

“There are strong thermal gradients in both hemispheres because the low latitudes receive the most solar energy and the poles the least. To redress these imbalances the atmosphere and oceans move heat polewards — this is the basis of the climate system. In the North Atlantic a powerful surface current takes warmth from the tropics to higher latitudes: this is the famous Gulf Stream and its northeastern extension the North Atlantic Drift. Two main forces drive this current: the strong southwesterly winds and the return flow of colder, saltier water known as North Atlantic Deep Water (NADW). The surface current loses much of its heat to air masses that give maritime Europe a moist, temperate climate. Evaporative cooling also increases its salinity so that it begins to sink. As the dense and cold water sinks to the deep ocean to form NADW, it exerts a strong pull on the surface currents to maintain the cycle. It returns south at depths >2,000 m. […] The thermohaline circulation in the North Atlantic was periodically interrupted during Heinrich Events when vast discharges of melting icebergs cooled the ocean surface and reduced its salinity. This shut down the formation of NADW and suppressed the Gulf Stream.”


Archibald Geikie.
Andrew Ramsay (geologist).
Albrecht Penck. Eduard Brückner. Günz glaciation. Mindel glaciation. Riss glaciation. Würm.
Perihelion and aphelion.
Deep Sea Drilling Project.
δ18O. Isotope fractionation.
Marine isotope stage.
Cesare Emiliani.
Nicholas Shackleton.
Brunhes–Matuyama reversal. Geomagnetic reversal. Magnetostratigraphy.
Climate: Long range Investigation, Mapping, and Prediction (CLIMAP).
Uranium–thorium dating. Luminescence dating. Optically stimulated luminescence. Cosmogenic isotope dating.
The role of orbital forcing in the Early-Middle Pleistocene Transition (paper).
European Project for Ice Coring in Antarctica (EPICA).
Younger Dryas.
Lake Agassiz.
Greenland Ice Core Project (GRIP).
J Harlen Bretz. Missoula Floods.
Pleistocene megafauna.

February 25, 2018 Posted by | Astronomy, Engineering, Geology, History, Paleontology, Physics | Leave a comment

Sieve methods: what are they, and what are they good for?

Given the nature of the lecture it was difficult to come up with relevant links to include in this post, but these seemed relevant enough to include them here:

Sieve theory.
Inclusion–exclusion principle.
Fundamental lemma of sieve theory.
Parity problem (sieve theory).
Viggo Brun (the lecturer mentions along the way that many of the things he talks about in this lecture are things this guy figured out, but the wiki article is unfortunately very short).

As he notes early on, when working with sieves we’re: “*Interested in objects which are output of some inclusion-exclusion process & *Rather than counting precisely, we want to gain good bounds, but work flexibly.”

‘Counting’ should probably be interpreted loosely here, in the general scheme of things; sieves are mostly used in number theory, but as Maynard mentions, similar methods can presumably be used in other mathematical contexts – hence the deliberate use of the word ‘objects’. It seems to be all about trying to ascertain some properties of some objects/sets/whatever, without necessarily imposing much structure (‘are we within the right order of magnitude?’ rather than ‘did we get them all?’). The basic idea behind restricting the amount of structure imposed is, as far as I gathered from the lecture, to make the problem you’re faced with more tractable.
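To make the inclusion-exclusion idea concrete, here is a minimal sketch (my own illustration, not something from the lecture) of the kind of counting a simple sieve performs: counting the integers up to N that survive sieving by a set of small primes, by summing floor(N/d) over squarefree products d of the sieving primes with alternating signs.

```python
from itertools import combinations
from math import prod

def legendre_count(N, primes):
    """Count integers in [1, N] divisible by none of `primes`,
    via inclusion-exclusion: sum over subsets S of the sieving
    primes of (-1)^|S| * floor(N / prod(S))."""
    total = 0
    for k in range(len(primes) + 1):
        for subset in combinations(primes, k):
            total += (-1) ** k * (N // prod(subset, start=1))
    return total

# Sieving [1, 100] by 2, 3, 5, 7 leaves 1 and the 21 primes in (7, 100]:
print(legendre_count(100, [2, 3, 5, 7]))  # 22
```

The exact count requires summing over all 2^k subsets, which blows up as the number of sieving primes grows; the point made in the lecture is that modern sieve methods trade this exactness for good upper and lower bounds obtained far more flexibly.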

February 24, 2018 Posted by | Lectures, Mathematics | Leave a comment