Econstudentlog

Geophysics (I)

“Geophysics is a field of earth sciences that uses the methods of physics to investigate the physical properties of the Earth and the processes that have determined and continue to govern its evolution. Geophysical investigations cover a wide range of research fields, extending from surface changes that can be observed from Earth-orbiting satellites to unseen behaviour in the Earth’s deep interior. […] This book presents a general overview of the principal methods of geophysics that have contributed to our understanding of Planet Earth and how it works.”

I gave this book five stars on goodreads, where I deemed it: ‘An excellent introduction to the topic, with high-level yet satisfactorily detailed coverage of many areas of interest.’ It doesn’t cover these topics in as much detail as books like Press & Siever (…a book which I incidentally covered, though not in much detail, here and here), but it’s a very decent introductory book on these topics. I have added some observations and links related to the first half of the book’s coverage below.

“The gravitational attractions of the other planets — especially Jupiter, whose mass is 2.5 times the combined mass of all the other planets — influence the Earth’s long-term orbital rotations in a complex fashion. The planets move with different periods around their differently shaped and sized orbits. Their gravitational attractions impose fluctuations on the Earth’s orbit at many frequencies, a few of which are more significant than the rest. One important effect is on the obliquity: the amplitude of the axial tilt is forced to change rhythmically between a maximum of 24.5 degrees and a minimum of 22.1 degrees with a period of 41,000 yr. Another gravitational interaction with the other planets causes the orientation of the elliptical orbit to change with respect to the stars […]. The line of apsides — the major axis of the ellipse — precesses around the pole to the ecliptic in a prograde sense (i.e. in the same sense as the Earth’s rotation) with a period of 100,000 yr. This is known as planetary precession. Additionally, the shape of the orbit changes with time […], so that the eccentricity varies cyclically between 0.005 (almost circular) and a maximum of 0.058; currently it is 0.0167 […]. The dominant period of the eccentricity fluctuation is 405,000 yr, on which a further fluctuation of around 100,000 yr is superposed, which is close to the period of the planetary precession.”
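
The cited periods are easier to get a feel for with a toy calculation (mine, not the book’s): the snippet below simply superposes sinusoids with the quoted periods (41,000 yr for the obliquity; 405,000 yr and roughly 100,000 yr for the eccentricity), with amplitudes picked to respect the quoted ranges. The real astronomical solution involves many more frequencies with drifting amplitudes and phases, so treat this purely as an illustration of how the cycles combine.

```python
import numpy as np

# Toy superposition of the orbital cycles quoted above (periods in kyr).
# Amplitudes and phases are illustrative only; the real astronomical
# solution is far more complex.
t = np.arange(0, 1000, 1.0)  # time in kyr

# Obliquity: forced between ~22.1 and ~24.5 degrees with a 41 kyr period.
obliquity = 23.3 + 1.2 * np.sin(2 * np.pi * t / 41.0)

# Eccentricity: dominant 405 kyr cycle with a weaker ~100 kyr cycle on top,
# constants chosen so the values stay within the quoted 0.005-0.058 range.
eccentricity = (0.0315
                + 0.0205 * np.sin(2 * np.pi * t / 405.0)
                + 0.0060 * np.sin(2 * np.pi * t / 100.0))

print(f"obliquity range:    {obliquity.min():.1f} to {obliquity.max():.1f} degrees")
print(f"eccentricity range: {eccentricity.min():.3f} to {eccentricity.max():.3f}")
```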

“The amount of solar energy received by a unit area of the Earth’s surface is called the insolation. […] The long-term fluctuations in the Earth’s rotation and orbital parameters influence the insolation […] and this causes changes in climate. When the obliquity is smallest, the axis is more upright with respect to the ecliptic than at present. The seasonal differences are then smaller and vary less between polar and equatorial regions. Conversely, a large axial tilt causes an extreme difference between summer and winter at all latitudes. The insolation at any point on the Earth thus changes with the obliquity cycle. Precession of the axis also changes the insolation. At present the north pole points away from the Sun at perihelion; one half of a precessional cycle later it will point away from the Sun at aphelion. This results in a change of insolation and an effect on climate with a period equal to that of the precession. The orbital eccentricity cycle changes the Earth–Sun distances at perihelion and aphelion, with corresponding changes in insolation. When the orbit is closest to being circular, the perihelion–aphelion difference in insolation is smallest, but when the orbit is more elongate this difference increases. In this way the changes in eccentricity cause long-term variations in climate. The periodic climatic changes due to orbital variations are called Milankovitch cycles, after the Serbian astronomer Milutin Milankovitch, who studied them systematically in the 1920s and 1930s. […] The evidence for cyclical climatic variations is found in geological sedimentary records and in long cores drilled into the ice on glaciers and in polar regions. […] Sedimentation takes place slowly over thousands of years, during which the Milankovitch cycles are recorded in the physical and chemical properties of the sediments. Analyses of marine sedimentary sequences deposited in the deep oceans over millions of years have revealed cyclical variations in a number of physical properties. Examples are bedding thickness, sediment colour, isotopic ratios, and magnetic susceptibility. […] The records of oxygen isotope ratios in long ice cores display Milankovitch cycles and are important evidence for the climatic changes, generally referred to as orbital forcing, which are brought about by the long-term variations in the Earth’s orbit and axial tilt.”
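
The eccentricity effect on insolation is easy to quantify (my own back-of-the-envelope, not a calculation from the book): insolation falls off as the inverse square of the Earth–Sun distance, and the perihelion and aphelion distances are a(1 − e) and a(1 + e), so the perihelion-to-aphelion insolation ratio is ((1 + e)/(1 − e))² and depends only on the eccentricity e.

```python
def perihelion_aphelion_insolation_ratio(e):
    """Top-of-atmosphere insolation at perihelion divided by that at aphelion.

    Insolation scales as 1/d^2; perihelion distance is a(1 - e) and aphelion
    distance is a(1 + e), so the semi-major axis a cancels out.
    """
    return ((1 + e) / (1 - e)) ** 2

# Current, near-circular, and maximum eccentricities quoted above.
for e in (0.0167, 0.005, 0.058):
    excess = (perihelion_aphelion_insolation_ratio(e) - 1) * 100
    print(f"e = {e:.4f}: perihelion insolation exceeds aphelion by {excess:4.1f}%")
```

At the current eccentricity the perihelion–aphelion difference in insolation thus comes out at roughly 7 per cent; near the circular extreme it shrinks to about 2 per cent, and at the maximum eccentricity it grows to roughly 26 per cent.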

Stress is defined as the force acting on a unit area. The fractional deformation it causes is called strain. The stress–strain relationship describes the mechanical behaviour of a material. When subjected to a low stress, materials deform in an elastic manner so that stress and strain are proportional to each other and the material returns to its original unstrained condition when the stress is removed. Seismic waves usually propagate under conditions of low stress. If the stress is increased progressively, a material eventually reaches its elastic limit, beyond which it cannot return to its unstrained state. Further stress causes disproportionately large strain and permanent deformation. Eventually the stress causes the material to reach its breaking point, at which it ruptures. The relationship between stress and strain is an important aspect of seismology. Two types of elastic deformation—compressional and shear—are important in determining how seismic waves propagate in the Earth. Imagine a small block that is subject to a deforming stress perpendicular to one face of the block; this is called a normal stress. The block shortens in the direction it is squeezed, but it expands slightly in the perpendicular direction; when stretched, the opposite changes of shape occur. These reversible elastic changes depend on how the material responds to compression or tension. This property is described by a physical parameter called the bulk modulus. In a shear deformation, the stress acts parallel to the surface of the block, so that one edge moves parallel to the opposite edge, changing the shape but not the volume of the block. This elastic property is described by a parameter called the shear modulus. An earthquake causes normal and shear strains that result in four types of seismic wave. Each type of wave is described by two quantities: its wavelength and frequency. The wavelength is the distance between successive peaks of a vibration, and the frequency is the number of vibrations per second. Their product is the speed of the wave.”

“A seismic P-wave (also called a primary, compressional, or longitudinal wave) consists of a series of compressions and expansions caused by particles in the ground moving back and forward parallel to the direction in which the wave travels […] It is the fastest seismic wave and can pass through fluids, although with reduced speed. When it reaches the Earth’s surface, a P-wave usually causes nearly vertical motion, which is recorded by instruments and may be felt by people but usually does not result in severe damage. […] A seismic S-wave (i.e. secondary or shear wave) arises from shear deformation […] It travels by means of particle vibrations perpendicular to the direction of travel; for that reason it is also known as a transverse wave. The shear wave vibrations are further divided into components in the horizontal and vertical planes, labelled the SH- and SV-waves, respectively. […] an S-wave is slower than a P-wave, propagating about 58 per cent as fast […] Moreover, shear waves can only travel in a material that supports shear strain. This is the case for a solid object, in which the molecules have regular locations and intermolecular forces hold the object together. By contrast, a liquid (or gas) is made up of independent molecules that are not bonded to each other, and thus a fluid has no shear strength. For this reason S-waves cannot travel through a fluid. […] S-waves have components in both the horizontal and vertical planes, so when they reach the Earth’s surface they shake structures from side to side as well as up and down. They can have larger amplitudes than P-waves. Buildings are better able to resist up-and-down motion than side-to-side shaking, and as a result SH-waves can cause serious damage to structures. […] Surface waves spread out along the Earth’s surface around a point – called the epicentre – located vertically above the earthquake’s source […] Very deep earthquakes usually do not produce surface waves, but the surface waves caused by shallow earthquakes are very destructive. In contrast to seismic body waves, which can spread out in three dimensions through the Earth’s interior, the energy in a seismic surface wave is guided by the free surface. It is only able to spread out in two dimensions and is more concentrated. Consequently, surface waves have the largest amplitudes on the seismogram of a shallow earthquake […] and are responsible for the strongest ground motions and greatest damage. There are two types of surface wave. [Rayleigh waves & Love waves, US].”
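
The ‘about 58 per cent’ figure can be tied back to the bulk and shear moduli mentioned in the previous excerpt via the standard elastic-wave relations Vp = √((K + 4G/3)/ρ) and Vs = √(G/ρ). The sketch below uses representative values for crustal rock that I have assumed for illustration; they are not taken from the book.

```python
import math

def body_wave_speeds(K, G, rho):
    """P- and S-wave speeds from bulk modulus K and shear modulus G (in Pa)
    and density rho (kg/m^3), using the standard elastic-wave relations."""
    vp = math.sqrt((K + 4 * G / 3) / rho)
    vs = math.sqrt(G / rho)
    return vp, vs

# Assumed, representative values for crustal rock (illustrative only):
K, G, rho = 50e9, 30e9, 2700
vp, vs = body_wave_speeds(K, G, rho)
print(f"Vp = {vp/1000:.1f} km/s, Vs = {vs/1000:.1f} km/s, Vs/Vp = {vs/vp:.2f}")
# Vs/Vp comes out at about 0.58, i.e. the 'about 58 per cent' quoted above.
# In a fluid the shear modulus G is zero, so Vs = 0: S-waves cannot propagate,
# exactly as the quote explains.
```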

“The number of earthquakes that occur globally each year falls off with increasing magnitude. Approximately 1.4 million earthquakes annually have magnitude 2 or larger; of these about 1,500 have magnitude 5 or larger. The number of very damaging earthquakes with magnitude above 7 varies from year to year but has averaged about 15-20 annually since 1900. On average, one earthquake per year has magnitude 8 or greater, although such large events occur at irregular intervals. A magnitude 9 earthquake may release more energy than the cumulative energy of all other earthquakes in the same year. […] Large earthquakes may be preceded by foreshocks, which are lesser events that occur shortly before and in the same region as the main shock. They indicate the build-up of stress that leads to the main rupture. Large earthquakes are also followed by smaller aftershocks on the same fault or near to it; their frequency decreases as time passes, following the main shock. Aftershocks may individually be large enough to have serious consequences for a damaged region, because they can cause already weakened structures to collapse. […] About 90 per cent of the world’s earthquakes and 75 per cent of its volcanoes occur in the circum-Pacific belt known as the ‘Ring of Fire‘. […] The relative motions of the tectonic plates at their margins, together with changes in the state of stress within the plates, are responsible for most of the world’s seismicity. Earthquakes occur much more rarely in the geographic interiors of the plates. However, large intraplate earthquakes do occur […] In 2001 an intraplate earthquake with magnitude 7.7 occurred on a previously unknown fault under Gujarat, India […], killing 20,000 people and destroying 400,000 homes. […] Earthquakes are a serious hazard for populations, their property, and the natural environment. Great effort has been invested in the effort to predict their occurrence, but as yet without general success. […] Scientists have made more progress in assessing the possible location of an earthquake than in predicting the time of its occurrence. Although a damaging event can occur whenever local stress in the crust exceeds the breaking point of underlying rocks, the active seismic belts where this is most likely to happen are narrow and well defined […]. Unfortunately many densely populated regions and great cities are located in some of the seismically most active regions.[…] it is not yet possible to forecast reliably where or when an earthquake will occur, or how large it is likely to be.”
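
The quoted counts follow the well-known Gutenberg–Richter frequency–magnitude pattern, and the energy claim can be sanity-checked with the standard Gutenberg–Richter energy–magnitude relation. Neither formula is given in the book, and the constants below are common textbook values I have assumed, so treat this as my own rough consistency check rather than the author’s numbers.

```python
def annual_count_at_or_above(M, a=8.15, b=1.0):
    """Approximate global number of earthquakes per year with magnitude >= M,
    from the Gutenberg-Richter relation log10 N = a - b*M (constants assumed)."""
    return 10 ** (a - b * M)

def radiated_energy_joules(M):
    """Gutenberg-Richter energy-magnitude relation: log10 E = 1.5*M + 4.8 (E in joules)."""
    return 10 ** (1.5 * M + 4.8)

for M in (2, 5, 7, 8, 9):
    print(f"M >= {M}: roughly {annual_count_at_or_above(M):>9,.0f} events/yr; "
          f"one M{M} event releases ~{radiated_energy_joules(M):.1e} J")

# Each step of one magnitude unit means roughly 10x fewer events but 10**1.5
# (about 32x) more energy, which is why a single magnitude-9 earthquake can
# release more energy than all the year's other earthquakes combined.
```

With these constants the relation reproduces the quoted figures reasonably well: about 1.4 million events per year at magnitude 2 and above, about 1,400 at magnitude 5 and above, around 14 at magnitude 7 and above, and on the order of one per year at magnitude 8 and above.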

Links:

Plate tectonics.
Geodesy.
Seismology. Seismometer.
Law of conservation of energy. Second law of thermodynamics (This book incidentally covers these topics in much more detail, and does it quite well – US).
Angular momentum.
Big Bang model. Formation and evolution of the Solar System (…I should probably mention here that I do believe Wikipedia covers these sorts of topics quite well).
Invariable plane. Ecliptic.
Newton’s law of universal gravitation.
Kepler’s laws of planetary motion.
Potential energy. Kinetic energy. Orbital eccentricity. Line of apsides. Axial tilt. Figure of the Earth. Nutation. Chandler wobble.
Torque. Precession.
Very-long-baseline interferometry.
Reflection seismology.
Geophone.
Seismic shadow zone. Ray tracing (physics).
Structure of the Earth. Core–mantle boundary. D” region. Mohorovičić discontinuity. Lithosphere. Asthenosphere. Mantle transition zone.
Peridotite. Olivine. Perovskite.
Seismic tomography.
Lithoprobe project.
Orogenic belt.
European Geotraverse Project.
Microseism. Seismic noise.
Elastic-rebound theory. Fault (geology).
Richter magnitude scale (…of note: “the Richter scale underestimates the size of very large earthquakes with magnitudes greater than about 8.5”). Seismic moment. Moment magnitude scale. Modified Mercalli intensity scale. European macroseismic scale.
Focal mechanism.
Transform fault. Euler pole. Triple junction.
Megathrust earthquake.
Alpine fault. East African Rift.

November 1, 2018 Posted by | Astronomy, Books, Geology, Physics

Oncology (II)

Here’s my first post in this series. Below are some more quotes and links related to the book’s coverage.

Types of Pain
1. Nociceptive pain
a. Somatic pain: Cutaneous or musculoskeletal tissue (ie, bone, soft tissue metastases). Usually well-localized, increased w/use/movement.
b. Visceral pain: Compression, obstruction, infiltration, ischemia, stretching, or inflammation of solid & hollow viscera. Diffuse, nonfocal.
2. Neuropathic pain: Direct injury/dysfunction of peripheral or CNS tissues. Typically burning, radiating, may increase at rest or w/nerve stretching.
Pain emergencies: Pain crisis, spinal cord compression, fracture, bowel obstruction, severe mucositis, acute severe side effects of opioids (addiction crisis, delirium, respiratory depression), severe pain in imminently dying pt [patient, US]
Pain mgmt at the end of life is a moral obligation to alleviate pain & unnecessary suffering & is not euthanasia. (Vacco vs. Quill, U.S. Supreme Court, 1997)”

Nausea and Vomiting
Chemotherapy-induced N/V — 3 distinct types: Acute, delayed, & anticipatory. Acute begins 1–2 h after chemotherapy & peaks at 4–6 h, delayed begins at 24 h & peaks at 48–72 h, anticipatory is conditioned response to nausea a/w previous cycles of chemotherapy”

Constipation […] affects 50% of pts w/advanced CA; majority of pts being treated w/opioid analgesics, other contributors: malignant obstruction, ↓ PO/fluid intake, inactivity, anticholinergics, electrolyte derangement”

Fatigue
Prevalence/screening — occurs in up to 75% of all solid tumor pts & up to 99% of CA pts receiving multimodality Rx. Providers should screen for fatigue at initial visit, at dx [diagnosis, US] of advanced dz [disease] & w/each chemo visit; should assess for depression & insomnia w/new dx of fatigue (JCO 2008;23:3886) […] Several common 2° causes to eval & target include anemia (most common), thyroid or adrenal insufficiency, hypogonadism”

Delirium
*Definition — disturbances in level of consciousness, attention, cognition, and/or perception developing abruptly w/fluctuations over course of d *Clinical subtypes — hyperactive, hypoactive, & mixed […] *Maximize nonpharm intervention prior to pharmacology […] *Use of antipsychotics should be geared toward short-term use for acute sx [symptoms, US] *Benzodiazepines should only be initiated for delirium as an adjunct to antipsychotics in setting of agitation despite adequate antipsychotic dosing (JCO 2011;30:1206)”

Cancer Survivorship
Overview *W/improvement in dx & tx of CA, there are millions of CA survivors, & this number is increasing
*Pts experience the normal issues of aging, w/c are compounded by the long-term effects of CA & CA tx
*CA survivors are at ↑ risk of developing morbidity & illnesses at younger age than general population due to their CA tx […] ~312,570 male & ~396,080 female CA survivors <40 y of age (Cancer Treatment and Survivorship Facts and Figures 2016–2017, ACS) *Fertility is an important issue for survivors & there is considerable concern about the possibility of impairment (Human Reproduction Update 2009;15:587)”

“Pts undergoing cancer tx are at ↑ risk for infxn [infection, US] due to disease itself or its therapies. […] *Epidemiology: 10–50% of pts w/ solid tumors & >80% of pts with hematologic tumors *Source of infxn evident in only 20–30% of febrile episodes *If identified, common sites of infxn include blood, lungs, skin, & GI tract *Regardless of microbiologic diagnosis, Rx should be started within 2 h of fever onset which improves outcomes […] [Infections in the transplant host is the] Primary cause of death in 8% of auto-HCT & up to 20% of allo-HCT recipients” [here’s a relevant link, US].

Localized prostate cancer
*Epidemiology Incidence: ~180000, most common non-skin CA (2016: U.S. est.) (CA Cancer J Clin 2016:66:7) *Annual Mortality: ~26000, 2nd highest cause of cancer death in men (2016: U.S. est) […] Mortality benefit from screening asx [asymptomatic, US] men has not been definitively established, & individualized discussion of potential benefits & harms should occur before PSA testing is offered. […] Gleason grade reflects growth/differentiation pattern & ranges from 1–5, from most to least differentiated. […] Historical (pre-PSA) 15-y prostate CA mortality risk for conservatively managed (no surgery or RT) localized Gleason 6: 18–30%, Gleason 7: 42–70%, Gleason 8–10: 60–87% (JAMA 1998:280:975)”

Bladder cancer […] Most common malignancy of the urinary system, ~77000 Pts will be diagnosed in the US in 2016, ~16000 will die of their dz. […] Presenting sx: Painless hematuria (typically intermittent & gross), irritative voiding sx (frequency, urgency, dysuria), flank or suprapubic pain (symptomatic of locally advanced dz), constitutional sx (fatigue, wt loss, failure to thrive) usually symptomatic of met [metastatic, US] dz

Links:

WHO analgesia ladder. (But see also this – US).
Renal cell carcinoma (“~63000 new cases & ~14000 deaths in the USA in 2016 […] Median age dx 64, more prevalent in men”)
Germ cell tumour (“~8720 new cases of testicular CA in the US in 2016 […] GCT is the most common CA in men ages of 15 to 35 y/o”)
Non-small-cell lung carcinoma (“225K annual cases w/ 160K US deaths, #1 cause of cancer mortality; 70% stage III/IV *Cigarette smoking: 85% of all cases, ↑ w/ intensity & duration of smoking”)
Small-cell lung cancer (“SCLC accounts for 13–15% of all lung CAs, seen almost exclusively in smokers, majority w/ extensive stage dz at dx (60–70%)”). Lambert–Eaton myasthenic syndrome (“Affects 3% of SCLC pts”).
Thymoma. Myasthenia gravis. Morvan’s syndrome. Masaoka-Koga Staging system.
Pleural mesothelioma (“Rare; ≅3000 new cases dx annually in US. Commonly develops in the 5th to 7th decade […] About 80% are a/w asbestos exposure. […] Develops decades after asbestos exposure, averaging 30–40 years […] Median survival: 10 mo. […] Screening has not been shown to ↓ mortality even in subjects w/ asbestos exposure”)
Hepatocellular Carcinoma (HCC). (“*6th most common CA worldwide (626,000/y) & 2nd leading cause of worldwide CA mortality (598,000/y) *>80% cases of HCC occur in sub-Saharan Africa, eastern & southeastern Asia, & parts of Oceania including Papua New Guinea *9th leading cause of CA mortality in US […] Viral hepatitis: HBV & HCV are the leading RFs for HCC & accounts for 75% cases worldwide […] While HCV is now the leading cause of HCC in the US, NASH is expected to become a risk factor of increasing importance in the next decade”). Milan criteria.
Cholangiocarcinoma. Klatskin tumor. Gallbladder cancer. Courvoisier’s sign.
Pancreatic cancer (“Incidence: estimated ~53,070 new cases/y & ~42,780 D/y in US (NCI SEER); 4th most common cause of CA death in US men & women; estimated to be 2nd leading cause of CA-related mortality by 2020”). Trousseau sign of malignancy. Whipple procedure.

October 21, 2018 Posted by | Books, Cancer/oncology, Gastroenterology, Medicine, Nephrology, Neurology, Psychiatry

Perception (I)

Here’s my short goodreads review of the book. In this post I’ll include some observations and links related to the first half of the book’s coverage.

“Since the 1960s, there have been many attempts to model the perceptual processes using computer algorithms, and the most influential figure of the last forty years has been David Marr, working at MIT. […] Marr and his colleagues were responsible for developing detailed algorithms for extracting (i) low-level information about the location of contours in the visual image, (ii) the motion of those contours, and (iii) the 3-D structure of objects in the world from binocular disparities and optic flow. In addition, one of his lasting achievements was to encourage researchers to be more rigorous in the way that perceptual tasks are described, analysed, and formulated and to use computer models to test the predictions of those models against human performance. […] Over the past fifteen years, many researchers in the field of perception have characterized perception as a Bayesian process […] According to Bayesian theory, what we perceive is a consequence of probabilistic processes that depend on the likelihood of certain events occurring in the particular world we live in. Moreover, most Bayesian models of perceptual processes assume that there is noise in the sensory signals and the amount of noise affects the reliability of those signals – the more noise, the less reliable the signal. Over the past fifteen years, Bayes theory has been used extensively to model the interaction between different discrepant cues, such as binocular disparity and texture gradients to specify the slant of an inclined surface.”
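
The cue-combination idea in that last sentence can be made concrete with the reliability-weighted (maximum-likelihood) rule that many of these Bayesian models reduce to when the noise is Gaussian and the prior is flat. The numbers in the sketch below are invented for illustration; the example is mine, not the book’s.

```python
def combine_cues(estimates, sigmas):
    """Reliability-weighted combination of independent Gaussian cues.

    Each cue is weighted by its inverse variance, and the combined estimate
    is more reliable (smaller sigma) than either cue on its own."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    combined = sum(w * x for w, x in zip(weights, estimates)) / total
    combined_sigma = (1.0 / total) ** 0.5
    return combined, combined_sigma

# Hypothetical surface-slant example: binocular disparity says 30 deg
# (sigma = 5 deg), texture gradients say 40 deg but are noisier (sigma = 10 deg).
slant, sigma = combine_cues([30.0, 40.0], [5.0, 10.0])
print(f"combined slant estimate: {slant:.1f} deg (sigma = {sigma:.1f} deg)")
# -> 32.0 deg: the percept is pulled towards the more reliable cue, and the
#    combined uncertainty (about 4.5 deg) is lower than either cue alone.
```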

“All surfaces have the property of reflectance — that is, the extent to which they reflect (rather than absorb) the incident illumination — and those reflectances can vary between 0 per cent and 100 per cent. Surfaces can also be selective in the particular wavelengths they reflect or absorb. Our colour vision depends on these selective reflectance properties […]. Reflectance characteristics describe the physical properties of surfaces. The lightness of a surface refers to a perceptual judgement of a surface’s reflectance characteristic — whether it appears as black or white or some grey level in between. Note that we are talking about the perception of lightness — rather than brightness — which refers to our estimate of how much light is coming from a particular surface or is emitted by a source of illumination. The perception of surface lightness is one of the most fundamental perceptual abilities because it allows us not only to differentiate one surface from another but also to identify the real-world properties of a particular surface. Many textbooks start with the observation that lightness perception is a difficult task because the amount of light reflected from a particular surface depends on both the reflectance characteristic of the surface and the intensity of the incident illumination. For example, a piece of black paper under high illumination will reflect back more light to the eye than a piece of white paper under dim illumination. As a consequence, lightness constancy — the ability to correctly judge the lightness of a surface under different illumination conditions — is often considered to be an ‘achievement’ of the perceptual system. […] The alternative starting point for understanding lightness perception is to ask whether there is something that remains constant or invariant in the patterns of light reaching the eye with changes of illumination. In this case, it is the relative amount of light reflected off different surfaces. Consider two surfaces that have different reflectances—two shades of grey. The actual amount of light reflected off each of the surfaces will vary with changes in the illumination but the relative amount of light reflected off the two surfaces remains the same. This shows that lightness perception is necessarily a spatial task and hence a task that cannot be solved by considering one particular surface alone. Note that the relative amount of light reflected off different surfaces does not tell us about the absolute lightnesses of different surfaces—only their relative lightnesses […] Can our perception of lightness be fooled? Yes, of course it can and the ways in which we make mistakes in our perception of the lightnesses of surfaces can tell us much about the characteristics of the underlying processes.”
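
A minimal numerical sketch of the invariance argument (mine, not the author’s): to a first approximation the light reaching the eye from a matte surface is reflectance × illumination, so the absolute values change with the illumination while the ratio between two surfaces does not.

```python
# Two grey surfaces with different reflectances (fraction of light reflected).
dark_grey, light_grey = 0.2, 0.6

for illumination in (100.0, 1000.0):   # arbitrary illumination units
    from_dark = dark_grey * illumination
    from_light = light_grey * illumination
    print(f"illumination {illumination:6.0f}: luminances {from_dark:6.0f} and "
          f"{from_light:6.0f}, ratio = {from_light / from_dark:.1f}")

# The ratio stays at 3.0 whatever the illumination: that is the invariant a
# spatial comparison can exploit. Absolute values behave differently; e.g. black
# paper (reflectance 0.05) under bright light (1000 units) sends back 50 units,
# more than white paper (reflectance 0.9) under dim light (40 units), which
# sends back only 36.
```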

“From a survival point of view, the ability to differentiate objects and surfaces in the world by their ‘colours’ (spectral reflectance characteristics) can be extremely useful […] Most species of mammals, birds, fish, and insects possess several different types of receptor, each of which has a different spectral sensitivity function […] having two types of receptor with different spectral sensitivities is the minimum necessary for colour vision. This is referred to as dichromacy and the majority of mammals are dichromats with the exception of the old world monkeys and humans. […] The only difference between lightness and colour perception is that in the latter case we have to consider the way a surface selectively reflects (and absorbs) different wavelengths, rather than just a surface’s average reflectance over all wavelengths. […] The similarities between the tasks of extracting lightness and colour information mean that we can ask a similar question about colour perception [as we did about lightness perception] – what is the invariant information that could specify the reflectance characteristic of a surface? […] The information that is invariant under changes of spectral illumination is the relative amounts of long, medium, and short wavelength light reaching our eyes from different surfaces in the scene. […] the successful identification and discrimination of coloured surfaces is dependent on making spatial comparisons between the amounts of short, medium, and long wavelength light reaching our eyes from different surfaces. As with lightness perception, colour perception is necessarily a spatial task. It follows that if a scene is illuminated by the light of just a single wavelength, the appropriate spatial comparisons cannot be made. This can be demonstrated by illuminating a real-world scene containing many different coloured objects with yellow, sodium light that contains only a single wavelength. All objects, whatever their ‘colours’, will only reflect back to the eye different intensities of that sodium light and hence there will only be absolute but no relative differences between the short, medium, and long wavelength lightness records. There is a similar, but less dramatic, effect on our perception of colour when the spectral characteristics of the illumination are restricted to just a few wavelengths, as is the case with fluorescent lighting.”

“Consider a single receptor mechanism, such as a rod receptor in the human visual system, that responds to a limited range of wavelengths—referred to as the receptor’s spectral sensitivity function […]. This hypothetical receptor is more sensitive to some wavelengths (around 550 nm) than others and we might be tempted to think that a single type of receptor could provide information about the wavelength of the light reaching the receptor. This is not the case, however, because an increase or decrease in the response of that receptor could be due to either a change in the wavelength or an increase or decrease in the amount of light reaching the receptor. In other words, the output of a given receptor or receptor type perfectly confounds changes in wavelength with changes in intensity because it has only one way of responding — that is, more or less. This is Rushton’s Principle of Univariance — there is only one way of varying or one degree of freedom. […] On the other hand, if we consider a visual system with two different receptor types, one more sensitive to longer wavelengths (L) and the other more sensitive to shorter wavelengths (S), there are two degrees of freedom in the system and thus the possibility of signalling our two independent variables — wavelength and intensity […] it is quite possible to have a colour visual system that is based on just two receptor types. Such a colour visual system is referred to as dichromatic.”
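
Univariance is easy to demonstrate with a few lines of code. In the toy sketch below the spectral sensitivity curves are made-up Gaussians rather than real photoreceptor data: a single receptor’s output confounds wavelength with intensity, whereas the ratio of two receptors’ outputs varies with wavelength but not with intensity.

```python
import math

def sensitivity(wavelength_nm, peak_nm, width_nm=60.0):
    """Made-up Gaussian spectral sensitivity curve for a toy receptor type."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def response(intensity, wavelength_nm, peak_nm):
    """Univariant response: intensity scaled by sensitivity at that wavelength."""
    return intensity * sensitivity(wavelength_nm, peak_nm)

# One receptor type (peak 550 nm): a dim light at the preferred wavelength and a
# brighter light at 600 nm produce identical outputs; this is the confound.
dim_550 = response(1.0, 550, peak_nm=550)
bright_600 = response(1.0 / sensitivity(600, 550), 600, peak_nm=550)
print(f"single receptor: {dim_550:.3f} vs {bright_600:.3f} (indistinguishable)")

# Two receptor types (S peaking at 450 nm, L at 600 nm): the ratio of their
# responses to a 550 nm light is the same at every intensity, so wavelength is
# no longer confounded with intensity.
for intensity in (1.0, 10.0):
    ratio = response(intensity, 550, 600) / response(intensity, 550, 450)
    print(f"intensity {intensity:4.1f}: L/S response ratio = {ratio:.2f}")
```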

“So why is the human visual system trichromatic? The answer can be found in a phenomenon known as metamerism. So far, we have restricted our discussion to the effect of a single wavelength on our dichromatic visual system: for example, a single wavelength of around 550 nm that stimulated both the long and short receptor types about equally […]. But what would happen if we stimulated our dichromatic system with light of two different wavelengths at the same time — one long wavelength and one short wavelength? With a suitable choice of wavelengths, this combination of wavelengths would also have the effect of stimulating the two receptor types about equally […] As a consequence, the output of the system […] with this particular mixture of wavelengths would be indistinguishable from that created by the single wavelength of 550 nm. These two indistinguishable stimulus situations are referred to as metamers and a little thought shows that there would be many thousands of combinations of wavelength that produce the same activity […] in a dichromatic visual system. As a consequence, all these different combinations of wavelengths would be indistinguishable to a dichromatic observer, even though they were produced by very different combinations of wavelengths. […] Is there any way of avoiding the problem of metamerism? The answer is no but we can make things better. If a visual system had three receptor types rather than two, then many of the combinations of wavelengths that produce an identical pattern of activity in two of the mechanisms (L and S) would create a different amount of activity in our third receptor type (M) that is maximally sensitive to medium wavelengths. Hence the number of indistinguishable metameric matches would be significantly reduced but they would never be eliminated. Using the same logic, it follows that a further increase in the number of receptor types (beyond three) would reduce the problem of metamerism even more […]. There would, however, also be a cost. Having more distinct receptor types in a finite-sized retina would increase the average spacing between the receptors of the same type and thus make our acuity for fine detail significantly poorer. There are many species, such as dragonflies, with more than three receptor types in their eyes but the larger number of receptor types typically serves to increase the range of wavelengths to which the animal is sensitive into the infra-red or ultra-violet parts of the spectrum, rather than to reduce the number of metamers. […] the sensitivity of the short wavelength receptors in the human eye only extends to ~540 nm — the S receptors are insensitive to longer wavelengths. This means that human colour vision is effectively dichromatic for combinations of wavelengths above 540 nm. In addition, there are no short wavelength cones in the central fovea of the human retina, which means that we are also dichromatic in the central part of our visual field. The fact that we are unaware of this lack of colour vision is probably due to the fact that our eyes are constantly moving. […] It is […] important to appreciate that the description of the human colour visual system as trichromatic is not a description of the number of different receptor types in the retina – it is a property of the whole visual system.”
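
And here is the metamerism problem in the same toy setting (the sensitivity values are invented, not measured cone curves): a suitable mixture of short- and long-wavelength light excites the S and L receptors exactly as a single 550 nm light does, so a dichromat cannot tell the two stimuli apart, while a third, mid-wavelength receptor type breaks the match.

```python
# Invented receptor sensitivities at three wavelengths: 450 nm, 550 nm, 650 nm.
SENS = {"S": (0.88, 0.30, 0.02),
        "L": (0.02, 0.30, 0.88),
        "M": (0.20, 0.80, 0.20)}

def excitations(light, receptors):
    """Excitation of each receptor type by a light given as intensities at
    (450 nm, 550 nm, 650 nm); values rounded for readability."""
    return tuple(round(sum(i * s for i, s in zip(light, SENS[r])), 2)
                 for r in receptors)

single_550 = (0.0, 1.0, 0.0)          # pure 550 nm light
mixture = (1.0 / 3, 0.0, 1.0 / 3)     # mixture of 450 nm and 650 nm light

print("dichromat  (S, L):   ", excitations(single_550, ("S", "L")),
      excitations(mixture, ("S", "L")))
print("trichromat (S, L, M):", excitations(single_550, ("S", "L", "M")),
      excitations(mixture, ("S", "L", "M")))
# The dichromat sees (0.3, 0.3) for both stimuli: a metameric match.
# The M receptor responds 0.8 to the single wavelength but only ~0.13 to the
# mixture, so the trichromat can tell them apart.
```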

“Recent research has shown that although the majority of humans are trichromatic there can be significant differences in the precise matches that individuals make when matching colour patches […] the absence of one receptor type will result in a greater number of colour confusions than normal and this does have a significant effect on an observer’s colour vision. Protanopia is the absence of long wavelength receptors, deuteranopia the absence of medium wavelength receptors, and tritanopia the absence of short wavelength receptors. These three conditions are often described as ‘colour blindness’ but this is a misnomer. We are all colour blind to some extent because we all suffer from colour metamerism and fail to make discriminations that would be very apparent to any biological or machine vision system with a greater number of receptor types. For example, most stomatopod crustaceans (mantis shrimps) have twelve different visual pigments and they also have the ability to detect both linear and circularly polarized light. What I find interesting is that we believe, as trichromats, that we have the ability to discriminate all the possible shades of colour (reflectance characteristics) that exist in our world. […] we are typically unaware of the limitations of our visual systems because we have no way of comparing what we see normally with what would be seen by a ‘better’ visual system.”

“We take it for granted that we are able to segregate the visual input into separate objects and distinguish objects from their backgrounds and we rarely make mistakes except under impoverished conditions. How is this possible? In many cases, the boundaries of objects are defined by changes of luminance and colour and these changes allow us to separate or segregate an object from its background. But luminance and colour changes are also present in the textured surfaces of many objects and therefore we need to ask how it is that our visual system does not mistake these luminance and colour changes for the boundaries of objects. One answer is that object boundaries have special characteristics. In our world, most objects and surfaces are opaque and hence they occlude (cover) the surface of the background. As a consequence, the contours of the background surface typically end—they are ‘terminated’—at the boundary of the occluding object or surface. Quite often, the occluded contours of the background are also revealed at the opposite side of the occluding surface because they are physically continuous. […] The impression of occlusion is enhanced if the occluded contours contain a range of different lengths, widths, and orientations. In the natural world, many animals use colour and texture to camouflage their boundaries as well as to fool potential predators about their identity. […] There is an additional source of information — relative motion — that can be used to segregate a visual scene into objects and their backgrounds and to break any camouflage that might exist in a static view. A moving, opaque object will progressively occlude and dis-occlude (reveal) the background surface so that even a well-camouflaged, moving animal will give away its location. Hence it is not surprising that a very common and successful strategy of many animals is to freeze in order not to be seen. Unless the predator has a sophisticated visual system to break the pattern or colour camouflage, the prey will remain invisible.”

Some links:

Perception.
Ames room. Inverse problem in optics.
Hermann von Helmholtz. Richard Gregory. Irvin Rock. James Gibson. David Marr. Ewald Hering.
Optical flow.
La dioptrique.
Necker cube. Rubin’s vase.
Perceptual constancy. Texture gradient.
Ambient optic array.
Affordance.
Luminance.
Checker shadow illusion.
Shape from shading/Photometric stereo.
Colour vision. Colour constancy. Retinex model.
Cognitive neuroscience of visual object recognition.
Motion perception.
Horace Barlow. Bernhard Hassenstein. Werner E. Reichardt. Sigmund Exner. Jan Evangelista Purkyně.
Phi phenomenon.
Motion aftereffect.
Induced motion.

October 14, 2018 Posted by | Biology, Books, Ophthalmology, Physics, Psychology

Oncology (I)

I really disliked the ‘Pocket…’ part of this book, so I’ll sort of pretend to overlook this aspect also in my coverage of the book here. This’ll be a hard thing to do, given the way the book is written – I refer to my goodreads review for details; here I’ll include only one illustrative quote from that review:

“In terms of content, the book probably compares favourably with many significantly longer oncology texts (mainly, but certainly not only, because of the publication date). In terms of readability it compares unfavourably to an Egyptian translation of Alan Sokal’s 1996 article in Social Text, if it were translated by a 12-year old dyslexic girl.”

I don’t yet know in how much detail I’ll blog the book; this may end up being the only post about the book, or I may decide to post a longer sequence of posts. The book is hard to blog, which is an argument against covering it in detail – and also the reason why I haven’t already blogged it – but some of the content included in the book is really, really nice stuff to know, which is a strong argument in favour of covering at least some of the material here. The book has a lot of stuff, so regardless of the level of detail of my future coverage a lot of interesting stuff will of necessity have been left out.

My coverage below includes some observations and links related to the first 100 pages of the book.

“Understanding Radiation Response: The 4 Rs of Radiobiology
Repair of sublethal damage
Reassortment of cells w/in the cell cycle
Repopulation of cells during the course of radiotherapy
Reoxygenation of hypoxic cells […]

*Oxygen enhances DNA damage induced by free radicals, thereby facilitating the indirect action of IR [ionizing radiation, US] *Biologically equivalent dose can vary by a factor of 2–3 depending upon the presence or absence of oxygen (referred to as the oxygen enhancement ratio) *Poorly oxygenated postoperative beds frequently require higher doses of RT than preoperative RT [radiation therapy] […] Chemotherapy is frequently used sequentially or concurrently w/radiotherapy to maximize therapeutic benefit. This has improved pt outcomes although also a/w ↑ overall tox. […] [Many chemotherapeutic agents] show significant synergy with RT […] Mechanisms for synergy vary widely: Include cell cycle effects, hypoxic cell sensitization, & modulation of the DNA damage response”.
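
The ‘factor of 2–3’ point can be illustrated with the standard linear-quadratic cell-survival model, a textbook radiobiology formula that is not spelled out in this book, with hypoxia treated (as is commonly done) as a dose-modifying factor equal to the oxygen enhancement ratio. The α and β values in the sketch are purely illustrative.

```python
import math

def surviving_fraction(dose_gy, alpha=0.3, beta=0.03, oer=1.0):
    """Linear-quadratic survival, S = exp(-alpha*d - beta*d^2), with hypoxia
    treated as dividing the effective dose by the oxygen enhancement ratio.
    The alpha/beta values are illustrative, not tissue-specific data."""
    d = dose_gy / oer
    return math.exp(-alpha * d - beta * d * d)

dose = 2.0  # a conventional 2 Gy fraction
print(f"well-oxygenated, 2 Gy:   {surviving_fraction(dose):.2f} of cells survive")
print(f"hypoxic (OER 2.5), 2 Gy: {surviving_fraction(dose, oer=2.5):.2f} survive")
print(f"hypoxic (OER 2.5), 5 Gy: {surviving_fraction(dose * 2.5, oer=2.5):.2f} survive")
# With an OER of 2.5 the hypoxic cells need roughly 2.5x the dose for the same
# kill, which is the sense in which the biologically equivalent dose varies by
# a factor of 2-3 with oxygenation.
```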

“Specific dose–volume relationships have been linked to the risk of late organ tox. […] *Dose, volume, underlying genetics, and age of the pt at the time of RT are critical determinants of the risk for 2° malignancy *The likelihood of 2° CA is correlated w/dose, but there is no threshold dose below which there is no additional risk of 2° malignancy *Latent period for radiation-induced solid tumors is generally between 10 and 60 y […]. Latent period for leukemias […] is shorter — peak between 5 & 7 y.”

“The immune system plays an important role in CA surveillance; Rx’s that modulate & amplify the immune system are referred to as immunotherapies […] tumors escape the immune system via loss of molecules on tumor cells important for immune activation […]; tumors can secrete immunosuppressing cytokines (IL-10 & TGF-β) & downregulate IFN-γ; in addition, tumors often express nonmutated self-Ag, w/c the immune system will, by definition, not react against; tumors can express molecules that inhibit T-cell function […] Ubiquitous CD47 (Don’t eat me signal) with ↑ expression on tumor cells mediates escape from phagocytosis. *Tumor microenvironment — immune cells are found in tumors, the exact composition of these cells has been a/w [associated with, US] pt outcomes; eg, high concentration of tumor-infiltrating lymphocytes (CD8+ cells) are a/w better outcomes & ↑ response to chemotherapy, Tregs & myeloid-derived suppressor cells are a/w worse outcomes, the exact role of Th17 in tumors is still being elucidated; the milieu of cytokines & chemokines also plays a role in outcome; some cytokines (VEGF, IL-1, IL-8) lead to endothelial cell proliferation, migration, & activation […] Expression of PD-L1 in tumor microenvironment can be indicator of improved likelihood of response to immune checkpoint blockade. […] Tumor mutational load correlates w/increased response to immunotherapy (NEJM; 2014;371:2189.).”

“Over 200 hereditary CA susceptibility syndromes, most are rare […]. Inherited CAs arise from highly penetrant germline mts [mutations, US]; “familial” CAs may be caused by interaction of low-penetrance genes, gene–environment interactions, or both. […] Genetic testing should be done based on individual’s probability of being a mt carrier & after careful discussion & informed consent”.

Pharmacogenetics: Effect of heritable genes on response to drugs. Study of single genes & interindividual differences in drug metabolizing enzymes. Pharmacogenomics: Effect of inherited & acquired genetic variation on drug response. Study of the functions & interactions of all genes in the genome & how the overall variability of drug response may be used to predict the right tx in individual pts & to design new drugs. Polymorphisms: Common variations in a DNA sequence that may lead to ↓ or ↑ activity of the encoded gene (SNP, micro- & minisatellites). SNPs: Single nucleotide polymorphisms that may cause an amino acid exchange in the encoded protein, account for >90% of genetic variation in the human genome.”

Tumor lysis syndrome [TLS] is an oncologic emergency caused by electrolyte abnormalities a/w spontaneous and/or tx-induced cell death that can be potentially fatal. […] 4 key electrolyte abnormalities 2° to excessive tumor/cell lysis: *Hyperkalemia *Hyperphosphatemia *Hypocalcemia *Hyperuricemia (2° to catabolism of nucleic acids) […] Common Malignancies Associated with a High Risk of Developing TLS in Adult Patients [include] *Acute leukemias [and] *High-grade lymphomas such as Burkitt lymphoma & DLBCL […] [Disease] characteristics a/w TLS risk: Rapidly progressive, chemosensitive, myelo- or lymphoproliferative [disease] […] [Patient] characteristics a/w TLS risk: *Baseline impaired renal function, oliguria, exposure to nephrotoxins, hyperuricemia *Volume depletion/inadequate hydration, acidic urine”.

Hypercalcemia [affects] ~10–30% of all pts w/malignancy […] Symptoms: Polyuria/polydipsia, intravascular volume depletion, AKI, lethargy, AMS [Altered Mental Status, US], rarely coma/seizures; N/V [nausea/vomiting, US] […] Osteolytic Bone Lesions [are seen in] ~20% of all hyperCa of malignancy […] [Treat] underlying malignancy, only way to effectively treat, all other tx are temporizing”.

“National Consensus Project definition: Palliative care means patient and family-centered care that optimizes quality of life by anticipating, preventing, and treating suffering. Palliative care throughout the continuum of illness involves addressing physical, intellectual, emotional, social, and spiritual needs to facilitate patient autonomy, access to information, and choice.” […] *Several RCTs have supported the integration of palliative care w/oncologic care, but specific interventions & models of care have varied. Expert panels at NCCN & ASCO recently reviewed the data to release evidence-based guidelines. *NCCN guidelines (2016): “Palliative care should be initiated by the primary oncology team and then augmented by collaboration with an interdisciplinary team of palliative care experts… All cancer patients should be screened for palliative care needs at their initial visit, at appropriate intervals, and as clinically indicated.” *ASCO guideline update (2016): “Inpatients and outpatients with advanced cancer should receive dedicated palliative care services, early in the disease course, concurrent with active tx. Referral of patients to interdisciplinary palliative care teams is optimal […] Essential Components of Palliative Care (ASCO) *Rapport & relationship building w/pts & family caregivers *Symptom, distress, & functional status mgmt (eg, pain, dyspnea, fatigue, sleep disturbance, mood, nausea, or constipation) *Exploration of understanding & education about illness & prognosis *Clarification of tx goals *Assessment & support of coping needs (eg, provision of dignity therapy) *Assistance w/medical decision making *Coordination w/other care providers *Provision of referrals to other care providers as indicated […] Useful Communication Tips *Use open-ended questions to elicit pt concerns *Clarify how much information the pt would like to know […] Focus on what can be done (not just what can’t be done) […] Remove the phrase “do everything” from your medical vocabulary […] Redefine hope by supporting realistic & achievable goals […] make empathy explicit”.

Some links:

Radiation therapy.
Brachytherapy.
External beam radiotherapy.
Image-guided radiation therapy.
Stereotactic Radiosurgery.
Total body irradiation.
Cancer stem cell.
Cell cycle.
Carcinogenesis. Oncogene. Tumor suppressor gene. Principles of Cancer Therapy: Oncogene and Non-oncogene Addiction.
Cowden syndrome. Peutz–Jeghers syndrome. Familial Atypical Multiple Mole Melanoma Syndrome. Li–Fraumeni syndrome. Lynch syndrome. Turcot syndrome. Muir–Torre syndrome. Von Hippel–Lindau disease. Gorlin syndrome. Werner syndrome. Birt–Hogg–Dubé syndrome. Neurofibromatosis type I. Neurofibromatosis type 2.
Knudson hypothesis.
DNA sequencing.
Cytogenetics.
Fluorescence in situ hybridization.
CAR T Cell therapy.
Antimetabolite. Alkylating antineoplastic agent. Antimicrotubule agents/mitotic inhibitors. Chemotherapeutic agents. Topoisomerase inhibitor. Monoclonal antibodies. Bisphosphonates. Proteasome inhibitors. [The book covers all of these agents, and others I for one reason or another decided not to include, in great detail, listing many different types of agents and including notes on dosing, pharmacokinetics & pharmacodynamics, associated adverse events and drug interactions etc. These parts of the book were very interesting, but they are impossible to blog – US].
Syndrome of inappropriate antidiuretic hormone secretion.
Acute lactic acidosis (“Often seen w/liver mets or rapidly dividing heme malignancies […] High mortality despite aggressive tx [treatment]”).
Superior vena cava syndrome.

October 12, 2018 Posted by | Biology, Books, Cancer/oncology, Genetics, Immunology, Medicine, Pharmacology

Principles of memory (II)

I have added a few more quotes from the book below:

Watkins and Watkins (1975, p. 443) noted that cue overload is “emerging as a general principle of memory” and defined it as follows: “The efficiency of a functional retrieval cue in effecting recall of an item declines as the number of items it subsumes increases.” As an analogy, think of a person’s name as a cue. If you know only one person named Katherine, the name by itself is an excellent cue when asked how Katherine is doing. However, if you also know Cathryn, Catherine, and Kathryn, then it is less useful in specifying which person is the focus of the question. More formally, a number of studies have shown experimentally that memory performance systematically decreases as the number of items associated with a particular retrieval cue increases […] In many situations, a decrease in memory performance can be attributed to cue overload. This may not be the ultimate explanation, as cue overload itself needs an explanation, but it does serve to link a variety of otherwise disparate findings together.”

Memory, like all other cognitive processes, is inherently constructive. Information from encoding and cues from retrieval, as well as generic information, are all exploited to construct a response to a cue. Work in several areas has long established that people will use whatever information is available to help reconstruct or build up a coherent memory of a story or an event […]. However, although these strategies can lead to successful and accurate remembering in some circumstances, the same processes can lead to distortion or even confabulation in others […]. There are a great many studies demonstrating the constructive and reconstructive nature of memory, and the literature is quite well known. […] it is clear that recall of events is deeply influenced by a tendency to reconstruct them using whatever information is relevant and to repair holes or fill in the gaps that are present in memory with likely substitutes. […] Given that memory is a reconstructive process, it should not be surprising to find that there is a large literature showing that people have difficulty distinguishing between memories of events that happened and memories of events that did not happen […]. In a typical reality monitoring experiment […], subjects are shown pictures of common objects. Every so often, instead of a picture, the subjects are shown the name of an object and are asked to create a mental image of the object. The test involves presenting a list of object names, and the subject is asked to judge whether they saw the item (i.e., judge the memory as “real”) or whether they saw the name of the object and only imagined seeing it (i.e., judge the memory as “imagined”). People are more likely to judge imagined events as real than real events as imagined. The likelihood that a memory will be judged as real rather than imagined depends upon the vividness of the memory in terms of its sensory quality, detail, plausibility, and coherence […]. What this means is that there is not a firm line between memories for real and imagined events: if an imagined event has enough qualitative features of a real event it is likely to be judged as real.”

“One hallmark of reconstructive processes is that in many circumstances they aid in memory retrieval because they rely on regularities in the world. If we know what usually happens in a given circumstance, we can use that information to fill in gaps that may be present in our memory for that episode. This will lead to a facilitation effect in some cases but will lead to errors in cases in which the most probable response is not the correct one. However, if we take this standpoint, we must predict that the errors that are made when using reconstructive processes will not be random; in fact, they will display a bias toward the most likely event. This sort of mechanism has been demonstrated many times in studies of schema-based representations […], and language production errors […] but less so in immediate recall. […] Each time an event is recalled, the memory is slightly different. Because of the interaction between encoding and retrieval, and because of the variations that occur between two different retrieval attempts, the resulting memories will always differ, even if only slightly.”

In this chapter we discuss the idea that a task or a process can be a “pure” measure of memory, without contamination from other hypothetical memory stores or structures, and without contributions from other processes. Our impurity principle states that tasks and processes are not pure, and therefore one cannot separate out the contributions of different memory stores by using tasks thought to tap only one system; one cannot count on subjects using only one process for a particular task […]. Our principle follows from previous arguments articulated by Kolers and Roediger (1984) and Crowder (1993), among others, that because every event recruits slightly different encoding and retrieval processes, there is no such thing as “pure” memory. […] The fundamental issue is the extent to which one can determine the contribution of a particular memory system or structure or process to performance on a particular memory task. There are numerous ways of assessing memory, and many different ways of classifying tasks. […] For example, if you are given a word fragment and asked to complete it with the first word that pops in your head, you are free to try a variety of strategies. […] Very different types of processing can be used by subjects even when given the same type of test or cue. People will use any and all processes to help them answer a question.”

“A free recall test typically provides little environmental support. A list of items is presented, and the subject is asked to recall which items were on the list. […] The experimenter simply says, “Recall the words that were on the list,” […] A typical recognition test provides more environmental support. Although a comparable list of items might have been presented, and although the subject is asked again about memory for an item in context, the subject is provided with a more specific cue, and knows exactly how many items to respond to. Some tests, such as word fragment completion and general knowledge questions, offer more environmental support. These tests provide more targeted cues, and often the cues are unique […] One common processing distinction involves the aspects of the stimulus that are focused on or are salient at encoding and retrieval: Subjects can focus more on an item’s physical appearance (data driven processing) or on an item’s meaning (conceptually driven processing […]). In general, performance on tasks such as free recall that offer little environmental support is better if the rememberer uses conceptual rather than perceptual processing at encoding. Although there is perceptual information available at encoding, there is no perceptual information provided at test so data-driven processes tend not to be appropriate. Typical recognition and cued-recall tests provide more specific cues, and as such, data-driven processing becomes more appropriate, but these tasks still require the subject to discriminate which items were presented in a particular specific context; this is often better accomplished using conceptually driven processing. […] In addition to distinctions between data driven and conceptually driven processing, another common distinction is between an automatic retrieval process, which is usually referred to as familiarity, and a nonautomatic process, usually called recollection […]. Additional distinctions abound. Our point is that very different types of processing can be used by subjects on a particular task, and that tasks can differ from one another on a variety of different dimensions. In short, people can potentially use almost any combination of processes on any particular task.”

Immediate serial recall is basically synonymous with memory span. In one of the first reviews of this topic, Blankenship (1938, p. 2) noted that “memory span refers to the ability of an individual to reproduce immediately, after one presentation, a series of discrete stimuli in their original order.” The primary use of memory span was not so much to measure the capacity of a short-term memory system, but rather as a measure of intellectual abilities […]. Early on, however, it was recognized that memory span, whatever it was, varied as a function of a large number of variables […], and could even be increased substantially by practice […]. Nonetheless, memory span became increasingly seen as a measure of the capacity of a short-term memory system that was distinct from long-term memory. Generally, most individuals can recall about 7 ± 2 items (Miller, 1956) or the number of items that can be pronounced in about 2 s (Baddeley, 1986) without making any mistakes. Does immediate serial recall (or memory span) measure the capacity of short-term (or working) memory? The currently available evidence suggests that it does not. […] The main difficulty in attempting to construct a “pure” measure of immediate memory capacity is that […] the influence of previously acquired knowledge is impossible to avoid. There are numerous contributions of long-term knowledge not only to memory span and immediate serial recall […] but to other short-term tasks as well […] Our impurity principle predicts that when distinctions are made between types of processing (e.g., conceptually driven versus data driven; familiarity versus recollection; automatic versus conceptual; item specific versus relational), each of those individual processes will not be pure measures of memory.”

“Over the past 20 years great strides have been made in noninvasive techniques for measuring brain activity. In particular, PET and fMRI studies have allowed us to obtain an on-line glimpse into the hemodynamic changes that occur in the brain as stimuli are being processed, memorized, manipulated, and recalled. However, many of these studies rely on subtractive logic that explicitly assumes that (a) there are different brain areas (structures) subserving different cognitive processes and (b) we can subtract out background or baseline activity and determine which areas are responsible for performing a particular task (or process) by itself. There have been some serious challenges to these underlying assumptions […]. A basic assumption is that there is some baseline activation that is present all of the time and that the baseline is built upon by adding more activation. Thus, when the baseline is subtracted out, what is left is a relatively pure measure of the brain areas that are active in completing the higher-level task. One assumption of this method is that adding a second component to the task does not affect the simple task. However, this assumption does not always hold true. […] Even if the additive factors logic were correct, these studies often assume that a task is a pure measure of one process or another. […] Again, the point is that humans will utilize whatever resources they can recruit in order to perform a task. Individuals using different retrieval strategies (e.g., visualization, verbalization, lax or strict decision criteria, etc.) show very different patterns of brain activation even when performing the same memory task (Miller & Van Horn, 2007). This makes it extremely dangerous to assume that any task is made up of purely one process. Even though many researchers involved in neuroimaging do not make task purity assumptions, these examples “illustrate the widespread practice in functional neuroimaging of interpreting activations only in terms of the particular cognitive function being investigated (Cabeza et al., 2003, p. 390).” […] We do not mean to suggest that these studies have no value — they clearly do add to our knowledge of how cognitive functioning works — but, instead, would like to urge more caution in the interpretation of localization studies, which are sometimes taken as showing that an activated area is where some unique process takes place.”

October 6, 2018 Posted by | Biology, Books, Psychology | Leave a comment

Circadian Rhythms (II)

Below I have added some more observations from the book, as well as some links of interest.

“Most circadian clocks make use of a sun-based mechanism as the primary synchronizing (entraining) signal to lock the internal day to the astronomical day. For the better part of four billion years, dawn and dusk has been the main zeitgeber that allows entrainment. Circadian clocks are not exactly 24 hours. So to prevent daily patterns of activity and rest from drifting (freerunning) over time, light acts rather like the winder on a mechanical watch. If the clock is a few minutes fast or slow, turning the winder sets the clock back to the correct time. Although light is the critical zeitgeber for much behaviour, and provides the overarching time signal for the circadian system of most organisms, it is important to stress that many, if not all cells within an organism possess the capacity to generate a circadian rhythm, and that these independent oscillators are regulated by a variety of different signals which, in turn, drive countless outputs […]. Colin Pittendrigh was one of the first to study entrainment, and what he found in Drosophila has been shown to be true across all organisms, including us. For example, if you keep Drosophila, or a mouse or bird, in constant darkness it will freerun. If you then expose the animal to a short pulse of light at different times the shifting (phase shifting) effects on the freerunning rhythm vary. Light pulses given when the clock ‘thinks’ it is daytime (subjective day) will have little effect on the clock. However, light falling during the first half of the subjective night causes the animal to delay the start of its activity the following day, while light exposure during the second half of the subjective night advances activity onset. Pittendrigh called this the ‘phase response curve’ […] Remarkably, the PRC of all organisms looks very similar, with light exposure around dusk and during the first half of the night causing a delay in activity the next day, while light during the second half of the night and around dawn generates an advance. The precise shape of the PRC varies between species. Some have large delays and small advances (typical of nocturnal species) while others have small delays and big advances (typical of diurnal species). Light at dawn and dusk pushes and pulls the freerunning rhythm towards an exactly 24-hour cycle. […] Light can act directly to modify behaviour. In nocturnal rodents such as mice, light encourages these animals to seek shelter, reduce activity, and even sleep, while in diurnal species light promotes alertness and vigilance. So circadian patterns of activity are not only entrained by dawn and dusk but also driven directly by light itself. This direct effect of light on activity has been called ‘masking’, and combines with the predictive action of the circadian system to restrict activity to that period of the light/dark cycle to which the organism has evolved and is optimally adapted.”
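
To make the push–pull action of the phase response curve concrete, here is a minimal numerical sketch of my own (not from the book), assuming an intrinsic period of 24.5 hours and a crude PRC in which the daily light-induced correction is proportional to the phase error but capped at one hour per day — both numbers are illustrative assumptions, not values from the text:

```python
# Toy model of entrainment by a phase response curve (all values are illustrative assumptions).
tau = 24.5    # assumed intrinsic (freerunning) period, hours
phase = 3.0   # initial error of the internal clock relative to external time, hours

def prc_correction(phase_error):
    """Crude PRC: light around dawn/dusk corrects the phase error, capped at 1 h/day."""
    return max(-1.0, min(1.0, -phase_error))

for day in range(1, 11):
    phase += (tau - 24.0) + prc_correction(phase)   # daily drift plus light correction
    print(f"day {day:2d}: clock is {phase:+.2f} h out of phase with the external day")
```

Under these made-up parameters the freerunning drift of +0.5 hours per day is absorbed within a few simulated days, and the clock settles at a small, stable phase angle relative to the external day — the sense in which light at dawn and dusk ‘pushes and pulls’ the rhythm onto 24 hours.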

“[B]irds, reptiles, amphibians, and fish (but not mammals) have ‘extra-ocular’ photoreceptors located within the pineal complex, hypothalamus, and other areas of the brain, and like the invertebrates, eye loss in many cases has little impact upon the ability of these animals to entrain. […] Mammals are strikingly different from all other vertebrates as they possess photoreceptor cells only within their eyes. Eye loss in all groups of mammals […] abolishes the capacity of these animals to entrain their circadian rhythms to the light/dark cycle. But astonishingly, the visual cells of the retina – the rods and cones – are not required for the detection of the dawn/dusk signal. There exists a third class of photoreceptors within the eye […] Studies in the late 1990s by Russell Foster and his colleagues showed that mice lacking all their rod and cone photoreceptors could still regulate their circadian rhythms to light perfectly normally. But when their eyes were covered the ability to entrain was lost […] work on the rodless/coneless mouse, along with [other] studies […], clearly demonstrated that the mammalian retina contains a small population of photosensitive retinal ganglion cells or pRGCs, which comprise approximately 1-2 per cent of all retinal ganglion cells […] Ophthalmologists now appreciate that eye loss deprives us of both vision and a proper sense of time. Furthermore, genetic diseases that result in the loss of the rods and cones and cause visual blindness often spare the pRGCs. Under these circumstances, individuals who have their eyes but are visually blind, yet possess functional pRGCs, need to be advised to seek out sufficient light to entrain their circadian system. The realization that the eye provides us with both our sense of space and our sense of time has redefined the diagnosis, treatment, and appreciation of human blindness.”

“But where is ‘the’ circadian clock of mammals? […] [Robert] Moore and [Irving] Zucker’s work pinpointed the SCN as the likely neural locus of the light-entrainable circadian pacemaker in mammals […] and a decade later this was confirmed by definitive experiments from Michael Menaker’s laboratory undertaken at the University of Virginia. […] These experiments established the SCN as the ‘master circadian pacemaker’ of mammals. […] There are around 20,000 or so neurons in the mouse SCN, but they are not identical. Some receive light information from the pRGCs and pass this information on to other SCN neurons, while others project to the thalamus and other regions of the brain, and collectively these neurons secrete more than one hundred different neurotransmitters, neuropeptides, cytokines, and growth factors. The SCN itself is composed of several regions or clusters of neurons, which have different jobs. Furthermore, there is considerable variability in the oscillations of the individual cells, ranging from 21.25 to 26.25 hours. Although the individual cells in the SCN have their own clockwork mechanisms with varying periods, the cell-autonomous oscillations in neural activity are synchronized at the system level within the SCN, providing a coherent near 24-hour signal to the rest of the mammal. […] SCN neurons exhibit a circadian rhythm of spontaneous action potentials (SAPs), with higher frequency during the daytime than the night, which in turn drives many rhythmic changes by alternating stimulatory and inhibitory inputs to the appropriate target neurons in the brain and neuroendocrine systems. […] The SCN projects directly to thirty-five brain regions, mostly located in the hypothalamus, and particularly those regions of the hypothalamus that regulate hormone release. Indeed, many hormones, such as cortisol, are under tight circadian control. Furthermore, the SCN regulates the activity of the autonomic nervous system, which in turn places multiple aspects of physiology, including the sensitivity of target tissues to hormonal signals, under circadian control. In addition to these direct neuronal connections, the SCN communicates to the rest of the body using diffusible chemical signals.”

“The SCN is the master clock in mammals but it is not the only clock. There are liver clocks, muscle clocks, pancreas clocks, adipose tissue clocks, and clocks of some sort in every organ and tissue examined to date. While lesioning of the SCN disrupts global behavioural rhythms such as locomotor activity, the disruption of clock function within just the liver or lung leads to circadian disorder that is confined to the target organ. In tissue culture, liver, heart, lung, skeletal muscle, and other organ tissues such as mammary glands express circadian rhythms, but these rhythms dampen and disappear after only a few cycles. This occurs because some individual clock cells lose rhythmicity, but more commonly because the individual cellular clocks become uncoupled from each other. The cells continue to tick, but all at different phases so that an overall 24-hour rhythm within the tissue or organ is lost. The discovery that virtually all cells of the body have clocks was one of the big surprises in circadian rhythms research. […] the SCN, entrained by pRGCs, acts as a pacemaker to coordinate, but not drive, the circadian activity of billions of individual peripheral circadian oscillators throughout the tissues and organs of the body. The signalling pathways used by the SCN to phase-entrain peripheral clocks are still uncertain, but we know that the SCN does not send out trillions of separate signals around the body that target specific cellular clocks. Rather there seems to be a limited number of neuronal and humoral signals which entrain peripheral clocks that in turn time their local physiology and gene expression.”

“As in Drosophila […], the mouse clockwork also comprises three transcriptional-translational feedback loops with multiple interacting components. […] [T]he generation of a robust circadian rhythm that can be entrained by the environment is achieved via multiple elements, including the rate of transcription, translation, protein complex assembly, phosphorylation, other post-translation modification events, movement into the nucleus, transcriptional inhibition, and protein degradation. […] [A] complex arrangement is needed because from the moment a gene is switched on, transcription and translation usually takes two hours at most. As a result, substantial delays must be imposed at different stages to produce a near 24-hour oscillation. […] Although the molecular players may differ from Drosophila and mice, and indeed even between different insects, the underlying principles apply across the spectrum of animal life. […] In fungi, plants, and cyanobacteria the clock genes are all different from each other and different again from the animal clock genes, suggesting that clocks evolved independently in the great evolutionary lineages of life on earth. Despite these differences, all these clocks are based upon a fundamental TTFL.”

“Circadian entrainment is surprisingly slow, taking several days to adjust to an advanced or delayed light/dark cycle. In most mammals, including jet-lagged humans, behavioural shifts are limited to approximately one hour (one time zone) per day. […] Changed levels of PER1 and PER2 act to shift the molecular clockwork, advancing the clock at dawn and delaying the clock at dusk. However, per mRNA and PER protein levels fall rapidly even if the animal remains exposed to light. As a result, the effects of light on the molecular clock are limited and entrainment is a gradual process requiring repeated shifting stimuli over multiple days. This phenomenon explains why we get jet lag: the clock cannot move immediately to a new dawn/dusk cycle because there is a ‘brake’ on the effects of light on the clock. […] The mechanism that provides this molecular brake is the production of SIK1 protein. […] Experiments on mice in which SIK1 has been suppressed show very rapid entrainment to simulated jet lag.”
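
A back-of-the-envelope consequence of the roughly one-time-zone-per-day limit quoted above (my own illustration; the one-hour figure is taken as a fixed assumption, and advances and delays are treated as equally easy, which they are not in reality):

```python
# Rough estimate of jet-lag duration, assuming shifts of ~1 hour (one time zone) per day.
MAX_SHIFT_PER_DAY = 1.0  # hours/day, the approximate behavioural limit quoted above

def days_to_re_entrain(time_zones_crossed):
    return time_zones_crossed / MAX_SHIFT_PER_DAY

for zones in (3, 6, 9):
    print(f"{zones} time zones crossed -> roughly {days_to_re_entrain(zones):.0f} days to re-entrain")
```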

“We spend approximately 36 per cent of our entire lives asleep, and while asleep we do not eat, drink, or knowingly pass on our genes. This suggests that this aspect of our 24-hour behaviour provides us with something of huge value. If we are deprived of sleep, the sleep drive becomes so powerful that it can only be satisfied by sleep. […] Almost all life shows a 24-hour pattern of activity and rest, as we live on a planet that revolves once every 24 hours causing profound changes in light, temperature, and food availability. […] Life seems to have made an evolutionary ‘decision’ to be active at a specific part of the day/night cycle, and a species specialized to be active during the day will be far less effective at night. Conversely, nocturnal animals that are beautifully adapted to move around and hunt under dim or no light fail miserably during the day. […] no species can operate with the same effectiveness across the 24-hour light/dark environment. Species are adapted to a particular temporal niche just as they are to a physical niche. Activity at the wrong time often means death. […] Sleep may be the suspension of most physical activity, but a huge amount of essential physiology occurs during this time. Many diverse processes associated with the restoration and rebuilding of metabolic pathways are known to be up-regulated during sleep […] During sleep the body performs a broad range of essential ‘housekeeping’ functions without which performance and health during the active phase deteriorates rapidly. But these housekeeping functions would not be why sleep evolved in the first place. […] Evolution has allocated these key activities to the most appropriate time of day. […] In short, sleep has probably evolved as a species-specific response to a 24-hour world in which light, temperature, and food availability change dramatically. Sleep is a period of physical inactivity when individuals avoid movement within an environment to which they are poorly adapted, while using this time to undertake essential housekeeping functions demanded by their biology.”

“Sleep propensity in humans is closely correlated with the melatonin profile but this may be correlation and not causation. Indeed, individuals who do not produce melatonin (e.g. tetraplegic individuals, people on beta-blockers, or pinealectomized patients) still exhibit circadian sleep/wake rhythms with only very minor detectable changes. Another correlation between melatonin and sleep relates to levels of alertness. When melatonin is suppressed by light at night alertness levels increase, suggesting that melatonin and sleep propensity are directly connected. However, increases in alertness occur before a significant drop in blood melatonin. Furthermore, increased light during the day will also improve alertness when melatonin levels are already low. These findings suggest that melatonin is not a direct mediator of alertness and hence sleepiness. Taking synthetic melatonin or synthetic analogues of melatonin produces a mild sleepiness in about 70 per cent of people, especially when no natural melatonin is being released. The mechanism whereby melatonin produces mild sedation remains unclear.”

Links:

Teleost multiple tissue (tmt) opsin.
Melanopsin.
Suprachiasmatic nucleus.
Neuromedin S.
Food-entrainable circadian oscillators in the brain.
John Harrison. Seymour Benzer. Ronald Konopka. Jeffrey C. Hall. Michael Rosbash. Michael W. Young.
Circadian Oscillators: Around the Transcription-Translation Feedback Loop and on to Output.
Period (gene). Timeless (gene). CLOCK. Cycle (gene). Doubletime (gene). Cryptochrome. Vrille Gene.
Basic helix-loop-helix.
The clockwork orange Drosophila protein functions as both an activator and a repressor of clock gene expression.
RAR-related orphan receptor. RAR-related orphan receptor alpha.
BHLHE41.
The two-process model of sleep regulation: a reappraisal.

September 30, 2018 Posted by | Books, Genetics, Medicine, Molecular biology, Neurology, Ophthalmology | Leave a comment

Words

The words included in the post below were mostly words which I encountered while reading the books Personal Relationships, Circadian Rhythms, Quick Service, Principles of memory, Feet of Clay, The Reverse of the Medal, and The Letter of Marque.

Camouflet. Dissimulation. Nomological. Circumlocutory. Eclosion. Puissant. Esurient. Hisperic. Ambigram. Scotophilic. Millenarianism. Sonder. Pomology. Oogonium. Vole. Tippler. Autonoetic. Engraphy/engram. Armigerous. Gazunder/guzunder.

Frizzle. Matorral. Sclerophyll. Xerophyte. Teratoma. Shallop. Quartan. Ablative. Prolative. Dispart. Ptarmigan. Starbolins. Idolatrous. Spoom. Cablet. Hostler. Chelonian. Omnium. Toper. Rectitude.

Marthambles. Combe. Holt. Stile. Plover. Andiron. Delf. Boreen. Thief-taker. Patten. Subvention. Hummum. Bustard. Lugger. Vainglory. Penetralia. Limicoline. Astragal. Fillebeg/filibeg. Voluptuous.

Civet. Moil. Impostume. Frowsty. Bob. Snuggery. Legation. Brindle. Epergne. Chough. Shoneen. Pilaff. Phaeton. Gentian. Poldavy. Grebe. Orotund. Panoply. Chiliad. Quiddity.

September 27, 2018 Posted by | Books, Language | Leave a comment

Principles of memory (I)

This book was interesting, but more for what it tells you about the sort of memory research that has taken place over the years than for the authors’ own model of how this stuff works. It’s the sort of book that makes you think.

I found the book challenging to blog, for a variety of reasons, but I’ve tried adding some observations of interest from the first four chapters of the coverage below.

“[I]n over 100 years of scientific research on memory, and nearly 50 years after the so-called cognitive revolution, we have nothing that really constitutes a widely accepted and frequently cited law of memory, and perhaps only one generally accepted principle.5 However, there are a plethora of effects, many of which have extensive literatures and hundreds of published empirical demonstrations. One reason for the lack of general laws and principles of memory might be that none exists. Tulving (1985a, p. 385), for example, has argued that “no profound generalizations can be made about memory as a whole,” because memory comprises many different systems and each system operates according to different principles. One can make “general statements about particular kinds of memory,” but one cannot make statements that would apply to all types of memory. […] Roediger (2008) also argues that no general principles of memory exist, but his reasoning and arguments are quite different. He reintroduces Jenkins’ (1979) tetrahedral model of memory, which views all memory experiments as comprising four factors: encoding conditions, retrieval conditions, subject variables, and events (materials and tasks). Using the tetrahedral model as a starting point, Roediger convincingly demonstrates that all of these variables can affect memory performance in different ways and that such complexity does not easily lend itself to a description using general principles. Because of the complexity of the interactions among these variables, Roediger suggests that “the most fundamental principle of learning and memory, perhaps its only sort of general law, is that in making any generalization about memory one must add that ‘it depends’” (p. 247). […] Where we differ is that we think it possible to produce general principles of memory that take into account these factors. […] The purpose of this monograph is to propose seven principles of human memory that apply to all memory regardless of the type of information, type of processing, hypothetical system supporting memory, or time scale. Although these principles focus on the invariants and empirical regularities of memory, the reader should be forewarned that they are qualitative rather than quantitative, more like regularities in biology than principles of geometry. […] Few, if any, of our principles are novel, and the list is by no means complete. We certainly do not think that there are only seven principles of memory nor, when more principles are proposed, do we think that all seven of our principles will be among the most important.7”

“[T]he two most popular contemporary ways of looking at memory are the multiple systems view and the process (or proceduralist) view.1 Although these views are not always necessarily diametrically opposed […], their respective research programs are focused on different questions and search for different answers. The fundamental difference between structural and processing accounts of memory is whether different rules apply as a function of the way information is acquired, the type of material learned, and the time scale, or whether these can be explained using a single set of principles. […] Proponents of the systems view of memory suggest that memory is divided into multiple systems. Thus, their endeavor is focused on discovering and defining different systems and describing how they work. A “system” within this sort of framework is a structure that is anatomically and evolutionarily distinct from other memory systems and differs in its “methods of acquisition, representation and expression of knowledge” […] Using a variety of techniques, including neuropsychological and statistical methods, advocates of the multiple systems approach […] have identified five major memory systems: procedural memory, the perceptual representation system (PRS), semantic memory, primary or working memory, and episodic memory. […] In general, three criticisms are raised most often: The systems approach (a) has no criteria that produce exactly five different memory systems, (b) relies to a large extent on dissociations, and (c) has great difficulty accounting for the pattern of results observed at both ends of the life span. […] The multiple systems view […] lacks a principled and consistent set of criteria for delineating memory systems. Given the current state of affairs, it is not unthinkable to postulate 5 or 10 or 20 or even more different memory systems […]. Moreover, the specific memory systems that have been identified can be fractionated further, resulting in a situation in which the system is distributed in multiple brain locations, depending on the demands of the task at hand. […] The major strength of the systems view is usually taken to be its ability to account for data from amnesic patients […]. Those individuals seem to have specific deficits in episodic memory (recall and recognition) with very few, if any, deficits in semantic memory, procedural memory, or the PRS. […] [But on the other hand] age-related differences in memory do not follow the pattern predicted by the systems view.”

“From our point of view, asking where memory is “located” in the brain is like asking where running is located in the body. There are certainly parts of the body that are more important (the legs) or less important (the little fingers) in performing the task of running but, in the end, it is an activity that requires complex coordination among a great many body parts and muscle groups. To extend the analogy, looking for differences between memory systems is like looking for differences between running and walking. There certainly are many differences, but the main difference is that running requires more coordination among the different body parts and can be disrupted by small things (such as a corn on the toe) that may not interfere with walking at all. Are we to conclude, then, that running is located in the corn on your toe? […] although there is little doubt that more primitive functions such as low-level sensations can be organized in localized brain regions, it is likely that more complex cognitive functions, such as memory, are more accurately described by a dynamic coordination of distributed interconnected areas […]. This sort of approach implies that memory, per se, does not exist but, instead, “information … resides at the level of the large-scale network” (Bressler & Kelso, 2001, p. 33).”

“The processing view […] emphasizes encoding and retrieval processes instead of the system or location in which the memory might be stored. […] Processes, not structures, are what is fundamental. […] The major criticisms of the processing approaches parallel those that have been leveled at the systems view: (a) number of processes or components (instead of number of systems), (b) testability […], and (c) issues with a special population (amnesia rather than life span development). […] The major weakness of the processing view is the major strength of the systems view: patients diagnosed with amnesic syndrome. […] it is difficult to account for data showing a complete abolishment of episodic memory with no apparent effect on semantic memory, procedural memory, or the PRS without appealing to a separate memory store. […] We suggest that in the absence of a compelling reason to prefer the systems view over the processing view (or vice versa), it would be fruitful to consider memory from a functional perspective. We do not know how many memory systems there are or how to define what a memory system is. We do not know how many processes (or components of processing) there are or how to distinguish them. We do acknowledge that short-term memory and long-term memory seem to differ in some ways, as do episodic memory and semantic memory, but are they really fundamentally different? Both the systems approach and, to a lesser extent, the proceduralist approach emphasize differences. Our approach emphasizes similarities. We suggest that a search for general principles of memory, based on fundamental empirical regularities, can act as a spur to theory development and a reexamination of systems versus process theories of memory.”

“Our first principle states that all memory is cue driven; without a cue, there can be no memory […]. By cue we mean a specific prompt or query, such as “Did you see this word on the previous list?” […] cues can also be nonverbal, such as odors […], emotions […], nonverbal sounds […], and images […], to name only a few. Although in many situations the person is fully aware that the cue is part of a memory test, this need not be the case. […] Computer simulation models of memory acknowledge the importance of cues by building them into the system; indeed, computer simulation models will not work unless there is a cue. In general, some input is provided to these models, and then a response is provided. The so-called global models of memory, SAM, TODAM, and MINERVA2, are all cue driven. […] it is hard to conceive of a computer model of memory that is not cue dependent, simply because the computer requires something to start the retrieval process. […] There is near unanimity in the view that memory is cue driven. The one area in which this view is contested concerns a particular form of memory that is characterized by highly restrictive capacity limitations.”

“The most commonly cited principle of memory, according to [our] literature search […], is the encoding specificity principle […] Our version of this is called the encoding-retrieval principle [and] states that memory depends on the relation between the conditions at encoding and the conditions at retrieval. […] An appreciation for the importance of the encoding-retrieval interaction came about as the result of studies that examined the potency of various cues to elicit items from memory. A strong cue is a word that elicits a particular target word most of the time. For example, when most people hear the word bloom, the first word that pops into their head is often flower. A weak cue is a word that only rarely elicits a particular target. […] A reasonable prediction seems to be that strong cues should be better than weak cues for eliciting the correct item. However, this inference is not entirely correct because it fails to take into account the relationship between the encoding and retrieval conditions. […] the effectiveness of even a long-standing strong cue depends crucially on the processes that occurred at study and the cues available at test. This basic idea became the foundation of the transfer-appropriate processing framework. […] Taken literally, all that transfer-appropriate processing requires is that the processing done at encoding be appropriate given the processing that will be required at test; it permits processing that is identical and permits processing that is similar. It also, however, permits processing that is completely different as long as it is appropriate. […] many proponents of this view act as if the name were “transfer similar processing” and express the idea as requiring a “match” or “overlap” between study and test. […] However, just because increasing the match sometimes leads to enhanced retention does not mean that it is the match that is the critical variable. […] one can easily set up situations in which the degree of match is improved and memory retention is worse, or the degree of match is decreased and memory is better, or the degree of match is changed (either increased or decreased) and it has no effect on retention. Match, then, is simply not the critical variable in determining memory performance. […] The retrieval conditions include other possible responses, and these other items can affect performance. The most accurate description, then, is that it is the relation between encoding and retrieval that matters, not the degree of match or similarity. […] As Tulving (1983, p. 239) noted, the dynamic relation between encoding and retrieval conditions prohibits any statements that take the following forms:
1. “Items (events) of class X are easier to remember than items (events) of class Y.”
2. “Encoding operations of class X are more effective than encoding operations of class Y.”
3. “Retrieval cues of class X are more effective than retrieval cues of class Y.”
Absolute statements that do not specify both the encoding and the retrieval conditions are meaningless because an experimenter can easily change some aspect of the encoding or retrieval conditions and greatly change the memory performance.”

“In most areas of memory research, forgetting is seen as due to retrieval failure, often ascribed to some form of interference. There are, however, two areas of memory research that propose that forgetting is due to an intrinsic property of the memory trace, namely, decay. […] The two most common accounts view decay as either a mathematical convenience in a model, in which a parameter t is associated with time and leads to worse performance, or as some loss of information, in which it is unclear exactly what aspect of memory is decaying and what parts remain. In principle, a decay theory of memory could be proposed that is specific and testable, such as a process analogous to radioactive decay, in which it is understood precisely what is lost and what remains. Thus far, no such decay theory exists. Decay is posited as the forgetting mechanism in only two areas of memory research, sensory memory and short-term/working memory. […] One reason that time-based forgetting, such as decay, is often invoked is the common belief that short-term/working memory is immune to interference, especially proactive interference […]. This is simply not so. […] interference effects are readily observed in the short term. […] Decay predicts the same decrease for the same duration of distractor activity. Interference predicts differential effects depending on the presence or absence of interfering items. Numerous studies support the interference predictions and disconfirm predictions made on the basis of a decay view […] You might be tempted to say, yes, well, there are occasions in which the passage of time is either uncorrelated with or even negatively correlated with memory performance, but on average, you do worse with longer retention intervals. However, this confirms that the putative principle — the memorability of an event declines as the length of the storage interval increases — is not correct. […] One can make statements about the effects of absolute time, but only to the extent that one specifies both the conditions at encoding and those at retrieval. […] It is trivially easy to construct an experiment in which memory for an item does not change or even gets better the longer the retention interval. Here, we provide only eight examples, although there are numerous other examples; a more complete review and discussion are offered by Capaldi and Neath (1995) and Bjork (2001).”
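
The contrast the authors draw can be made concrete with a deliberately crude toy comparison of my own construction (the functional forms and parameter values are arbitrary assumptions, not anything proposed in the book): a pure decay account in which only elapsed time matters versus a pure interference account in which only the number of intervening items matters.

```python
import math

# Toy contrast between a pure decay account and a pure interference account (illustration only).
def recall_decay(seconds, rate=0.05):
    """Decay: predicted performance depends only on elapsed time."""
    return math.exp(-rate * seconds)

def recall_interference(n_items, strength=0.3):
    """Interference: predicted performance depends only on the number of intervening items."""
    return 1.0 / (1.0 + strength * n_items)

for n_items in (0, 2, 8):   # same 10-second interval, different amounts of distractor activity
    print(f"10 s delay, {n_items} distractors: "
          f"decay predicts {recall_decay(10):.2f}, interference predicts {recall_interference(n_items):.2f}")
```

Holding the retention interval fixed at ten seconds, the decay account predicts identical performance in every row, while the interference account predicts performance that falls as distractors are added — exactly the pattern of competing predictions described in the passage above.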

September 22, 2018 Posted by | Books, Psychology | Leave a comment

Circadian Rhythms (I)

“Circadian rhythms are found in nearly every living thing on earth. They help organisms time their daily and seasonal activities so that they are synchronized to the external world and the predictable changes in the environment. These biological clocks provide a cross-cutting theme in biology and they are incredibly important. They influence everything, from the way growing sunflowers track the sun from east to west, to the migration timing of monarch butterflies, to the morning peaks in cardiac arrest in humans. […] Years of work underlie most scientific discoveries. Explaining these discoveries in a way that can be understood is not always easy. We have tried to keep the general reader in mind but in places perseverance on the part of the reader may be required. In the end we were guided by one of our reviewers, who said: ‘If you want to understand calculus you have to show the equations.’”

The above quote is from the book’s foreword. I really liked this book and I was close to giving it five stars on goodreads. Below I have added some observations and links related to the first few chapters of the book’s coverage (as noted in my review on goodreads, the second half of the book is somewhat technical, and I’ve not yet decided if I’ll be blogging that part of the book in much detail, if at all).

“There have been over a trillion dawns and dusks since life began some 3.8 billion years ago. […] This predictable daily solar cycle results in regular and profound changes in environmental light, temperature, and food availability as day follows night. Almost all life on earth, including humans, employs an internal biological timer to anticipate these daily changes. The possession of some form of clock permits organisms to optimize physiology and behaviour in advance of the varied demands of the day/night cycle. Organisms effectively ‘know’ the time of day. Such internally generated daily rhythms are called ‘circadian rhythms’ […] Circadian rhythms are embedded within the genomes of just about every plant, animal, fungus, algae, and even cyanobacteria […] Organisms that use circadian rhythms to anticipate the rotation of the earth are thought to have a major advantage over both their competitors and predators. For example, it takes about 20–30 minutes for the eyes of fish living among coral reefs to switch vision from the night to daytime state. A fish whose eyes are prepared in advance for the coming dawn can exploit the new environment immediately. The alternative would be to wait for the visual system to adapt and miss out on valuable activity time, or emerge into a world where it would be more difficult to avoid predators or catch prey until the eyes have adapted. Efficient use of time to maximize survival almost certainly provides a large selective advantage, and consequently all organisms seem to be led by such anticipation. A circadian clock also stops everything happening within an organism at the same time, ensuring that biological processes occur in the appropriate sequence or ‘temporal framework’. For cells to function properly they need the right materials in the right place at the right time. Thousands of genes have to be switched on and off in order and in harmony. […] All of these processes, and many others, take energy and all have to be timed to best effect by the millisecond, second, minute, day, and time of year. Without this internal temporal compartmentalization and its synchronization to the external environment our biology would be in chaos. […] However, to be biologically useful, these rhythms must be synchronized or entrained to the external environment, predominantly by the patterns of light produced by the earth’s rotation, but also by other rhythmic changes within the environment such as temperature, food availability, rainfall, and even predation. These entraining signals, or time-givers, are known as zeitgebers. The key point is that circadian rhythms are not driven by an external cycle but are generated internally, and then entrained so that they are synchronized to the external cycle.”

“It is worth emphasizing that the concept of an internal clock, as developed by Richter and Bünning, has been enormously powerful in furthering our understanding of biological processes in general, providing a link between our physiological understanding of homeostatic mechanisms, which try to maintain a constant internal environment despite unpredictable fluctuations in the external environment […], versus the circadian system which enables organisms to anticipate periodic changes in the external environment. The circadian system provides a predictive 24-hour baseline in physiological parameters, which is then either defended or temporarily overridden by homeostatic mechanisms that accommodate an acute environmental challenge. […] Zeitgebers and the entrainment pathway synchronize the internal day to the astronomical day, usually via the light/dark cycle, and multiple output rhythms in physiology and behaviour allow appropriately timed activity. The multitude of clocks within a multicellular organism can all potentially tick with a different phase angle […], but usually they are synchronized to each other and by a central pacemaker which is in turn entrained to the external world via appropriate zeitgebers. […] Most biological reactions vary greatly with temperature and show a Q10 temperature coefficient of about 2 […]. This means that the biological process or reaction rate doubles as a consequence of increasing the temperature by 10°C up to a maximum temperature at which the biological reaction stops. […] a 10°C temperature increase doubles muscle performance. By contrast, circadian rhythms exhibit a Q10 close to 1 […] Clocks without temperature compensation are useless. […] Although we know that circadian clocks show temperature compensation, and that this phenomenon is a conserved feature across all circadian rhythms, we have little idea how this is achieved.”
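
The Q10 coefficient mentioned above has a simple operational definition; the short sketch below is my own illustration (the example rates are invented), showing why a Q10 of about 2 means a doubling per 10°C while a Q10 near 1 means the rate barely changes.

```python
# Q10 temperature coefficient: Q10 = (R2 / R1) ** (10 / (T2 - T1)).
def q10(rate1, temp1, rate2, temp2):
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

# A typical biochemical reaction: the rate doubles between 20 and 30 degrees C.
print(q10(1.0, 20.0, 2.0, 30.0))    # 2.0

# A temperature-compensated circadian rhythm: the rate barely changes over the same range.
print(q10(1.0, 20.0, 1.05, 30.0))   # ~1.05, i.e. close to 1
```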

“The systematic study of circadian rhythms only really started in the 1950s, and the pioneering studies of Colin Pittendrigh brought coherence to this emerging new discipline. […] From [a] mass of emerging data, Pittendrigh had key insights and defined the essential properties of circadian rhythms across all life. Namely that: all circadian rhythms are endogenous and show near 24-hour rhythms in a biological process (biochemistry, physiology, or behaviour); they persist under constant conditions for several cycles; they are entrained to the astronomical day via synchronizing zeitgebers; and they show temperature compensation such that the period of the oscillation does not alter appreciably with changes in environmental temperature. Much of the research since the 1950s has been the translation of these formalisms into biological structures and processes, addressing such questions as: What is the clock and where is it located within the intracellular processes of the cell? How can a set of biochemical reactions produce a regular self-sustaining rhythm that persists under constant conditions and has a period of about 24 hours? How is this internal oscillation synchronized by zeitgebers such as light to the astronomical day? Why is the clock not altered by temperature, speeding up when the environment gets hotter and slowing down in the cold? How is the information of the near 24-hour rhythm communicated to the rest of the organism?”

“There have been hundreds of studies showing that a broad range of activities, both physical and cognitive, vary across the 24-hour day: tooth pain is lowest in the morning; proofreading is best performed in the evening; labour pains usually begin at night and most natural births occur in the early morning hours. The accuracy of short and long badminton serves is higher in the afternoon than in the morning and evening. Accuracy of first serves in tennis is better in the morning and afternoon than in the evening, although speed is higher in the evening than in the morning. Swimming velocity over 50 metres is higher in the evening than in the morning and afternoon. […] The majority of studies report that performance increases from morning to afternoon or evening. […] Typical ‘optimal’ times of day for physical or cognitive activity are gathered routinely from population studies […]. However, there is considerable individual variation. Peak performance will depend upon age, chronotype, time zone, and for behavioural tasks how many hours the participant has been awake when conducting the task, and even the nature of the task itself. As a general rule, the circadian modulation of cognitive functioning results in an improved performance over the day for younger adults, while in older subjects it deteriorates. […] On average the circadian rhythms of an individual in their late teens will be delayed by around two hours compared with an individual in their fifties. As a result the average teenager experiences considerable social jet lag, and asking a teenager to get up at 07.00 in the morning is the equivalent of asking a 50-year-old to get up at 05.00 in the morning.”

“Day versus night variations in blood pressure and heart rate are among the best-known circadian rhythms of physiology. In humans, there is a 24-hour variation in blood pressure with a sharp rise before awakening […]. Many cardiovascular events, such as sudden cardiac death, myocardial infarction, and stroke, display diurnal variations with an increased incidence between 06.00 and 12.00 in the morning. Both atrial and ventricular arrhythmias appear to exhibit circadian patterning as well, with a higher frequency during the day than at night. […] Myocardial infarction (MI) is two to three times more frequent in the morning than at night. In the early morning, the increased systolic blood pressure and heart rate results in an increased energy and oxygen demand by the heart, while the vascular tone of the coronary artery rises in the morning, resulting in a decreased coronary blood flow and oxygen supply. This mismatch between supply and demand underpins the high frequency of onset of MI. Plaque blockages are more likely to occur in the morning as platelet surface activation markers have a circadian pattern producing a peak of thrombus formation and platelet aggregation. The resulting hypercoagulability partially underlies the morning onset of MI.”

“A critical area where time of day matters to the individual is the optimum time to take medication, a branch of medicine that has been termed ‘chronotherapy’. Statins are a family of cholesterol-lowering drugs which inhibit HMG-CoA reductase (HMGCR) […] HMGCR is under circadian control and is highest at night. Hence those statins with a short half-life, such as simvastatin and lovastatin, are most effective when taken before bedtime. In another clinical domain entirely, recent studies have shown that anti-flu vaccinations given in the morning provoke a stronger immune response than those given in the afternoon. The idea of using chronotherapy to improve the efficacy of anti-cancer drugs has been around for the best part of 30 years. […] In experimental models more than thirty anti-cancer drugs have been found to vary in toxicity and efficacy by as much as 50 per cent as a function of time of administration. Although Lévi and others have shown the advantages to treating individual patients by different timing regimes, few hospitals have taken it up. One reason is that the best time to apply many of these treatments is late in the day or during the night, precisely when most hospitals lack the infrastructure and personnel to deliver such treatments.”
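
A rough worked example (mine, not the book’s) of why a short half-life makes dose timing matter: assuming a half-life of around two hours — a figure used here purely for illustration — very little of a morning dose remains by the time nocturnal HMGCR activity peaks, whereas a bedtime dose is still substantially present.

```python
# Fraction of a drug remaining t hours after a dose, for an assumed half-life (illustration only).
def fraction_remaining(hours_since_dose, half_life_hours=2.0):
    return 0.5 ** (hours_since_dose / half_life_hours)

# Compare a morning dose with a bedtime dose, assessed in the middle of the night,
# roughly when HMGCR activity is said above to be highest.
print(f"taken 18 h before: {fraction_remaining(18):.3f} of the dose remains")  # ~0.002
print(f"taken  4 h before: {fraction_remaining(4):.3f} of the dose remains")   # 0.250
```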

“Flying across multiple time zones and shift work has significant economic benefits, but the costs in terms of ill health are only now becoming clear. Sleep and circadian rhythm disruption (SCRD) is almost always associated with poor health. […] The impact of jet lag has long been known by elite athletes […] even when superbly fit individuals fly across time zones there is a very prolonged disturbance of circadian-driven rhythmic physiology. […] Horses also suffer from jet lag. […] Even bees can get jet lag. […] The misalignments that occur as a result of the occasional transmeridian flight are transient. Shift working represents a chronic misalignment. […] Nurses are one of the best-studied groups of night shift workers. Years of shift work in these individuals has been associated with a broad range of health problems including type II diabetes, gastrointestinal disorders, and even breast and colorectal cancers. Cancer risk increases with the number of years of shift work, the frequency of rotating work schedules, and the number of hours per week working at night [For people who are interested to know more about this, I previously covered a text devoted exclusively to these topics here and here.]. The correlations are so strong that shift work is now officially classified as ‘probably carcinogenic [Group 2A]’ by the World Health Organization. […] the partners and families of night shift workers need to be aware that mood swings, loss of empathy, and irritability are common features of working at night.”

“There are some seventy sleep disorders recognized by the medical community, of which four have been labelled as ‘circadian rhythm sleep disorders’ […] (1) Advanced sleep phase disorder (ASPD) […] is characterized by difficulty staying awake in the evening and difficulty staying asleep in the morning. Typically individuals go to bed and rise about three or more hours earlier than the societal norm. […] (2) Delayed sleep phase disorder (DSPD) is a far more frequent condition and is characterized by a 3-hour delay or more in sleep onset and offset and is a sleep pattern often found in some adolescents and young adults. […] ASPD and DSPD can be considered as pathological extremes of morning or evening preferences […] (3) Freerunning or non-24-hour sleep/wake rhythms occur in blind individuals who have either had their eyes completely removed or who have no neural connection from the retina to the brain. These people are not only visually blind but are also circadian blind. Because they have no means of detecting the synchronizing light signals they cannot reset their circadian rhythms, which freerun with a period of about 24 hours and 10 minutes. So, after six days, internal time is on average 1 hour behind environmental time. (4) Irregular sleep timing has been observed in individuals who lack a circadian clock as a result of a tumour in their anterior hypothalamus […]. Irregular sleep timing is [also] commonly found in older people suffering from dementia. It is an extremely important condition because one of the major factors in caring for those with dementia is the exhaustion of the carers which is often a consequence of the poor sleep patterns of those for whom they are caring. Various protocols have been attempted in nursing homes using increased light in the day areas and darkness in the bedrooms to try and consolidate sleep. Such approaches have been very successful in some individuals […] Although insomnia is the commonly used term to describe sleep disruption, technically insomnia is not a ‘circadian rhythm sleep disorder’ but rather a general term used to describe irregular or disrupted sleep. […] Insomnia is described as a ‘psychophysiological’ condition, in which mental and behavioural factors play predisposing, precipitating, and perpetuating roles. The factors include anxiety about sleep, maladaptive sleep habits, and the possibility of an underlying vulnerability in the sleep-regulating mechanism. […] Even normal ‘healthy ageing’ is associated with both circadian rhythm sleep disorders and insomnia. Both the generation and regulation of circadian rhythms have been shown to become less robust with age, with blunted amplitudes and abnormal phasing of key physiological processes such as core body temperature, metabolic processes, and hormone release. Part of the explanation may relate to a reduced light signal to the clock […]. In the elderly, the photoreceptors of the eye are often exposed to less light because of the development of cataracts and other age-related eye disease. Both these factors have been correlated with increased SCRD.”
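
The arithmetic behind the freerunning example above is straightforward; a short sketch of my own shows the cumulative drift for a clock with a period of 24 hours and 10 minutes.

```python
# Cumulative drift of a freerunning clock with a period of 24 h 10 min (illustrative arithmetic).
DRIFT_MINUTES_PER_DAY = 10
for days in (1, 6, 30, 144):
    hours_behind = days * DRIFT_MINUTES_PER_DAY / 60
    print(f"after {days:3d} days: {hours_behind:.1f} hours behind external time")
# After 6 days the clock is 1 hour behind, as in the text; after ~144 days it has drifted
# a full 24 hours and is transiently back in phase with the external day.
```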

“Circadian rhythm research has mushroomed in the past twenty years, and has provided a much greater understanding of the impact of both imposed and illness-related SCRD. We now appreciate that our increasingly 24/7 society and social disregard for biological time is having a major impact upon our health. Understanding has also been gained about the relationship between SCRD and a spectrum of different illnesses. SCRD in illness is not simply the inconvenience of being unable to sleep at an appropriate time but is an agent that exacerbates or causes serious health problems.”

Links:

Circadian rhythm.
Acrophase.
Phase (waves). Phase angle.
Jean-Jacques d’Ortous de Mairan.
Heliotropism.
Kymograph.
John Harrison.
Munich Chronotype Questionnaire.
Chronotype.
Seasonal affective disorder. Light therapy.
Parkinson’s disease. Multiple sclerosis.
Melatonin.

August 25, 2018 Posted by | Biology, Books, Cancer/oncology, Cardiology, Medicine | Leave a comment

Combinatorics (II)

I really liked this book. Below I have added some links and quotes related to the second half of the book’s coverage.

“An n × n magic square, or a magic square of order n, is a square array of numbers — usually (but not necessarily) the numbers from 1 to n² — arranged in such a way that the sum of the numbers in each of the n rows, each of the n columns, or each of the two main diagonals is the same. A semi-magic square is a square array in which the sum of the numbers in each row or column, but not necessarily the diagonals, is the same. We note that if the entries are 1 to n², then the sum of the numbers in the whole array is
1 + 2 + 3 + … + n² = n²(n² + 1) / 2
on summing the arithmetic progression. Because the n rows and columns have the same ‘magic sum’, the numbers in each single row or column add up to (1/n)th of this, which is n(n² + 1) / 2 […] An n × n latin square, or a latin square of order n, is a square array with n symbols arranged so that each symbol appears just once in each row and column. […] Given a latin square, we can obtain others by rearranging the rows or the columns, or by permuting the symbols. For an n × n latin square with symbols 1, 2, … , n, we can thereby arrange that the numbers in the first row and the first column appear in order as 1, 2, … , n. Such a latin square is called normalized […] A familiar form of latin square is the sudoku puzzle […] How many n × n latin squares are there for a given order of n? The answer is known only for n ≤ 11. […] The number of latin squares of order 11 has an impressive forty-eight digits.”
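
These defining properties are easy to check mechanically. The short sketch below is my own illustration (not from the book); it verifies the classical 3 × 3 Lo Shu magic square, whose magic sum is 3(3² + 1)/2 = 15, and confirms that a small cyclic array is a latin square.

```python
# Check the defining properties of a magic square and a latin square (my illustration).
def is_magic(square):
    """True if all rows, columns, and both main diagonals sum to n(n^2 + 1)/2 (entries 1..n^2)."""
    n = len(square)
    target = n * (n * n + 1) // 2
    sums = [sum(row) for row in square]
    sums += [sum(square[i][j] for i in range(n)) for j in range(n)]
    sums.append(sum(square[i][i] for i in range(n)))
    sums.append(sum(square[i][n - 1 - i] for i in range(n)))
    return all(s == target for s in sums)

def is_latin(square):
    """True if every symbol occurs exactly once in each row and each column."""
    n = len(square)
    symbols = set(square[0])
    if len(symbols) != n:
        return False
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[i][j] for i in range(n)} == symbols for j in range(n))
    return rows_ok and cols_ok

lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]
print(is_magic(lo_shu))            # True: every line of the Lo Shu square sums to 15

print(is_latin([[1, 2, 3],
                [2, 3, 1],
                [3, 1, 2]]))       # True: a 3 x 3 cyclic latin square
```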

“A particular type of latin square is the cyclic square, where the symbols appear in the same cyclic order, moving one place to the left in each successive row, so that the entry at the beginning of each line appears at the end of the next one […] An extension of this idea is where the symbols move more places to the left in each successive row […] We can construct a latin square row by row from its first row, always taking care that no symbol appears twice in any column. […] An important concept […] is that of a set of orthogonal latin squares […] two n × n latin squares are orthogonal if, when superimposed, each of the n² possible pairings of a symbol from each square appears exactly once. […] pairs of orthogonal latin squares are […] used in agricultural experiments. […] We can extend the idea of orthogonality beyond pairs […] A set of mutually orthogonal latin squares (sometimes abbreviated to MOLS) is a set of latin squares, any two of which are orthogonal […] Note that there can be at most n-1 MOLS of order n. […] A full set of MOLS is called a complete set […] We can ask the following question: For which values of n does there exist a complete set of n × n mutually orthogonal latin squares? As several authors have shown, a complete set exists whenever n is a prime number (other than 2) or a power of a prime […] In 1922, H. L. MacNeish generalized this result by observing that if n has prime factorization p = p1^a × p2^b × … × pk^z, then the number of MOLS is at least min(p1^a, p2^b, … , pk^z) – 1.”
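
For a prime order p there is a standard explicit construction (sketched below in my own code, not quoted from the book): the squares with entry (k·i + j) mod p in row i and column j, for k = 1, …, p − 1, are each latin and are mutually orthogonal, giving a complete set of p − 1 MOLS.

```python
from itertools import product

def cyclic_square(p, k):
    """Latin square of prime order p with entry (k*i + j) mod p in row i, column j."""
    return [[(k * i + j) % p for j in range(p)] for i in range(p)]

def orthogonal(a, b):
    """True if superimposing the two squares gives every ordered pair of symbols exactly once."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i, j in product(range(n), repeat=2)}
    return len(pairs) == n * n

p = 5
squares = [cyclic_square(p, k) for k in range(1, p)]      # p - 1 = 4 squares
print(all(orthogonal(squares[i], squares[j])
          for i in range(len(squares))
          for j in range(i + 1, len(squares))))           # True: a complete set of MOLS
```

With p = 5 this yields four mutually orthogonal 5 × 5 squares, matching the n − 1 upper bound mentioned in the quote.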

“Consider the following [problem] involving comparisons between a number of varieties of a commodity: A consumer organization wishes to compare seven brands of detergent and arranges a number of tests. But since it may be uneconomic or inconvenient for each tester to compare all seven brands it is decided that each tester should compare just three brands. How should the trials be organized if each brand is to be tested the same number of times and each pair of brands is to be compared directly? […] A block design consists of a set of v varieties arranged into b blocks. […] [if we] further assume that each block contains the same number k of varieties, and each variety appears in the same number r of blocks […] [the design is] called [an] equireplicate design […] for every block design we have v × r = b × k. […] It would clearly be preferable if all pairs of varieties in a design were compared the same number of times […]. Such a design is called balanced, or a balanced incomplete-block design (often abbreviated to BIBD). The number of times that any two varieties are compared is usually denoted by λ […] In a balanced block design the parameters v, b, k, r, and λ are not independent […] [Rather it is the case that:] r × (k – 1) = λ × (v – 1). […] The conditions v × r = b × k and r × (k – 1) = λ × (v – 1) are both necessary for a design to be balanced, but they’re not sufficient since there are designs satisfying both conditions which are not balanced. Another necessary condition for a design to be balanced is v ≤ b, a result known as Fisher’s inequality […] A balanced design for which v = b, and therefore k = r, is called a symmetric design.”
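
The detergent problem above has a neat concrete answer: with v = 7 brands in blocks of k = 3, seven blocks suffice, each brand is tested r = 3 times, and every pair of brands meets exactly λ = 1 time. The sketch below — my own illustration, using the standard seven-block design rather than anything printed in the book — checks the two counting identities and the balance condition directly.

```python
from collections import Counter
from itertools import combinations

# The classical design for 7 varieties in 7 blocks of size 3 (v = b = 7, k = r = 3, lambda = 1).
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7},
          {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

v = len(set().union(*blocks))                 # number of varieties (brands)
b = len(blocks)                               # number of blocks (testers)
k = len(blocks[0])                            # varieties per block
replication = Counter(x for blk in blocks for x in blk)
r = replication[1]                            # times each variety appears (same for all here)
pair_counts = Counter(frozenset(pair) for blk in blocks for pair in combinations(sorted(blk), 2))

print(v, b, k, r)                             # 7 7 3 3
print(v * r == b * k)                         # True: v x r = b x k
print(r * (k - 1) == 1 * (v - 1))             # True: r(k - 1) = lambda(v - 1) with lambda = 1
print(set(pair_counts.values()) == {1})       # True: every pair of brands compared exactly once
```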

“A block design with v varieties is resolvable if its blocks can be rearranged into subdesigns, called replicates, each of which contains every variety just once. […] we define a finite projective plane to be an arrangement of a finite number of points and a finite number of lines with the properties that: [i] Any two points lie on exactly one line. [ii] Any two lines pass through exactly one point.
Note that this differs from our usual Euclidean geometry, where any two lines pass through exactly one point unless they’re parallel. Omitting the words ‘unless they’re parallel’ produces a completely different type of geometry from the one we’re used to, since there’s now a ‘duality’ or symmetry between points and lines, according to which any statement about points lying on lines gives rise to a statement about lines passing through points, and vice versa. […] We say that the finite projective plane has order n if each line contains n + 1 points. […] removing a single line from a projective plane of order n, and the n + 1 points on this line, gives a square pattern with n² points and n² + n lines where each line contains n points and each point lies on n + 1 lines. Such a diagram is called an affine plane of order n. […] This process is reversible. If we start with an affine plane of order n and add another line joined up appropriately, we get a projective plane of order n. […] Every finite projective plane gives rise to a symmetric balanced design. […] In general, a finite projective plane of order n, with n² + n + 1 points and lines and with n + 1 points on each line and n + 1 lines through each point, gives rise to a balanced symmetric design with parameters v = b = n² + n + 1, k = r = n + 1, and λ = 1. […] Every finite affine plane gives rise to a resolvable design. […] In general, an affine plane of order n, obtained by removing a line and n + 1 points from a projective plane of order n, gives rise to a resolvable design with parameters v = n², b = n² + n, k = n, and r = n + 1. […] Every finite affine plane corresponds to a complete set of orthogonal latin squares.”

Links:

Regular polygon.
Polyhedron.
Internal and external angles.
Triangular tiling. Square tiling. Hexagonal tiling.
Semiregular tessellations.
Penrose tiling.
Platonic solid.
Euler’s polyhedron formula.
Prism (geometry). Antiprism.
Fullerene.
Geodesic dome.
Graph theory.
Complete graph. Complete bipartite graph. Cycle graph.
Degree (graph theory).
Handshaking lemma.
Ramsey theory.
Tree (graph theory).
Eulerian and Hamiltonian Graphs. Hamiltonian path.
Icosian game.
Knight’s tour problem.
Planar graph. Euler’s formula for plane graphs.
Kuratowski’s theorem.
Dual graph.
Lo Shu Square.
Melencolia I.
Euler’s Thirty-six officers problem.
Steiner triple system.
Partition (number theory).
Pentagonal number. Pentagonal number theorem.
Ramanujan’s congruences.

August 23, 2018 Posted by | Books, Mathematics, Statistics | Leave a comment

Personal Relationships… (III)

Some more observations from the book below:

Early research on team processes […] noted that for teams to be effective members must minimize “process losses” and maximize “process gains” — that is, identify ways the team can collectively perform at a level that exceeds the average potential of individual members. To do so, teams need to minimize interpersonal disruptions and maximize interpersonal facilitation among its members […] the prevailing view — backed by empirical findings […] — is that positive social exchanges lead to positive outcomes in teams, whereas negative social exchanges lead to negative outcomes in teams. However, this view may be challenged, in that positive exchanges can sometime lead to negative outcomes, whereas negative exchanges may sometime lead to positive outcomes. For example, research on groupthink (Janis, 1972) suggests that highly cohesive groups can make suboptimal decisions. That is, cohesion […] can lead to suboptimal group performance. As another example, under certain circumstances, negative behavior (e.g., verbal attacks or sabotage directed at another member) by one person in the team could lead to a series of positive exchanges in the team. Such subsequent positive exchanges may involve stronger bonding among other members in support of the targeted member, enforcement of more positive and cordial behavioral norms among members, or resolution of possible conflict between members that might have led to this particular negative exchange.”

“[T]here is […] clear merit in considering social exchanges in teams from a social network perspective. Doing so requires the integration of dyadic-level processes with team-level processes. Specifically, to capture the extent to which certain forms of social exchange networks in teams are formed (e.g., friendship, instrumental, or rather adversary ties), researchers must first consider the dyadic exchanges or ties between all members in the team. Doing so can help researchers identify the extent to which certain forms of ties or other social exchanges are dense in the team […] An important question […] is whether the level of social exchange density in the team might moderate the effects of social exchanges, much like social exchanges strength might strengthen the effects of social exchanges […]. For example, might teams with denser social support networks be able to better handle negative social exchanges in the team when such exchanges emerge? […] the effects of differences in centrality and subgroupings or fault lines may vary, depending on certain factors. Specifically, being more central within the team’s network of social exchange may mean that the more central member receives more support from more members, or, rather, that the more central member is engaged in more negative social exchanges with more members. Likewise, subgroupings or fault lines in the team may lead to negative consequences when they are associated with lack of critical communication among members but not when they reflect the correct form of communication network […] social exchange constructs are likely to exert stronger influences on individual team members when exchanges are more highly shared (and reflected in more dense networks). By the same token, individuals are more likely to react to social exchanges in their team when exchanges are directed at them from more team members”.
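
The network terms invoked here (tie density, centrality) have precise graph-theoretic definitions. As a purely illustrative aside of my own — a made-up five-person team, nothing from the book, using the networkx library — density is just the share of possible dyadic ties that actually exist, and degree centrality indexes how many of those ties run through a given member:

import networkx as nx

# A made-up team of five members and their friendship/support ties.
team = nx.Graph()
team.add_edges_from([("Ana", "Ben"), ("Ana", "Cam"), ("Ana", "Dee"),
                     ("Ben", "Cam"), ("Dee", "Eli")])

print(nx.density(team))             # 5 ties out of 10 possible dyads -> 0.5
print(nx.degree_centrality(team))   # Ana, with three ties, is the most central member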

“[C]ustomer relationship management (CRM) has garnered growing interest from both research and practice communities in marketing. The purpose of CRM is “to efficiently and effectively increase the acquisition and retention of profitable customers by selectively initiating, building and maintaining appropriate relationships with them” […] Research has shown that successfully implemented CRM programs result in positive outcomes. In a recent meta-analysis, Palmatier, Dant, Grewal, and Evans (2006) found that investments in relationship marketing have a large, direct effect on seller objective performance. In addition, there has been ample research demonstrating that the effects of relationship marketing on outcomes are mediated by relational constructs that include trust […] and commitment […]. Combining these individual predictors by examining the effects of the global construct of relationship quality is also predictive of positive firm performance […] Meta-analytic findings suggest that CRM is more effective when relationships are built with an individual person rather than a selling firm […] Gutek (1995) proposed a typology of service delivery relationships with customers: encounters, pseudo-relationships, and relationships. […] service encounters usually consist of a solitary interaction between a customer and a service employee, with the expectation that they will not interact in the future. […] in a service encounter, customers do not identify with either the individual service employee with whom they interact or with the service organization. […] An alternate to the service encounter relationship is the pseudorelationship, which arises when a customer interacts with different individual service employees but usually (if not always) from the same service organization […] in pseudo-relationships, the customer identifies with the service of a particular service organization, not with an individual service employee. Finally, personal service relationships emerge when customers have repeated interactions with the same individual service provider […] We argue that the nature of these different types of service relationships […] will influence the types and levels of resources exchanged between the customer and the employee during the service interaction, which may further affect customer and employee outcomes from the service interaction.”

“According to social exchange theory, individuals form relationships and engage in social interactions as a means of obtaining needed resources […]. Within a social exchange relationship, individuals may exchange a variety of resources, both tangible and intangible. In the study of exchange relationships, the content of the exchange, or what resources are being exchanged, is often used as an indicator of the quality of the relationship. On the one hand, the greater the quality of resources exchanged, the better the quality of the relationship; on the other hand, the better the relationship, the more likely these resources are exchanged. Therefore, it is important to understand the specific resources exchanged between the service provider and the customer […] Ferris and colleagues (2009) proposed that several elements of a relationship develop because of social exchange: trust, respect, affect, and support. In an interaction between a service provider and a customer, most of the resources that are exchanged are non-economic in nature […]. Examples include smiling, making eye contact, and speaking in a rhythmic (non-monotone) vocal tone […]. Through these gestures, the service provider and the customer may demonstrate a positive affect toward each other. In addition, greeting courteously, listening attentively to customers, and providing assistance to address customer needs may show the service provider’s respect and support to the customer; likewise, providing necessary information, clarifying their needs and expectations, cooperating with the service provider by following proper instructions, and showing gratitude to the service provider may indicate customers’ respect and support to the service provider. Further, through placing confidence in the fairness and honesty of the customer and accuracy of the information the customer provides, the service provider offers the customer his or her trust; similarly, through placing confidence in the expertise and good intentions of the service provider, the customer offers his or her trust in the service provider’s competence and integrity. Some of the resources exchanged, particularly special treatment, between a service provider and a customer are of both economic and social value. For example, the customer may receive special discounts or priority service, which not only offers the customer economic benefits but also shows how much the service provider values and supports the customer. Similarly, a service provider who receives an extra big tip from a customer is not only better off economically but also gains a sense of recognition and esteem. The more these social resources of trust, respect, affect, and support, as well as special treatment, are mutually exchanged in the provider–customer interactions, the higher the quality of the service interaction for both parties involved. […] we argue that the potential for the exchange of resources […] depends on the nature of the service relationship. In other words, the quantity and quality of resources exchanged in discrete service encounters, pseudo-relationships, and personal service relationships are distinct.”

Though customer–employee exchanges can be highly rewarding for both parties, they can also “turn ugly,” […]. In fact, though negative interactions such as rudeness, verbal abuse, or harassment are rare, employees are more likely to report them from customers than from coworkers or supervisors […] customer–employee exchanges are more likely to involve negative treatment than exchanges with organizational insiders. […] Such negative exchanges result in emotional labor and employee burnout […], covert sabotage of services or goods, or, in atypical cases […] direct retaliation and withdrawal […] Employee–customer exchanges are characterized by a strong power differential […] customers can influence the employees’ desired resources, have more choice over whether to continue the relationship, and can act in negative ways with few consequences (Yagil, 2008) […] One common way to conceptualize the impact of negative customer–employee interactions is Hirschman’s (1970) Exit-Voice-loyalty model. Management can learn of customers’ dissatisfaction by their reduced loyalty, voice, or exit. […] Customers rarely, if ever, see themselves as the source of the problem; in contrast, employees are highly likely to see customers as the reason for a negative exchange […] when employees feel customers’ allocation of resources (e.g., tips, purchases) are not commensurate with the time or energy expended (i.e., distributive injustice) or interpersonal treatment of employees is unjustified or violates norms (i.e., interactional injustice), they feel anger and anxiety […] Given these strong emotional responses, emotional deviance is a possible outcome in the service exchange. Emotional deviance is when employees violate display rules by expressing their negative feelings […] To avoid emotional deviance, service providers engage in emotion regulation […]. In lab and field settings, perceived customer mistreatment is linked to “emotional labor,” specifically regulating emotions by faking or suppressing emotions […] Customer mistreatment — incivility as well as verbal abuse — is well linked to employee burnout, and this effect exists beyond other job stressors (e.g., time pressure, constraints) and beyond mistreatment from supervisors and coworkers”.

Though a customer may complain or yell at an employee in hopes of improving service, most evidence suggests the opposite occurs. First, service providers tend to withdraw from negative or deviant customers (e.g., avoiding eye contact or going to the back room[)] […] Engaging in withdrawal or other counterproductive work behaviors (CWBs) in response to mistreatment can actually reduce burnout […], but the behavior is likely to create another dissatisfied customer or two in the meantime. Second, mistreatment can also result in the employees reduced task performance in the service exchange. Stressful work events redirect attention toward sense making, even when mistreatment is fairly ambiguous or mild […] and thus reduce cognitive performance […]. Regulating those negative emotions also requires attentional resources, and both surface and deep acting reduce memory recall compared with expressing felt emotions […] Moreover, the more that service providers feel exhausted and burned out, the less positive their interpersonal performance […] Finally, perceived incivility or aggressive treatment from customers, and the resulting job dissatisfaction, is a key predictor of intentional customer-directed deviant behavior or service sabotage […] Dissatisfied employees engage in less extra-effort behavior than satisfied employees […]. More insidious, they may engage in intentionally deviant performance that is likely to be covert […] and thus difficult to detect and manage […] Examples of service sabotage include intentionally giving the customer faulty or damaged goods, slowing down service pace, or making “mistakes” in the service transaction, all of which are then linked to lower service performance from the customers’ perspective […]. This creates a feedback loop from employee behaviors to customer perceptions […] Typical human resource practices can help service management […], and practices such as good selection and providing training should reduce the likelihood of service failures and the resulting negative reactions from customers […]. Support from colleagues can help buffer the reactions to customer-instigated mistreatment. Individual perceptions of social support moderate the strain from emotional labor […], and formal interventions increasing individual or unit-level social support reduce strain from emotionally demanding interactions with the public (Le Blanc, Hox, Schaufeli, & Taris, 2007).”

August 19, 2018 Posted by | Books, Psychology

Personal Relationships… (II)

Some more observations from the book below:

Coworker support, or the processes by which coworkers provide assistance with tasks, information, or empathy, has long been considered an important construct in the stress and strain literature […] Social support fits the conservation of resources theory definition of a resource, and it is commonly viewed in that light […]. Support from coworkers helps employees meet the demands of their job, thus making strain less likely […]. In a sense, social support is the currency upon which social exchanges are based. […] The personality of coworkers can play an important role in the development of positive coworker relationships. For example, there is ample evidence that suggests that those higher in conscientiousness and agreeableness are more likely to help coworkers […] Further, similarity in personality between coworkers (e.g., coworkers who are similar in their conscientiousness) draws coworkers together into closer relationships […] cross-sex relationships appear to be managed in a different manner than same-sex relationships. […] members of cross-sex friendships fear the misinterpretation of their relationship by those outside the relationship as a sexual relationship rather than platonic […] a key goal of partners in a cross-sex workplace friendship becomes convincing “third parties that the friendship is authentic.” As a result, cross-sex workplace friends will intentionally limit the intimacy of their communication or limit their non-work-related communication to situations perceived to demonstrate a nonsexual relationship, such as socializing with a cross-sex friend only in the presence of his or her spouse […] demographic dissimilarity in age and race can reduce the likelihood of positive coworker relationships. Chattopadhyay (1999) found that greater dissimilarity among group members on age and race were associated with less collegial relationships among coworkers, which was subsequently associated with less altruistic behavior […] Sias and Cahill (1998) found that a variety of situational characteristics, both inside and outside the workplace setting, helps to predict the development of workplace friendship. For example, they found that factors outside the workplace, such as shared outside interests (e.g., similar hobbies), life events (e.g., having a child), and the simple passing of time can lead to a greater likelihood of a friendship developing. Moreover, internal workplace characteristics, including working together on tasks, physical proximity within the office, a common problem or enemy, and significant amounts of “downtime” that allow for greater socialization, also support friendship development in the workplace (see also Fine, 1986).”

“To build knowledge, employees need to be willing to learn and try new things. Positive relationships are associated with a higher willingness to engage in learning and experimentation […] and, importantly, sharing of that new knowledge to benefit others […] Knowledge sharing is dependent on high-quality communication between relational partners […] Positive relationships are characterized by less defensive communication when relational partners provide feedback (e.g., a suggestion for a better way to accomplish a task; Roberts, 2007). In a coworker context, this would involve accepting help from coworkers without putting up barriers to that help (e.g., nonverbal cues that the help is not appreciated or welcome). […] A recent meta-analysis by Chiaburu and Harrison (2008) found that coworker support was associated with higher performance and higher organizational citizenship behavior (both directed at individuals and directed at the organization broadly). These relationships held whether performance was self- or supervisor related […] Chiaburu and Harrison (2008) also found that coworker support was associated with higher satisfaction and organizational commitment […] Positive coworker exchanges are also associated with lower levels of employee withdrawal, including absenteeism, intention to turnover, and actual turnover […]. To some extent, these relationships may result from norms within the workplace, as coworkers help to set standards for behavior and not “being there” for other coworkers, particularly in situations where the work is highly interdependent, may be considered a significant violation of social norms within a positive working environment […] Perhaps not surprisingly, given the proximity and the amount of time spent with coworkers, workplace friendships will occasionally develop into romances and, potentially, marriages. While still small, the literature on married coworkers suggests that they experience a number of benefits, including lower emotional exhaustion […] and more effective coping strategies […] Married coworkers are an interesting population to examine, largely because their work and family roles are so highly integrated […]. As a result, both resources and demands are more likely to spill over between the work and family role for married coworkers […] Janning and Neely (2006) found that married coworkers were more likely to talk about work-related issues while at home than married couples that had no work-related link.”

Negative exchanges [between coworkers] are characterized by behaviors that are generally undesirable, disrespectful, and harmful to the focal employee or employees. Scholars have found that these negative exchanges influence the same outcomes as positive, supporting exchanges, but in opposite directions. For instance, in their recent meta-analysis of 161 independent studies, Chiaburu and Harrison (2008) found that antagonistic coworker exchanges are negatively related to job satisfaction, organizational commitment, and task performance and positively related to absenteeism, intent to quit, turnover, and counterproductive work behaviors. Unfortunately, despite the recent popularity of the negative exchange research, this literature still lacks construct clarity and definitional precision. […] Because these behaviors have generally referred to acts that impact both coworkers and the organization as a whole, much of this work fails to distinguish social interactions targeting specific individuals within the organization from the nonsocial behaviors explicitly targeting the overall organization. This is unfortunate given that coworker-focused actions and organization-focused actions represent unique dimensions of organizational behavior […] negative exchanges are likely to be preceded by certain antecedents. […] Antecedents may stem from characteristics of the enactor, of the target, or of the context in which the behaviors occur. For example, to the extent that enactors are low on socially relevant personality traits such as agreeableness, emotional stability, or extraversion […], they may be more prone to initiate a negative exchange. Likewise, an enactor who is a high Machiavellian may initiate a negative exchange with the goal of gaining power or establishing control over the target. Antagonistic behaviors may also occur as reciprocation for a previous attack (real or imagined) or as a proactive deterrent against a potential future negative behavior from the target. Similarly, enactors may initiate antagonism based on their perceptions of a coworker’s behavioral characteristics such as suboptimal productivity or weak work ethic. […] The reward system can also play a role as an antecedent condition for antagonism. When coworkers are highly interdependent and receive rewards based on the performance of the group as opposed to each individual, the incidence of antagonism may increase when there is substantial variance in performance among coworkers.”

“[E]mpirical evidence suggests that some people have certain traits that make them more vulnerable to coworker attacks. For example, employees with low self-esteem, low emotional stability, high introversion, or high submissiveness are more inclined to be the recipients of negative coworker behaviors […]. Furthermore, research also shows that people who engage in negative behaviors are likely to also become the targets of these behaviors […] Two of the most commonly studied workplace attitudes are employee job satisfaction […] and affective organizational commitment […] Chiaburu and Harrison (2008) linked general coworker antagonism with both attitudes. Further, the specific behaviors of bullying and incivility have also been found to adversely affect both job satisfaction and organizational commitment […]. A variety of behavioral outcomes have also been identified as outcomes of coworker antagonism. Withdrawal behaviors such as absenteeism, intention to quit, turnover, effort reduction […] are typical responses […] those who have been targeted by aggression are more likely to engage in aggression. […] Feelings of anger, fear, and negative mood have also been shown to mediate the effects of interpersonal mistreatment on behaviors such as withdrawal and turnover […] [T]he combination of enactor and target characteristics is likely to play an antecedent role to these exchanges. For instance, research in the diversity area suggests that people tend to be more comfortable around those with whom they are similar and less comfortable around people with whom they are dissimilar […] there may be a greater incidence of coworker antagonism in more highly diverse settings than in settings characterized by less diversity. […] research has suggested that antagonistic behaviors, while harmful to the target or focal employee, may actually be beneficial to the enactor of the exchange. […] Krischer, Penney, and Hunter (2010) recently found that certain types of counterproductive work behaviors targeting the organization may actually provide employees with a coping mechanism that ultimately reduces their level of emotional exhaustion.”

CWB [counterproductive work behaviors] toward others is composed of volitional acts that harm people at work; in our discussion this would refer to coworkers. […] person-oriented organizational citizenship behaviors (OCB; Organ, 1988) consist of behaviors that help others in the workplace. This might include sharing job knowledge with a coworker or helping a coworker who had too much to do […] Social support is often divided into the two forms of emotional support that helps people deal with negative feelings in response to demanding situations versus instrumental support that provides tangible aid in directly dealing with work demands […] one might expect that instrumental social support would be more strongly related to positive exchanges and positive relationships. […] coworker social support […] has [however] been shown to relate to strains (burnout) in a meta-analysis (Halbesleben, 2006). […] Griffin et al. suggested that low levels of the Five Factor Model […] dimensions of agreeableness, emotional stability, and extraversion might all contribute to negative behaviors. Support can be found for the connection between two of these personality characteristics and CWB. […] Berry, Ones, and Sackett (2007) showed in their meta-analysis that person-focused CWB (they used the term deviance) had significant mean correlations of –.20 with emotional stability and –.36 with agreeableness […] there was a significant relationship with conscientiousness (r = –.19). Thus, agreeable, conscientious, and emotionally stable individuals are less likely to engage in CWB directed toward people and would be expected to have fewer negative exchanges and better relationships with coworkers. […] Halbesleben […] suggests that individuals high on the Five Factor Model […] dimensions of agreeableness and conscientiousness would have more positive exchanges because they are more likely to engage in helping behavior. […] a meta-analysis has shown that both of these personality variables relate to the altruism factor of OCB in the direction expected […]. Specifically, the mean correlations of OCB were .13 for agreeableness and .22 for conscientiousness. Thus, individuals high on these two personality dimensions should have more positive coworker exchanges.”

There is a long history of research in social psychology supporting the idea that people tend to be attracted to, bond, and form friendships with others they believe to be similar […], and this is true whether the similarity is rooted in demographics that are fairly easy to observe […] or in attitudes, beliefs, and values that are more difficult to observe […] Social network scholars refer to this phenomenon as homophily, or the notion that “similarity breeds connection” […] although evidence of homophily has been found to exist in many different types of relationships, including marriage, frequency of communication, and career support, it is perhaps most evident in the formation of friendships […] We extend this line of research and propose that, in a team context that provides opportunities for tie formation, greater levels of perceived similarity among team members will be positively associated with the number of friendship ties among team members. […] A chief function of friendship ties is to provide an outlet for individuals to disclose and manage emotions. […] friendship is understood as a form of support that is not related to work tasks directly; rather, it is a “backstage resource” that allows employees to cope with demands by creating distance between them and their work roles […]. Thus, we propose that friendship network ties will be especially important in providing the type of coping resources that should foster team member well-being. Unfortunately, however, friendship network ties negatively impact team members’ ability to focus on their work tasks, and, in turn, this detracts from taskwork. […] When friends discuss nonwork topics, these individuals will be distracted from work tasks and will be exposed to off-task information exchanged in informal relationships that is irrelevant for performing one’s job. Additionally, distractions can hinder individuals’ ability to become completely engaged in their work (Jett & George).”

Although teams are designed to meet important goals for both companies and their employees, not all team members work together well.
Teams are frequently “cruel to their members” […] through a variety of negative team member exchanges (NTMEs) including mobbing, bullying, incivility, social undermining, and sexual harassment. […] Team membership offers identity […], stability, and security — positive feelings that often elevate work teams to powerful positions in employees’ lives […], so that members are acutely aware of how their teammates treat them. […] NTMEs may evoke stronger emotional, attitudinal, and behavioral consequences than negative encounters with nonteam members. In brief, team members who are targeted for NTMEs are likely to experience profound threats to personal identity, security, and stability […] when a team member targets another for negative interpersonal treatment, the target is likely to perceive that the entire group is behind the attack rather than the specific instigator alone […] Studies have found that NTMEs […] are associated with poor psychological outcomes such as depression; undesirable work attitudes such as low affective commitment, job dissatisfaction, and low organization-based self-esteem; and counterproductive behaviors such as deviance, job withdrawal, and unethical behavior […] Some initial evidence has also indicated that perceptions of rejection mediate the effects of NTMEs on target outcomes […] Perceptions of the comparative treatment of other team members are an important factor in reactions to NTMEs […]. When targets perceive they are “singled out,” NTMEs will cause more pronounced effects […] A significant body of literature has suggested that individuals guide their own behaviors through environmental social cues that they glean from observing the norms and values of others. Thus, the negative effects of NTMEs may extend beyond the specific targets; NTMEs can spread contagiously to other team members […]. The more interdependent the social actors in the team setting, the stronger and more salient will be the social cues […] [There] is evidence that as team members see others enacting NTMEs, their inhibitions against such behaviors are lowered.”

August 13, 2018 Posted by | Books, Psychology

Personal Relationships… (I)

“Across subdisciplines of psychology, research finds that positive, fulfilling, and satisfying relationships contribute to life satisfaction, psychological health, and physical well-being whereas negative, destructive, and unsatisfying relationships have a whole host of detrimental psychological and physical effects. This is because humans possess a fundamental “need to belong” […], characterized by the motivation to form and maintain lasting, positive, and significant relationships with others. The need to belong is fueled by frequent and pleasant relational exchanges with others and thwarted when one feels excluded, rejected, and hurt by others. […] This book uses research and theory on the need to belong as a foundation to explore how five different types of relationships influence employee attitudes, behaviors, and well-being. They include relationships with supervisors, coworkers, team members, customers, and individuals in one’s nonwork life. […] This book is written for a scientist–practitioner audience and targeted to both researchers and human resource management professionals. The contributors highlight both theoretical and practical implications in their respective chapters, with a common emphasis on how to create and sustain an organizational climate that values positive relationships and deters negative interpersonal experiences. Due to the breadth of topics covered in this edited volume, the book is also appropriate for advanced specialty undergraduate or graduate courses on I/O psychology, human resource management, and organizational behavior.”

The kind of stuff covered in books like this one relates closely to social stuff I lack knowledge about and/or am just not very good at handling. I don’t think too highly of this book’s coverage so far, but that’s at least partly due to the kinds of topics covered – it is what it is.

Below I have added some quotes from the first few chapters of the book.

“Work relationships are important to study in that they can exert a strong influence on employees’ attitudes and behaviors […]. The research evidence is robust and consistent; positive relational interactions at work are associated with more favorable work attitudes, less work-related strain, and greater well-being (for reviews see Dutton & Ragins, 2007; Grant & Parker, 2009). On the other side of the social ledger, negative relational interactions at work induce greater strain reactions, create negative affective reactions, and reduce well-being […]. The relationship science literature is clear: social connection has a causal effect on individual health and well-being”.

“[One] way to view relationships is to consider the different dimensions by which relationships vary. An array of dimensions that underlie relationships has been proposed […] Affective tone reflects the degree of positive and negative feelings and emotions within the relationship […] Relationships and groups marked by greater positive affective tone convey more enthusiasm, excitement, and elation for each other, while relationships consisting of more negative affective tone express more fear, distress, and scorn. […] Emotional carrying capacity refers to the extent that the relationship can handle the expression of a full range of negative and positive emotions as well as the quantity of emotion expressed […]. High-quality relationships have the ability to withstand the expression of more emotion and a greater variety of emotion […] Interdependence involves ongoing chains of mutual influence between two people […]. Degree of relationship interdependency is reflected through frequency, strength, and span of influence. […] A high degree of interdependence is commonly thought to be one of the hallmarks of a close relationship […] Intimacy is composed of two fundamental components: self-disclosure and partner responsiveness […]. Responsiveness involves the extent that relationship partners understand, validate, and care for one another. Disclosure refers to verbal communications of personally relevant information, thoughts, and feelings. Divulging more emotionally charged information of a highly personal nature is associated with greater intimacy […]. Disclosure tends to proceed from the superficial to the more intimate and expands in breadth over time […] Power refers to the degree that dominance shapes the relationship […] relationships marked by a power differential are more likely to involve unidirectional interactions. Equivalent power tends to facilitate bidirectional exchanges […] Tensility is the extent that the relationship can bend and endure strain in the face of challenges and setbacks […]. Relationship tensility contributes to psychological safety within the relationship. […] Trust is the belief that relationship partners can be depended upon and care about their partner’s needs and interests […] Relationships that include a great deal of trust are stronger and more resilient. A breach of trust can be one of the most difficult relationship challenges to overcome (Pratt & Dirks, 2007).”

“Relationships are separate entities from the individuals involved in the relationships. The relationship unit (typically a dyad) operates at a different level of analysis from the individual unit. […] For those who conduct research on groups or organizations, it is clear that operations at a group level […] operate at a different level than individual psychology, and it is not merely the aggregate of the individuals involved in the relationship. […] operations at one level (e.g., relationships) can influence behavior at the other level (e.g., individual). […] relationships are best thought of as existing at their own level of analysis, but one that interacts with other levels of analysis, such as individual and group or cultural levels. Relationships cannot be reduced to the actions of the individuals in them or the social structures where they reside but instead interact with the individual and group processes in interesting ways to produce behaviors. […] it is challenging to assess causality via experimental procedures when studying relationships. […] Experimental procedures are crucial for making inferences of causation but are particularly difficult in the case of relationships because it is tough to manipulate many important relationships (e.g., love, marriage, sibling relationships). […] relationships are difficult to observe at the very beginning and at the end, so methods have been developed to facilitate this.”

“[T]he organizational research could […] benefit from the use of theoretical models from the broader relationships literature. […] Interdependence theory is hardly ever seen in organizations. There was some fascinating work in this area a few decades ago, especially in interdependence theory with the investment model […]. This work focused on the precursors of commitment in the workplace and found that, like romantic relationships, the variables of satisfaction, investments, and alternatives played key roles in this process. The result is that when satisfaction and investments are high and alternative opportunities are low, commitment is high. However, it also means that if investments are sufficiently high and alternatives are sufficiently low, then satisfaction can by lowered and commitment will remain high — hence, the investment model is useful for understanding exploitation (Rusbult, Campbell, & Price, 1990).”

“Because they cross formal levels in the organizational hierarchy, supervisory relationships necessarily involve an imbalance in formal power. […] A review by Keltner, Gruenfeld, and Anderson (2003) suggests that power affects how people experience emotions, whether they attend more to rewards or threats, how they process information, and the extent to which they inhibit their behavior around others. The literature clearly suggests that power influences affect, cognition, and behavior in ways that might tend to constrain the formation of positive relationships between individuals with varying degrees of power. […] The power literature is clear in showing that more powerful individuals attend less to their social context, including the people in it, than do less powerful individuals, and the literature suggests that supervisors (compared with subordinates) might tend to place less value on the relationship and be less attuned to their partner’s needs. Yet the formal power accorded to supervisors by the organization — via the supervisory role — is accompanied by the role prescribed responsibility for the performance, motivation, and well-being of subordinates. Thus, the accountability for the formation of a positive supervisory relationship lies more heavily with the supervisor. […] As we examine the qualities of positive supervisory relationships, we make a clear distinction between effective supervisory behaviors and positive supervisory relationships. This is an important distinction […] a large body of leadership research has focused on traits or behaviors of supervisors […] and the affective, motivational, and behavioral responses of employees to those behaviors, with little attention paid to the interactions between the two. There are two practical implications of moving the focus from individuals to relationships: (1) supervisors who use “effective” leadership behaviors may or may not have positive relationships with employees; and (2) supervisors who have a positive relationship with one employee may not have equally positive relationships with other employees, even if they use the same “effective” behaviors.”

“There is a large and well-developed stream of research that focuses explicitly on exchanges between supervisors and the employees who report directly to them. Leader–member exchange theory addresses the various types of functional relationships that can be formed between supervisors and subordinates. A core assumption of LMX theory is that supervisors do not have the time or resources to develop equally positive relationships with all subordinates. Thus, to minimize their investment and yield the greatest results for the organization, supervisors would develop close relationships with only a few subordinates […] These few high-quality relationships are marked by high levels of trust, loyalty, and support, whereas the balance of supervisory relationships is contractual in nature and depends on timely rewards allotted by supervisors in direct exchange for desirable behaviors […] There has been considerable confusion and debate in the literature about LMX theory and the construct validity of LMX measures […] Despite shortcomings in LMX research, it is [however] clear that supervisors form relationships of varying quality with subordinates […] Among factors associated with high LMX are the supervisor’s level of agreeableness […] and the employee’s level of extraversion […], feedback seeking […], and (negatively) negative affectivity […]. Those who perceived similarity in terms of family, money, career strategies, goals in life, education […], and gender […] also reported high LMX. […] Employee LMX is strongly related to attitudes, such as job satisfaction […] Supporting the notion that a positive supervisory relationship is good for employees, the LMX literature is replete with studies linking high LMX with thriving and autonomous motivation. […] The premise of the LMX research is that supervisory resources are limited and high-quality relationships are demanding. Thus, supervisors will be most effective when they allocate their resources efficiently and effectively, forming some high-quality and some instrumental relationships. But the empirical research from the LMX literature provides little (if any) evidence that supervisors who differentiate are more effective”.

The norm of negative reciprocity obligates targets of harm to reciprocate with actions that produce roughly equivalent levels of harm — if someone is unkind to me, I should be approximately as unkind to him or her. […] But the trajectory of negative reciprocity differs in important ways when there are power asymmetries between the parties involved in a negative exchange relationship. The workplace revenge literature suggests that low-power targets of hostility generally withhold retaliatory acts. […] In exchange relationships where one actor is more dependent on the other for valued resources, the dependent/less powerful actor’s ability to satisfy his or her self-interests will be constrained […]. Subordinate targets of supervisor hostility should therefore be less able (than supervisor targets of subordinate hostility) to return the injuries they sustain […] To the extent subordinate contributions to negative exchanges are likely to trigger disciplinary responses by the supervisor target (e.g., reprimands, demotion, transfer, or termination), we can expect that subordinates will withhold negative reciprocity.”

“In the last dozen years, much has been learned about the contributions that supervisors make to negative exchanges with subordinates. […] Several dozen studies have examined the consequences of supervisor contributions to negative exchanges. This work suggests that exposure to supervisor hostility is negatively related to subordinates’ satisfaction with the job […], affective commitment to the organization […], and both in-role and extra-role performance contributions […] and is positively related to subordinates’ psychological distress […], problem drinking […], and unit-level counterproductive work behavior […]. Exposure to supervisor hostility has also been linked with family undermining behavior — employees who are the targets of abusive supervision are more likely to be hostile toward their own family members […] Most studies of supervisor hostility have accounted for moderating factors — individual and situational factors that buffer or exacerbate the effects of exposure. For example, Tepper (2000) found that the injurious effects of supervisor hostility on employees’ attitudes and strain reactions were stronger when subordinates have less job mobility and therefore feel trapped in jobs that deplete their coping resources. […] Duffy, Ganster, Shaw, Johnson, and Pagon (2006) found that the effects of supervisor hostility are more pronounced when subordinates are singled out rather than targeted along with multiple coworkers. […] work suggests that the effects of abusive supervision on subordinates’ strain reactions are weaker when subordinates employ impression management strategies […] and more confrontational (as opposed to avoidant) communication tactics […]. It is clear that not all subordinates react the same way to supervisor hostility and characteristics of subordinates and the context influence the trajectory of subordinates’ responses. […] In a meta-analytic examination of studies of the correlates of supervisor-directed hostility, Herschovis et al. (2007) found support for the idea that subordinates who believe that they have been the target of mistreatment are more likely to lash out at their supervisors. […] perhaps just as interesting as the associations that have been uncovered are several hypothesized associations that have not emerged. Greenberg and Barling (1999) found that supervisor-directed aggression was unrelated to subordinates’ alcohol consumption, history of aggression, and job security. Other work has revealed mixed results for the prediction that subordinate self-esteem will negatively predict supervisor-directed hostility (Inness, Barling, & Turner, 2005). […] Negative exchanges between supervisors and subordinates do not play out in isolation — others observe them and are affected by them. Yet little is known about the affective, cognitive, and behavioral responses of third parties to negative exchanges with supervisors.”

August 8, 2018 Posted by | Books, Psychology

Combinatorics (I)

This book is not a particularly easy read compared to the general format of the series in which it is published, but this is a good thing in my view, as it also means the author managed to go into enough detail in specific contexts to touch upon at least some properties/topics of interest. You don’t need any specific background knowledge to read and understand the book – at least not any sort of background knowledge one would not expect someone who might decide to read a book like this one to already have – but you do need, when reading it, to have the sort of mental surplus that enables you to think carefully about what’s going on and devote a few mental resources to understanding the details.

Some quotes and links from the first half of the book below.

“The subject of combinatorial analysis or combinatorics […] [w]e may loosely describe [as] the branch of mathematics concerned with selecting, arranging, constructing, classifying, and counting or listing things. […] the subject involves finite sets or discrete elements that proceed in separate steps […] rather than continuous systems […] Mathematicians sometimes use the term ‘combinatorics’ to refer to a larger subset of discrete mathematics that includes graph theory. In that case, what is commonly called combinatorics is then referred to as ‘enumeration’. […] Combinatorics now includes a wide range of topics, some of which we cover in this book, such as the geometry of tilings and polyhedra […], the theory of graphs […], magic squares and latin squares […], block designs and finite projective planes […], and partitions of numbers […]. [The] chapters [of the book] are largely independent of each other and can be read in any order. Much of combinatorics originated in recreational pastimes […] in recent years the subject has developed in depth and variety and has increasingly become a part of mainstream mathematics. […] Undoubtedly part of the reason for the subject’s recent importance has arisen from the growth of computer science and the increasing use of algorithmic methods for solving real-world practical problems. These have led to combinatorial applications in a wide range of subject areas, both within and outside mathematics, including network analysis, coding theory, probability, virology, experimental design, scheduling, and operations research.”

“[C]ombinatorics is primarily concerned with four types of problem:
Existence problem: Does □□□ exist?
Construction problem: If □□□ exists, how can we construct it?
Enumeration problem: How many □□□ are there?
Optimization problem: Which □□□ is best? […]
[T]hese types of problems are not unrelated; for example, the easiest way to prove that something exists may be to construct it explicitly.”

“In this book we consider two types of enumeration problem – counting problems in which we simply wish to know the number of objects involved, and listing problems in which we want to list them all explicitly. […] It’s useful to have some basic counting rules […] In what follows, all the sets are finite. […] In general we have the following rule; here, subsets are disjoint if they have no objects in common: Addition rule: To find the number of objects in a set, split the set into disjoint subsets, count the objects in each subset, and add the results. […] Subtraction rule: If a set of objects can be split into two subsets A and B, then the number of objects in B is obtained by subtracting the number of objects in A from the number in the whole set. […] The subtraction rule extends easily to sets that are split into more than two subsets with no elements in common. […] the inclusion-exclusion principle […] extends this simple idea to the situation where the subsets may have objects in common. […] In general we have the following result: Multiplication rule: If a counting problem can be split into stages with several options at each stage, then the total number of possibilities is the product of options at each stage. […] Another useful principle in combinatorics is the following: Correspondence rule: We can solve a counting problem if we can put the objects to be counted in one-to-one correspondence with the objects of a set that we have already counted. […] We conclude this section with one more rule: Division rule: If a set of n elements can be split into m disjoint subsets, each of size k, then m = n / k.”
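
As a quick illustrative aside of my own (not from the book), the inclusion-exclusion principle and the multiplication rule are easy to check against brute-force enumeration in a few lines of Python:

from itertools import product

# Inclusion-exclusion on the integers 1..100 with 'divisible by 3' and 'divisible by 5':
universe = range(1, 101)
div3 = {n for n in universe if n % 3 == 0}
div5 = {n for n in universe if n % 5 == 0}
assert len(div3 | div5) == len(div3) + len(div5) - len(div3 & div5)   # 33 + 20 - 6 = 47

# Multiplication rule: a two-stage choice with 4 options then 3 options gives 4 x 3 outcomes.
assert len(list(product(range(4), range(3)))) == 4 * 3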

“Every algorithm has a running time […] this may be the time that a computer needs to carry out all the necessary calculations, or the actual number of such calculations. Each problem [also] has an input size […] the running time T usually depends on the input size n. Particularly important, because they’re the most efficient, are the polynomial-time algorithms, where the maximum running time is proportional to a power of the input size […] The collection of all polynomial-time algorithms is called P. […] In contrast, there are inefficient algorithms that don’t take polynomial time, such as the exponential-time algorithms […] At this point we introduce NP, the set of ‘non-deterministic polynomial-time problems’. These are problems for which a solution, when given, can be checked in polynomial time. Clearly P is contained in NP, since if a problem can be solved in polynomial time then a solution can certainly be checked in polynomial time – checking solutions is far easier than finding them in the first place. But are they the same? […] Few people believe that the answer is ‘yes’, but no one has been able to prove that P ≠ NP. […] a problem is NP-complete if its solution in polynomial time means that every NP problem can be solved in polynomial time. […] If there were a polynomial algorithm for just one of them, then polynomial algorithms would exist for the whole lot and P would equal NP. On the other hand, if just one of them has no polynomial algorithm, then none of the others could have a polynomial algorithm either, and P would be different from NP.”
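
The ‘checking is easier than finding’ point is easy to make concrete. Below is a toy sketch of my own (not the book’s): verifying a proposed truth assignment against a CNF formula takes time roughly proportional to the size of the formula, whereas the naive search for a satisfying assignment may have to try all 2ⁿ assignments.

from itertools import product

# A CNF formula as a list of clauses; each literal is (variable index, is_negated).
formula = [[(0, False), (1, True)],     # (x0 OR NOT x1)
           [(1, False), (2, False)],    # (x1 OR x2)
           [(0, True), (2, True)]]      # (NOT x0 OR NOT x2)

def check(assignment, formula):
    # Polynomial-time verification of a candidate solution.
    return all(any(assignment[var] != neg for var, neg in clause)
               for clause in formula)

def brute_force(formula, n):
    # Exponential-time search: try up to 2**n assignments.
    for bits in product([False, True], repeat=n):
        if check(bits, formula):
            return bits
    return None

print(brute_force(formula, 3))   # e.g. (False, False, True)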

“In how many different ways can n objects be arranged? […] generally, we have the following result: Arrangements: The number of arrangements of n objects is n x (n – 1) x (n – 2) x … x 3 x 2 x 1. This number is called n factorial and is denoted by n!. […] The word permutation is used in different ways. We’ll use it to mean an ordered selection without repetition, while others may use it to mean an arrangement […] generally, we have the following rule: Ordered selections without repetition (permutations): If we select k items from a set of n objects, and if the selections are ordered and repetition is not allowed, then the number of possible selections is n x (n – 1) x (n – 2) x … x (n – k + 1). We denote this expression by P(n,k). […] Since P(n,n) = n x (n – 1) x (n – 2) x … x 3 x 2 x 1 = n!, an arrangement is a permutation for which k = n. […] generally, we have the following result: P(n,k) = n!/(n-k)!. […] unordered selections without repetition are called combinations, giving rise to the words combinatorial and combinatorics. […] generally, we have the following result: Unordered selections without repetition (combinations): If we select k items from a set of n objects, and if the selections are unordered and repetition is not allowed, then the number of possible selections is P(n,k)/k! = n x (n – 1) x (n – 2) x … x (n – k + 1)/k!. We denote this expression by C(n,k) […] Unordered selections with repetition: If we select k items from a set of n objects, and if the selections are unordered and repetition is allowed, then the number of possible selections is C(n + k – 1, k). […] Combination rule 1: For any numbers k and n with k ≤ n, C(n,k) = C(n,n-k) […] Combination rule 2: For any numbers n and k with k ≤ n, C(n, n-k) = n!/(n-k)!(n-(n-k))! = n!/(n-k)!k! = C(n,k). […] Combination rule 3: For any number n, C(n,0) + C(n,1) + C(n,2) + … + C(n,n-1) + C(n,n) = 2ⁿ.”
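
These formulas are easy to sanity-check against explicit listing; here is a short Python sketch of my own, using the detergent-style numbers n = 7 and k = 3:

from itertools import permutations, combinations
from math import factorial, comb

n, k = 7, 3
P = factorial(n) // factorial(n - k)               # ordered selections without repetition
C = comb(n, k)                                     # unordered selections without repetition

assert P == len(list(permutations(range(n), k)))   # 210
assert C == len(list(combinations(range(n), k)))   # 35
assert C == P // factorial(k)                      # C(n,k) = P(n,k)/k!
assert comb(n, k) == comb(n, n - k)                # combination rules 1 and 2
assert sum(comb(n, i) for i in range(n + 1)) == 2 ** n   # combination rule 3
print(P, C)   # 210 35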

Links:

Tilings/Tessellation.
Knight’s tour.
Seven Bridges of Königsberg problem.
Three utilities problem.
Four color theorem.
Tarry’s algorithm (p.7) (formulated slightly differently in the book, but it’s the same algorithm).
Polyomino.
Arthur Cayley.
Combinatorial principles.
Minimum connector problem.
Travelling salesman problem.
Algorithmic efficiency. Running time/time complexity.
Boolean satisfiability problem. Cook–Levin theorem.
Combination.
Mersenne primes.
Permutation. Factorial. Stirling’s formula.
Birthday problem.
Varāhamihira.
Manhattan distance.
Fibonacci number.
Pascal’s triangle. Binomial coefficient. Binomial theorem.
Pigeonhole principle.
Venn diagram.
Derangement (combinatorial mathematics).
Tower of Hanoi.
Stable marriage problem. Transversal (combinatorics). Hall’s marriage theorem.
Generating function (the topic covered in the book more specifically is related to a symbolic generator of the subsets of a set, but a brief search yielded no good links to this particular topic – US).
Group theory.
Ferdinand Frobenius. Burnside’s lemma.

August 4, 2018 Posted by | Books, Computer science, Mathematics

Words

The words below are mostly words which I encountered while reading the books Pocket oncology, Djinn Rummy, Open Sesame, and The Far Side of the World.

Hematochezia. Neuromyotonia. Anoproctitis. Travelator. Brassica. Physiatry. Clivus. Curettage. Colposcopy. Trachelectomy. Photopheresis. Myelophthisis. Apheresis. Vexilloid. Gonfalon. Eutectic. Clerisy. Frippery. Scrip. Bludge.

Illude. Empyrean. Bonzer. Vol-au-vent. Curule. Entrechat. Winceyette. Attar. Woodbine. Corolla. Rennet. Gusset. Jacquard. Antipodean. Chaplet. Thrush. Coloratura. Biryani. Caff. Scrummy.

Beatific. Forecourt. Hurtle. Freemartin. Coleoptera. Hemipode. Bespeak. Dickey. Bilbo. Hale. Grampus. Calenture. Reeve. Cribbing. Fleam. Totipalmate. Bonito. Blackstrake/Black strake. Shank. Caiman.

Chancery. Acullico. Thole. Aorist. Westing. Scorbutic. Voyol. Fribble. Terraqueous. Oviparous. Specktioneer. Aprication. Phalarope. Lough. Hoy. Reel. Trachyte. Woulding. Anthropophagy. Risorgimento.

 

August 2, 2018 Posted by | Books, Language

Big Data (II)

Below I have added a few observations from the last half of the book, as well as some coverage-related links to topics of interest.

“With big data, using correlation creates […] problems. If we consider a massive dataset, algorithms can be written that, when applied, return a large number of spurious correlations that are totally independent of the views, opinions, or hypotheses of any human being. Problems arise with false correlations — for example, divorce rate and margarine consumption […]. [W]hen the number of variables becomes large, the number of spurious correlations also increases. This is one of the main problems associated with trying to extract useful information from big data, because in doing so, as with mining big data, we are usually looking for patterns and correlations. […] one of the reasons Google Flu Trends failed in its predictions was because of these problems. […] The Google Flu Trends project hinged on the known result that there is a high correlation between the number of flu-related online searches and visits to the doctor’s surgery. If a lot of people in a particular area are searching for flu-related information online, it might then be possible to predict the spread of flu cases to adjoining areas. Since the interest is in finding trends, the data can be anonymized and hence no consent from individuals is required. Using their five-year accumulation of data, which they limited to the same time-frame as the CDC data, and so collected only during the flu season, Google counted the weekly occurrence of each of the fifty million most common search queries covering all subjects. These search query counts were then compared with the CDC flu data, and those with the highest correlation were used in the flu trends model. […] The historical data provided a baseline from which to assess current flu activity on the chosen search terms and by comparing the new real-time data against this, a classification on a scale from 1 to 5, where 5 signified the most severe, was established. Used in the 2011–12 and 2012–13 US flu seasons, Google’s big data algorithm famously failed to deliver. After the flu season ended, its predictions were checked against the CDC’s actual data. […] the Google Flu Trends algorithm over-predicted the number of flu cases by at least 50 per cent during the years it was used.” [For more details on why blind/mindless hypothesis testing/p-value hunting on big data sets is usually a terrible idea, see e.g. Burnham & Anderson, US]
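
The multiple-comparisons point is easy to demonstrate by simulation. The sketch below (my own toy example, not Google’s procedure; all numbers are simulated) correlates one random ‘flu’ series with ten thousand equally random ‘search count’ series; even though nothing is related to anything, dozens of series typically show correlations strong enough to look like promising predictors.

import numpy as np

rng = np.random.default_rng(0)
n_weeks, n_series = 50, 10_000

target = rng.normal(size=n_weeks)                  # stand-in for weekly flu incidence
noise = rng.normal(size=(n_series, n_weeks))       # unrelated random 'search count' series

corrs = np.array([np.corrcoef(target, s)[0, 1] for s in noise])
print(int((np.abs(corrs) > 0.4).sum()))   # typically dozens of |r| > 0.4 'hits' by chance alone
print(float(np.abs(corrs).max()))         # the single best spurious correlate looks impressive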

“The data Google used [in the Google Flu Trends algorithm], collected selectively from search engine queries, produced results [with] obvious bias […] for example by eliminating everyone who does not use a computer and everyone using other search engines. Another issue that may have led to poor results was that customers searching Google on ‘flu symptoms’ would probably have explored a number of flu-related websites, resulting in their being counted several times and thus inflating the numbers. In addition, search behaviour changes over time, especially during an epidemic, and this should be taken into account by updating the model regularly. Once errors in prediction start to occur, they tend to cascade, which is what happened with the Google Flu Trends predictions: one week’s errors were passed along to the next week. […] [Similarly,] the Ebola prediction figures published by WHO [during the West African Ebola virus epidemic] were over 50 per cent higher than the cases actually recorded. The problems with both the Google Flu Trends and Ebola analyses were similar in that the prediction algorithms used were based only on initial data and did not take into account changing conditions. Essentially, each of these models assumed that the number of cases would continue to grow at the same rate in the future as they had before the medical intervention began. Clearly, medical and public health measures could be expected to have positive effects and these had not been integrated into the model.”

“Every time a patient visits a doctor’s office or hospital, electronic data is routinely collected. Electronic health records constitute legal documentation of a patient’s healthcare contacts: details such as patient history, medications prescribed, and test results are recorded. Electronic health records may also include sensor data such as Magnetic Resonance Imaging (MRI) scans. The data may be anonymized and pooled for research purposes. It is estimated that in 2015, an average hospital in the USA will store over 600 Tb of data, most of which is unstructured. […] Typically, the human genome contains about 20,000 genes and mapping such a genome requires about 100 Gb of data. […] The interdisciplinary field of bioinformatics has flourished as a consequence of the need to manage and analyze the big data generated by genomics. […] Cloud-based systems give authorized users access to data anywhere in the world. To take just one example, the NHS plans to make patient records available via smartphone by 2018. These developments will inevitably generate more attacks on the data they employ, and considerable effort will need to be expended in the development of effective security methods to ensure the safety of that data. […] There is no absolute certainty on the Web. Since e-documents can be modified and updated without the author’s knowledge, they can easily be manipulated. This situation could be extremely damaging in many different situations, such as the possibility of someone tampering with electronic medical records. […] [S]ome of the problems facing big data systems [include] ensuring they actually work as intended, [that they] can be fixed when they break down, and [that they] are tamper-proof and accessible only to those with the correct authorization.”

“With transactions being made through sales and auction bids, eBay generates approximately 50 Tb of data a day, collected from every search, sale, and bid made on their website by a claimed 160 million active users in 190 countries. […] Amazon collects vast amounts of data including addresses, payment information, and details of everything an individual has ever looked at or bought from them. Amazon uses its data in order to encourage the customer to spend more money with them by trying to do as much of the customer’s market research as possible. In the case of books, for example, Amazon needs to provide not only a huge selection but to focus recommendations on the individual customer. […] Many customers use smartphones with GPS capability, allowing Amazon to collect data showing time and location. This substantial amount of data is used to construct customer profiles allowing similar individuals and their recommendations to be matched. Since 2013, Amazon has been selling customer metadata to advertisers in order to promote their Web services operation […] Netflix collects and uses huge amounts of data to improve customer service, such as offering recommendations to individual customers while endeavouring to provide reliable streaming of its movies. Recommendation is at the heart of the Netflix business model and most of its business is driven by the data-based recommendations it is able to offer customers. Netflix now tracks what you watch, what you browse, what you search for, and the day and time you do all these things. It also records whether you are using an iPad, TV, or something else. […] As well as collecting search data and star ratings, Netflix can now keep records on how often users pause or fast forward, and whether or not they finish watching each programme they start. They also monitor how, when, and where they watched the programme, and a host of other variables too numerous to mention.”
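
The book does not go into the mechanics of the recommendation engines themselves, but a bare-bones sketch of the ‘match similar individuals’ idea might look like the snippet below (the ratings matrix and all numbers are invented, and real systems are of course far more elaborate):

```python
import numpy as np

# toy user-item ratings matrix (rows: users, columns: items); 0 means 'not rated'
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 2, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def recommend(user, k=1):
    # how similar is this user to every other user?
    sims = np.array([cosine(ratings[user], ratings[other]) if other != user else 0.0
                     for other in range(len(ratings))])
    # predicted interest in each item: similarity-weighted average of the others' ratings
    scores = sims @ ratings / (sims.sum() or 1.0)
    scores[ratings[user] > 0] = -np.inf   # don't recommend items already rated
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))   # -> [2]: driven mostly by the ratings of the most similar user
```

The predicted interest in an unrated item is simply a similarity-weighted average of other users’ ratings of it; treating ‘not rated’ as a zero rating is one of several crude simplifications in this sketch.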

“Data science is becoming a popular study option in universities but graduates so far have been unable to meet the demands of commerce and industry, where positions in data science offer high salaries to experienced applicants. Big data for commercial enterprises is concerned with profit, and disillusionment will set in quickly if an over-burdened data analyst with insufficient experience fails to deliver the expected positive results. All too often, firms are asking for a one-size-fits-all model of data scientist who is expected to be competent in everything from statistical analysis to data storage and data security.”

“In December 2016, Yahoo! announced that a data breach involving over one billion user accounts had occurred in August 2013. Dubbed the biggest ever cyber theft of personal data, or at least the biggest ever divulged by any company, thieves apparently used forged cookies, which allowed them access to accounts without the need for passwords. This followed the disclosure of an attack on Yahoo! in 2014, when 500 million accounts were compromised. […] The list of big data security breaches increases almost daily. Data theft, data ransom, and data sabotage are major concerns in a data-centric world. There have been many scares regarding the security and ownership of personal digital data. Before the digital age we used to keep photos in albums and negatives were our backup. After that, we stored our photos electronically on a hard-drive in our computer. This could possibly fail and we were wise to have back-ups but at least the files were not publicly accessible. Many of us now store data in the Cloud. […] If you store all your photos in the Cloud, it’s highly unlikely with today’s sophisticated systems that you would lose them. On the other hand, if you want to delete something, maybe a photo or video, it becomes difficult to ensure all copies have been deleted. Essentially you have to rely on your provider to do this. Another important issue is controlling who has access to the photos and other data you have uploaded to the Cloud. […] although the Internet and Cloud-based computing are generally thought of as wireless, they are anything but; data is transmitted through fibre-optic cables laid under the oceans. Nearly all digital communication between continents is transmitted in this way. My email will be sent via transatlantic fibre-optic cables, even if I am using a Cloud computing service. The Cloud, an attractive buzz word, conjures up images of satellites sending data across the world, but in reality Cloud services are firmly rooted in a distributed network of data centres providing Internet access, largely through cables. Fibre-optic cables provide the fastest means of data transmission and so are generally preferable to satellites.”

Links:

Health care informatics.
Electronic health records.
European influenza surveillance network.
Overfitting.
Public Health Emergency of International Concern.
Virtual Physiological Human project.
Watson (computer).
Natural language processing.
Anthem medical data breach.
Electronic delay storage automatic calculator (EDSAC). LEO (computer). ICL (International Computers Limited).
E-commerce. Online shopping.
Pay-per-click advertising model. Google AdWords. Click fraud. Targeted advertising.
Recommender system. Collaborative filtering.
Anticipatory shipping.
BlackPOS Malware.
Data Encryption Standard algorithm. EFF DES cracker.
Advanced Encryption Standard.
Tempora. PRISM (surveillance program). Edward Snowden. WikiLeaks. Tor (anonymity network). Silk Road (marketplace). Deep web. Internet of Things.
Songdo International Business District. Smart City.
United Nations Global Pulse.

July 19, 2018 Posted by | Books, Computer science, Cryptography, Data, Engineering, Epidemiology, Statistics | Leave a comment

Developmental Biology (II)

Below I have included some quotes from the middle chapters of the book and some links related to the topic coverage. As I pointed out earlier, this is an excellent book on these topics.

“Germ cells have three key functions: the preservation of the genetic integrity of the germline; the generation of genetic diversity; and the transmission of genetic information to the next generation. In all but the simplest animals, the cells of the germline are the only cells that can give rise to a new organism. So, unlike body cells, which eventually all die, germ cells in a sense outlive the bodies that produced them. They are, therefore, very special cells […] In order that the number of chromosomes is kept constant from generation to generation, germ cells are produced by a specialized type of cell division, called meiosis, which halves the chromosome number. Unless this reduction by meiosis occurred, the number of chromosomes would double each time the egg was fertilized. Germ cells thus contain a single copy of each chromosome and are called haploid, whereas germ-cell precursor cells and the other somatic cells of the body contain two copies and are called diploid. The halving of chromosome number at meiosis means that when egg and sperm come together at fertilization, the diploid number of chromosomes is restored. […] An important property of germ cells is that they remain pluripotent—able to give rise to all the different types of cells in the body. Nevertheless, eggs and sperm in mammals have certain genes differentially switched off during germ-cell development by a process known as genomic imprinting […] Certain genes in eggs and sperm are imprinted, so that the activity of the same gene is different depending on whether it is of maternal or paternal origin. Improper imprinting can lead to developmental abnormalities in humans. At least 80 imprinted genes have been identified in mammals, and some are involved in growth control. […] A number of developmental disorders in humans are associated with imprinted genes. Infants with Prader-Willi syndrome fail to thrive and later can become extremely obese; they also show mental retardation and mental disturbances […] Angelman syndrome results in severe motor and mental retardation. Beckwith-Wiedemann syndrome is due to a generalized disruption of imprinting on a region of chromosome 7 and leads to excessive foetal overgrowth and an increased predisposition to cancer.”

“Sperm are motile cells, typically designed for activating the egg and delivering their nucleus into the egg cytoplasm. They essentially consist of a nucleus, mitochondria to provide an energy source, and a flagellum for movement. The sperm contributes virtually nothing to the organism other than its chromosomes. In mammals, sperm mitochondria are destroyed following fertilization, and so all mitochondria in the animal are of maternal origin. […] Different organisms have different ways of ensuring fertilization by only one sperm. […] Early development is similar in both male and female mammalian embryos, with sexual differences only appearing at later stages. The development of the individual as either male or female is genetically fixed at fertilization by the chromosomal content of the egg and sperm that fuse to form the fertilized egg. […] Each sperm carries either an X or Y chromosome, while the egg has an X. The genetic sex of a mammal is thus established at the moment of conception, when the sperm introduces either an X or a Y chromosome into the egg. […] In the absence of a Y chromosome, the default development of tissues is along the female pathway. […] Unlike animals, plants do not set aside germ cells in the embryo and germ cells are only specified when a flower develops. Any meristem cell can, in principle, give rise to a germ cell of either sex, and there are no sex chromosomes. The great majority of flowering plants give rise to flowers that contain both male and female sexual organs, in which meiosis occurs. The male sexual organs are the stamens; these produce pollen, which contains the male gamete nuclei corresponding to the sperm of animals. At the centre of the flower are the female sex organs, which consist of an ovary of two carpels, which contain the ovules. Each ovule contains an egg cell.”

“The character of specialized cells such as nerve, muscle, or skin is the result of a particular pattern of gene activity that determines which proteins are synthesized. There are more than 200 clearly recognizable differentiated cell types in mammals. How these particular patterns of gene activity develop is a central question in cell differentiation. Gene expression is under a complex set of controls that include the actions of transcription factors, and chemical modification of DNA. External signals play a key role in differentiation by triggering intracellular signalling pathways that affect gene expression. […] the central feature of cell differentiation is a change in gene expression, which brings about a change in the proteins in the cells. The genes expressed in a differentiated cell include not only those for a wide range of ‘housekeeping’ proteins, such as the enzymes involved in energy metabolism, but also genes encoding cell-specific proteins that characterize a fully differentiated cell: hemoglobin in red blood cells, keratin in skin epidermal cells, and muscle-specific actin and myosin protein filaments in muscle. […] several thousand different genes are active in any given cell in the embryo at any one time, though only a small number of these may be involved in specifying cell fate or differentiation. […] Cell differentiation is known to be controlled by a wide range of external signals but it is important to remember that, while these external signals are often referred to as being ‘instructive’, they are ‘selective’, in the sense that the number of developmental options open to a cell at any given time is limited. These options are set by the cell’s internal state which, in turn, reflects its developmental history. External signals cannot, for example, convert an endodermal cell into a muscle or nerve cell. Most of the molecules that act as developmentally important signals between cells during development are proteins or peptides, and their effect is usually to induce a change in gene expression. […] The same external signals can be used again and again with different effects because the cells’ histories are different. […] At least 1,000 different transcription factors are encoded in the genomes of the fly and the nematode, and as many as 3,000 in the human genome. On average, around five different transcription factors act together at a control region […] In general, it can be assumed that activation of each gene involves a unique combination of transcription factors.”

“Stem cells involve some special features in relation to differentiation. A single stem cell can divide to produce two daughter cells, one of which remains a stem cell while the other gives rise to a lineage of differentiating cells. This occurs in our skin and gut all the time and also in the production of blood cells. It also occurs in the embryo. […] Embryonic stem (ES) cells from the inner cell mass of the early mammalian embryo when the primitive streak forms, can, in culture, differentiate into a wide variety of cell types, and have potential uses in regenerative medicine. […] it is now possible to make adult body cells into stem cells, which has important implications for regenerative medicine. […] The goal of regenerative medicine is to restore the structure and function of damaged or diseased tissues. As stem cells can proliferate and differentiate into a wide range of cell types, they are strong candidates for use in cell-replacement therapy, the restoration of tissue function by the introduction of new healthy cells. […] The generation of insulin-producing pancreatic β cells from ES cells to replace those destroyed in type 1 diabetes is a prime medical target. Treatments that direct the differentiation of ES cells towards making endoderm derivatives such as pancreatic cells have been particularly difficult to find. […] The neurodegenerative Parkinson disease is another medical target. […] To generate […] stem cells of the patient’s own tissue type would be a great advantage, and the recent development of induced pluripotent stem cells (iPS cells) offers […] exciting new opportunities. […] There is [however] risk of tumour induction in patients undergoing cell-replacement therapy with ES cells or iPS cells; undifferentiated pluripotent cells introduced into the patient could cause tumours. Only stringent selection procedures that ensure no undifferentiated cells are present in the transplanted cell population will overcome this problem. And it is not yet clear how stable differentiated ES cells and iPS cells will be in the long term.”

“In general, the success rate of cloning by body-cell nuclear transfer in mammals is low, and the reasons for this are not yet well understood. […] Most cloned mammals derived from nuclear transplantation are usually abnormal in some way. The cause of failure is incomplete reprogramming of the donor nucleus to remove all the earlier modifications. A related cause of abnormality may be that the reprogrammed genes have not gone through the normal imprinting process that occurs during germ-cell development, where different genes are silenced in the male and female parents. The abnormalities in adults that do develop from cloned embryos include early death, limb deformities and hypertension in cattle, and immune impairment in mice. All these defects are thought to be due to abnormalities of gene expression that arise from the cloning process. Studies have shown that some 5% of the genes in cloned mice are not correctly expressed and that almost half of the imprinted genes are incorrectly expressed.”

“Organ development involves large numbers of genes and, because of this complexity, general principles can be quite difficult to distinguish. Nevertheless, many of the mechanisms used in organogenesis are similar to those of earlier development, and certain signals are used again and again. Pattern formation in development in a variety of organs can be specified by position information, which is specified by a gradient in some property. […] Not surprisingly, the vascular system, including blood vessels and blood cells, is among the first organ systems to develop in vertebrate embryos, so that oxygen and nutrients can be delivered to the rapidly developing tissues. The defining cell type of the vascular system is the endothelial cell, which forms the lining of the entire circulatory system, including the heart, veins, and arteries. Blood vessels are formed by endothelial cells and these vessels are then covered by connective tissue and smooth muscle cells. Arteries and veins are defined by the direction of blood flow as well as by structural and functional differences; the cells are specified as arterial or venous before they form blood vessels but they can switch identity. […] Differentiation of the vascular cells requires the growth factor VEGF (vascular endothelial growth factor) and its receptors, and VEGF stimulates their proliferation. Expression of the Vegf gene is induced by lack of oxygen and thus an active organ using up oxygen promotes its own vascularization. New blood capillaries are formed by sprouting from pre-existing blood vessels and proliferation of cells at the tip of the sprout. […] During their development, blood vessels navigate along specific paths towards their targets […]. Many solid tumours produce VEGF and other growth factors that stimulate vascular development and so promote the tumour’s growth, and blocking new vessel formation is thus a means of reducing tumour growth. […] In humans, about 1 in 100 live-born infants has some congenital heart malformation, while in utero, heart malformation leading to death of the embryo occurs in between 5 and 10% of conceptions.”

“Separation of the digits […] is due to the programmed cell death of the cells between these digits’ cartilaginous elements. The webbed feet of ducks and other waterfowl are simply the result of less cell death between the digits. […] the death of cells between the digits is essential for separating the digits. The development of the vertebrate nervous system also involves the death of large numbers of neurons.”

Links:

Budding.
Gonad.
Down Syndrome.
Fertilization. In vitro fertilisation. Preimplantation genetic diagnosis.
SRY gene.
X-inactivation. Dosage compensation.
Cellular differentiation.
MyoD.
Signal transduction. Enhancer (genetics).
Epigenetics.
Hematopoiesis. Hematopoietic stem cell transplantation. Hemoglobin. Sickle cell anemia.
Skin. Dermis. Fibroblast. Epidermis.
Skeletal muscle. Myogenesis. Myoblast.
Cloning. Dolly.
Organogenesis.
Limb development. Limb bud. Progress zone model. Apical ectodermal ridge. Polarizing region/Zone of polarizing activity. Sonic hedgehog.
Imaginal disc. Pax6. Aniridia. Neural tube.
Branching morphogenesis.
Pistil.
ABC model of flower development.

July 16, 2018 Posted by | Biology, Books, Botany, Cancer/oncology, Diabetes, Genetics, Medicine, Molecular biology, Ophthalmology | Leave a comment

Big Data (I?)

Below I have added a few observations from the first half of the book, as well as some links related to the topic coverage.

“The data we derive from the Web can be classified as structured, unstructured, or semi-structured. […] Carefully structured and tabulated data is relatively easy to manage and is amenable to statistical analysis, indeed until recently statistical analysis methods could be applied only to structured data. In contrast, unstructured data is not so easily categorized, and includes photos, videos, tweets, and word-processing documents. Once the use of the World Wide Web became widespread, it transpired that many such potential sources of information remained inaccessible because they lacked the structure needed for existing analytical techniques to be applied. However, by identifying key features, data that appears at first sight to be unstructured may not be completely without structure. Emails, for example, contain structured metadata in the heading as well as the actual unstructured message […] and so may be classified as semi-structured data. Metadata tags, which are essentially descriptive references, can be used to add some structure to unstructured data. […] Dealing with unstructured data is challenging: since it cannot be stored in traditional databases or spreadsheets, special tools have had to be developed to extract useful information. […] Approximately 80 per cent of the world’s data is unstructured in the form of text, photos, and images, and so is not amenable to the traditional methods of structured data analysis. ‘Big data’ is now used to refer not just to the total amount of data generated and stored electronically, but also to specific datasets that are large in both size and complexity, with which new algorithmic techniques are required in order to extract useful information from them.”
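
A small illustration of the email example from the quote above (the message below is made up): Python’s standard email module pulls the structured header metadata out of an otherwise unstructured blob of text.

```python
from email import message_from_string

raw = """\
From: alice@example.com
To: bob@example.com
Subject: Quarterly figures
Date: Mon, 2 Jul 2018 09:15:00 +0000

Hi Bob - the body is free-form text, and nothing in it tells a parser which
parts matter. This is the unstructured half of the message.
"""

msg = message_from_string(raw)

# the structured half: named header fields that could go straight into a database table
metadata = {field: msg[field] for field in ("From", "To", "Subject", "Date")}
print(metadata)

# the unstructured half: the free-text payload, which needs text-mining tools instead
print(msg.get_payload())
```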

“In the digital age we are no longer entirely dependent on samples, since we can often collect all the data we need on entire populations. But the size of these increasingly large sets of data cannot alone provide a definition for the term ‘big data’ — we must include complexity in any definition. Instead of carefully constructed samples of ‘small data’ we are now dealing with huge amounts of data that has not been collected with any specific questions in mind and is often unstructured. In order to characterize the key features that make data big and move towards a definition of the term, Doug Laney, writing in 2001, proposed using the three ‘v’s: volume, variety, and velocity. […] ‘Volume’ refers to the amount of electronic data that is now collected and stored, which is growing at an ever-increasing rate. Big data is big, but how big? […] Generally, we can say the volume criterion is met if the dataset is such that we cannot collect, store, and analyse it using traditional computing and statistical methods. […] Although a great variety of data [exists], ultimately it can all be classified as structured, unstructured, or semi-structured. […] Velocity is necessarily connected with volume: the faster the data is generated, the more there is. […] Velocity also refers to the speed at which data is electronically processed. For example, sensor data, such as that generated by an autonomous car, is necessarily generated in real time. If the car is to work reliably, the data […] must be analysed very quickly […] Variability may be considered as an additional dimension of the velocity concept, referring to the changing rates in flow of data […] computer systems are more prone to failure [during peak flow periods]. […] As well as the original three ‘v’s suggested by Laney, we may add ‘veracity’ as a fourth. Veracity refers to the quality of the data being collected. […] Taken together, the four main characteristics of big data – volume, variety, velocity, and veracity – present a considerable challenge in data management.” [As regular readers of this blog might be aware, not everybody would agree with the author here about the inclusion of veracity as a defining feature of big data – “Many have suggested that there are more V’s that are important to the big data problem [than volume, variety & velocity] such as veracity and value (IEEE BigData 2013). Veracity refers to the trustworthiness of the data, and value refers to the value that the data adds to creating knowledge about a topic or situation. While we agree that these are important data characteristics, we do not see these as key features that distinguish big data from regular data. It is important to evaluate the veracity and value of all data, both big and small.” (Knoth & Schmid)]

“Anyone who uses a personal computer, laptop, or smartphone accesses data stored in a database. Structured data, such as bank statements and electronic address books, are stored in a relational database. In order to manage all this structured data, a relational database management system (RDBMS) is used to create, maintain, access, and manipulate the data. […] Once […] the database [has been] constructed we can populate it with data and interrogate it using structured query language (SQL). […] An important aspect of relational database design involves a process called normalization which includes reducing data duplication to a minimum and hence reduces storage requirements. This allows speedier queries, but even so as the volume of data increases the performance of these traditional databases decreases. The problem is one of scalability. Since relational databases are essentially designed to run on just one server, as more and more data is added they become slow and unreliable. The only way to achieve scalability is to add more computing power, which has its limits. This is known as vertical scalability. So although structured data is usually stored and managed in an RDBMS, when the data is big, say in terabytes or petabytes and beyond, the RDBMS no longer works efficiently, even for structured data. An important feature of relational databases and a good reason for continuing to use them is that they conform to the following group of properties: atomicity, consistency, isolation, and durability, usually known as ACID. Atomicity ensures that incomplete transactions cannot update the database; consistency excludes invalid data; isolation ensures one transaction does not interfere with another transaction; and durability means that the database must update before the next transaction is carried out. All these are desirable properties but storing and accessing big data, which is mostly unstructured, requires a different approach. […] given the current data explosion there has been intensive research into new storage and management techniques. In order to store these massive datasets, data is distributed across servers. As the number of servers involved increases, the chance of failure at some point also increases, so it is important to have multiple, reliably identical copies of the same data, each stored on a different server. Indeed, with the massive amounts of data now being processed, systems failure is taken as inevitable and so ways of coping with this are built into the methods of storage.”
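
A minimal sketch of the relational setup described above, using Python’s built-in sqlite3 module (the schema and the figures are invented for the example): a small normalized schema, a committed transaction, and an SQL query joining the two tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # a throwaway in-memory relational database
cur = conn.cursor()

# a small normalized schema: customer details are stored once and referenced by key,
# rather than being duplicated on every transaction row
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id),
    amount REAL
);
""")

cur.execute("INSERT INTO customers (id, name) VALUES (1, 'A. Smith')")
cur.executemany("INSERT INTO transactions (customer_id, amount) VALUES (?, ?)",
                [(1, 120.50), (1, -40.00)])
conn.commit()   # atomicity: either all of the above is stored, or none of it is

# structured query language: join the two tables and aggregate
cur.execute("""
    SELECT c.name, SUM(t.amount)
    FROM customers c JOIN transactions t ON t.customer_id = c.id
    GROUP BY c.name
""")
print(cur.fetchall())   # [('A. Smith', 80.5)]
```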

“A distributed file system (DFS) provides effective and reliable storage for big data across many computers. […] Hadoop DFS [is] one of the most popular DFS […] When we use Hadoop DFS, the data is distributed across many nodes, often tens of thousands of them, physically situated in data centres around the world. […] The NameNode deals with all requests coming in from a client computer; it distributes storage space, and keeps track of storage availability and data location. It also manages all the basic file operations (e.g. opening and closing files) and controls data access by client computers. The DataNodes are responsible for actually storing the data and in order to do so, create, delete, and replicate blocks as necessary. Data replication is an essential feature of the Hadoop DFS. […] It is important that several copies of each block are stored so that if a DataNode fails, other nodes are able to take over and continue with processing tasks without loss of data. […] Data is written to a DataNode only once but will be read by an application many times. […] One of the functions of the NameNode is to determine the best DataNode to use given the current usage, ensuring fast data access and processing. The client computer then accesses the data block from the chosen node. DataNodes are added as and when required by the increased storage requirements, a feature known as horizontal scalability. One of the main advantages of Hadoop DFS over a relational database is that you can collect vast amounts of data, keep adding to it, and, at that time, not yet have any clear idea of what you want to use it for. […] structured data with identifiable rows and columns can be easily stored in a RDBMS while unstructured data can be stored cheaply and readily using a DFS.”
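
The snippet below is a toy model of the block-splitting and replication idea, not Hadoop’s actual API: a ‘file’ is cut into blocks, each block is copied to several ‘DataNodes’, and a read still succeeds when some of the nodes have failed. The block size, replication factor, and node names are arbitrary choices for the illustration.

```python
import random

BLOCK_SIZE = 8      # bytes per block -- tiny for illustration (HDFS blocks are e.g. 128 MB)
REPLICATION = 3     # copies of every block, each kept on a different node
nodes = {f"datanode-{i}": {} for i in range(5)}   # node name -> {block_id: block bytes}

def store(file_bytes):
    """Split a file into blocks and place each block on REPLICATION distinct nodes."""
    placement = {}   # the 'NameNode' metadata: block_id -> names of the nodes holding it
    blocks = [file_bytes[i:i + BLOCK_SIZE] for i in range(0, len(file_bytes), BLOCK_SIZE)]
    for block_id, block in enumerate(blocks):
        chosen = random.sample(sorted(nodes), REPLICATION)
        for name in chosen:
            nodes[name][block_id] = block
        placement[block_id] = chosen
    return placement

def read(placement, failed=frozenset()):
    """Reassemble the file, ignoring failed nodes -- any surviving replica will do."""
    out = b""
    for block_id in sorted(placement):
        live = [n for n in placement[block_id] if n not in failed]
        out += nodes[live[0]][block_id]
    return out

meta = store(b"data distributed across many nodes, with redundant copies of every block")
print(read(meta, failed={"datanode-0", "datanode-1"}))   # still fully recoverable
```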

“NoSQL is the generic name used to refer to non-relational databases and stands for Not only SQL. […] The non-relational model has some features that are necessary in the management of big data, namely scalability, availability, and performance. With a relational database you cannot keep scaling vertically without loss of function, whereas with NoSQL you scale horizontally and this enables performance to be maintained. […] Within the context of a distributed database system, consistency refers to the requirement that all copies of data should be the same across nodes. […] Availability requires that if a node fails, other nodes still function […] Data, and hence DataNodes, are distributed across physically separate servers and communication between these machines will sometimes fail. When this occurs it is called a network partition. Partition tolerance requires that the system continues to operate even if this happens. In essence, what the CAP [Consistency, Availability, Partition Tolerance] Theorem states is that for any distributed computer system, where the data is shared, only two of these three criteria can be met. There are therefore three possibilities; the system must be: consistent and available, consistent and partition tolerant, or partition tolerant and available. Notice that since in a RDMS the network is not partitioned, only consistency and availability would be of concern and the RDMS model meets both of these criteria. In NoSQL, since we necessarily have partitioning, we have to choose between consistency and availability. By sacrificing availability, we are able to wait until consistency is achieved. If we choose instead to sacrifice consistency it follows that sometimes the data will differ from server to server. The somewhat contrived acronym BASE (Basically Available, Soft, and Eventually consistent) is used as a convenient way of describing this situation. BASE appears to have been chosen in contrast to the ACID properties of relational databases. ‘Soft’ in this context refers to the flexibility in the consistency requirement. The aim is not to abandon any one of these criteria but to find a way of optimizing all three, essentially a compromise. […] The name NoSQL derives from the fact that SQL cannot be used to query these databases. […] There are four main types of non-relational or NoSQL database: key-value, column-based, document, and graph – all useful for storing large amounts of structured and semi-structured data. […] Currently, an approach called NewSQL is finding a niche. […] the aim of this latent technology is to solve the scalability problems associated with the relational model, making it more useable for big data.”
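
As a deliberately simplified illustration of the consistency-versus-availability trade-off (a toy of my own, not any real NoSQL system), the sketch below keeps two replicas of a key-value store; once a ‘partition’ halts replication, reads from the secondary replica either return possibly stale data or refuse to answer, depending on which property is preferred.

```python
class TinyReplicatedStore:
    """Two replicas of a key-value store; a network partition halts replication."""

    def __init__(self, prefer="availability"):      # or "consistency"
        self.replicas = [{}, {}]
        self.partitioned = False
        self.prefer = prefer

    def write(self, key, value):
        self.replicas[0][key] = value        # the primary always takes the write
        if not self.partitioned:
            self.replicas[1][key] = value    # replication only happens while connected

    def read(self, key, replica=1):
        if self.partitioned and self.prefer == "consistency":
            # refuse to answer rather than risk serving stale data
            raise RuntimeError("replica unavailable during partition")
        return self.replicas[replica].get(key)   # may be stale if partitioned

store = TinyReplicatedStore(prefer="availability")
store.write("user:42", "v1")
store.partitioned = True        # the network splits
store.write("user:42", "v2")    # only the primary sees this update
print(store.read("user:42"))    # 'v1' -- available, but only eventually consistent
```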

“A popular way of dealing with big data is to divide it up into small chunks and then process each of these individually, which is basically what MapReduce does by spreading the required calculations or queries over many, many computers. […] Bloom filters are particularly suited to applications where storage is an issue and where the data can be thought of as a list. The basic idea behind Bloom filters is that we want to build a system, based on a list of data elements, to answer the question ‘Is X in the list?’ With big datasets, searching through the entire set may be too slow to be useful, so we use a Bloom filter which, being a probabilistic method, is not 100 per cent accurate—the algorithm may decide that an element belongs to the list when actually it does not; but it is a fast, reliable, and storage efficient method of extracting useful knowledge from data. Bloom filters have many applications. For example, they can be used to check whether a particular Web address leads to a malicious website. In this case, the Bloom filter would act as a blacklist of known malicious URLs against which it is possible to check, quickly and accurately, whether it is likely that the one you have just clicked on is safe or not. Web addresses newly found to be malicious can be added to the blacklist. […] A related example is that of malicious email messages, which may be spam or may contain phishing attempts. A Bloom filter provides us with a quick way of checking each email address and hence we would be able to issue a timely warning if appropriate. […] they can [also] provide a very useful way of detecting fraudulent credit card transactions.”
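
A minimal Bloom filter along the lines described above might look like this (the bit-array size, the way the hash positions are derived, and the URLs are all arbitrary choices for the example):

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1024, n_hashes=4):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = 0                    # one big integer used as a bit array

    def _positions(self, item):
        # derive several bit positions from one digest; the scheme here is arbitrary
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.n_hashes):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        # every bit set -> 'probably in the list'; any bit clear -> definitely not
        return all(self.bits & (1 << pos) for pos in self._positions(item))

# e.g. a blacklist of known malicious URLs (addresses invented for the example)
blacklist = BloomFilter()
blacklist.add("http://malicious.example/a")
blacklist.add("http://malicious.example/b")
print("http://malicious.example/a" in blacklist)   # True
print("http://honest.example/" in blacklist)       # False (barring a false positive)
```

A clear bit at any of an item’s positions proves the item was never added, which is why the filter can produce false positives but never false negatives.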

Links:

Data.
Punched card.
Clickstream log.
HTTP cookie.
Australian Square Kilometre Array Pathfinder.
The Millionaire Calculator.
Data mining.
Supervised machine learning.
Unsupervised machine learning.
Statistical classification.
Cluster analysis.
Moore’s Law.
Cloud storage. Cloud computing.
Data compression. Lossless data compression. Lossy data compression.
ASCII. Huffman algorithm. Variable-length encoding.
Data compression ratio.
Grayscale.
Discrete cosine transform.
JPEG.
Bit array. Hash function.
PageRank algorithm.
Common crawl.

July 14, 2018 Posted by | Books, Computer science, Data, Statistics | Leave a comment

American Naval History (II)

I have added some observations and links related to the second half of the book’s coverage below.

“The revival of the U.S. Navy in the last two decades of the nineteenth century resulted from a variety of circumstances. The most immediate was the simple fact that the several dozen ships retained from the Civil War were getting so old that they had become antiques. […] In 1883 therefore Congress authorized the construction of three new cruisers and one dispatch vessel, its first important naval appropriation since Appomattox. […] By 1896 […] five […] new battleships had been completed and launched, and a sixth (the Iowa) joined them a year later. None of these ships had been built to meet a perceived crisis or a national emergency. Instead the United States had finally embraced the navalist argument that a mature nation-state required a naval force of the first rank. Soon enough circumstances would offer an opportunity to test both the ships and the theory. […] the United States declared war against Spain on April 25, 1898. […] Active hostilities lasted barely six months and were punctuated by two entirely one-sided naval engagements […] With the peace treaty signed in Paris in December 1898, Spain granted Cuba its independence, though the United States assumed significant authority on the island and in 1903 negotiated a lease that gave the U.S. Navy control of Guantánamo Bay on Cuba’s south coast. Spain also ceded the Philippines, Puerto Rico, Guam, and Wake Island to the United States, which paid Spain $20 million for them. Separately but simultaneously the annexation of the Kingdom of Hawaii, along with the previous annexation of Midway, gave the United States a series of Pacific Ocean stepping stones, each a potential refueling stop, that led from Hawaii to Midway, to Wake, to Guam, and to the Philippines. It made the United States not merely a continental power but a global power. […] between 1906 and 1908, no fewer than thirteen new battleships joined the fleet.”

“At root submarine warfare in the twentieth century was simply a more technologically advanced form of commerce raiding. In its objective it resembled both privateering during the American Revolution and the voyages of the CSS Alabama and other raiders during the Civil War. Yet somehow striking unarmed merchant ships from the depths, often without warning, seemed particularly heinous. Just as the use of underwater mines in the Civil War had horrified contemporaries before their use became routine, the employment of submarines against merchant shipping shocked public sentiment in the early months of World War I. […] American submarines accounted for 55 percent of all Japanese ship losses in the Pacific theater of World War II”.

“By late 1942 the first products of the Two-Ocean Navy Act of 1940 began to join the fleet. Whereas in June 1942, the United States had been hard-pressed to assemble three aircraft carriers for the Battle of Midway, a year later twenty-four new Essex-class aircraft carriers joined the fleet, each of them displacing more than 30,000 tons and carrying ninety to one hundred aircraft. Soon afterward nine more Independence-class carriers joined the fleet. […] U.S. shipyards also turned out an unprecedented number of cruisers, destroyers, and destroyer escorts, plus more than 2,700 Liberty Ships—the essential transport and cargo vessels of the war—as well as thousands of specialized landing ships essential to amphibious operations. In 1943 alone American shipyards turned out more than eight hundred of the large LSTs and LCIs, plus more than eight thousand of the smaller landing craft known as Higgins boats […] In the three weeks after D-Day, Allied landing ships and transports put more than 300,000 men, fifty thousand vehicles, and 150,000 tons of supplies ashore on Omaha Beach alone. By the first week of July the Allies had more than a million fully equipped soldiers ashore ready to break out of their enclave in Normandy and Brittany […] Having entered World War II with eleven active battleships and seven aircraft carriers, the U.S. Navy ended the war with 120 battleships and cruisers and nearly one hundred aircraft carriers (including escort carriers). Counting the smaller landing craft, the U.S. Navy listed an astonishing sixty-five thousand vessels on its register of warships and had more than four million men and women in uniform. It was more than twice as large as all the rest of the navies of the world combined. […] In the eighteen months after the end of the war, the navy processed out 3.5 million officers and enlisted personnel who returned to civilian life and their families, going back to work or attending college on the new G.I. Bill. In addition thousands of ships were scrapped or mothballed, assigned to what was designated as the National Defense Reserve Fleet and tied up in long rows at navy yards from California to Virginia. Though the navy boasted only about a thousand ships on active service by the end of 1946, that was still more than twice as many as before the war.”

“The Korean War ended in a stalemate, yet American forces, supported by troops from South Korea and other United Nations members, succeeded in repelling the first cross-border invasion by communist forces during the Cold War. That encouraged American lawmakers to continue support of a robust peacetime navy, and of military forces generally. Whereas U.S. military spending in 1950 had totaled $141 billion, for the rest of the 1950s it averaged over $350 billion per year. […] The overall architecture of American and Soviet rivalry influenced, and even defined, virtually every aspect of American foreign and defense policy in the Cold War years. Even when the issue at hand had little to do with the Soviet Union, every political and military dispute from 1949 onward was likely to be viewed through the prism of how it affected the East-West balance of power. […] For forty years the United States and the U.S. Navy had centered all of its attention on the rivalry with the Soviet Union. All planning for defense budgets, for force structure, and for the design of weapons systems grew out of assessments of the Soviet threat. The dissolution of the Soviet Union therefore compelled navy planners to revisit almost all of their assumptions. It did not erase the need for a global U.S. Navy, for even as the Soviet Union was collapsing, events in the Middle East and elsewhere provoked serial crises that led to the dispatch of U.S. naval combat groups to a variety of hot spots around the world. On the other hand, these new threats were so different from those of the Cold War era that the sophisticated weaponry the United States had developed to deter and, if necessary, defeat the Soviet Union did not necessarily meet the needs of what President George H. W. Bush called “a new world order.”

“The official roster of U.S. Navy warships in 2014 listed 283 “battle force ships” on active service. While that is fewer than at any time since World War I, those ships possess more capability and firepower than the rest of the world’s navies combined. […] For the present, […] as well as for the foreseeable future, the U.S. Navy remains supreme on the oceans of the world.”

Links:

USS Ticonderoga (1862).
Virginius Affair.
ABCD ships.
Stephen Luce. Naval War College.
USS Maine. USS Texas. USS Indiana (BB-1). USS Massachusetts (BB-2). USS Oregon (BB-3). USS Iowa (BB-4).
Benjamin Franklin Tracy.
Alfred Thayer Mahan. The Influence of Sea Power upon History: 1660–1783.
George Dewey.
William T. Sampson.
Great White Fleet.
USS Maine (BB-10). USS Missouri (BB-11). USS New Hampshire (BB-25).
HMS Dreadnought (1906). Dreadnought. Pre-dreadnought battleship.
Hay–Herrán Treaty. United States construction of the Panama canal, 1904–1914.
Bradley A. Fiske.
William S. Benson. Chief of Naval Operations.
RMS Lusitania. Unrestricted submarine warfare.
Battle of Jutland. Naval Act of 1916 (‘Big Navy Act of 1916’).
William Sims.
Sacred Twenty. WAVES.
Washington Naval Treaty. ‘Treaty cruisers’.
Aircraft carrier. USS Lexington (CV-2). USS Saratoga (CV-3).
War Plan Orange.
Carl Vinson. Naval Act of 1938.
Lend-Lease.
Battle of the Coral Sea. Battle of Midway.
Ironbottom Sound.
Battle of the Atlantic. Wolfpack (naval tactic).
Operation Torch.
Pacific Ocean theater of World War II. Battle of Leyte Gulf.
Operation Overlord. Operation Neptune. Alan Goodrich Kirk. Bertram Ramsay.
Battle of Iwo Jima. Battle of Okinawa.
Cold War. Revolt of the Admirals.
USS Nautilus. SSBN. USS George Washington.
Ohio-class submarine.
UGM-27 Polaris. UGM-73 Poseidon. UGM-96 Trident I.
Korean War. Battle of Inchon.
United States Sixth Fleet.
Cuban Missile Crisis.
Vietnam War. USS Maddox. Gulf of Tonkin Resolution. Operation Market Time. Patrol Craft Fast. Patrol Boat, River. Operation Game Warden.
Elmo Zumwalt. ‘Z-grams’.
USS Cole bombing.
Operation Praying Mantis.
Gulf War.
Combined Task Force 150.
United States Navy SEALs.
USS Zumwalt.

July 12, 2018 Posted by | Books, History, Wikipedia | Leave a comment

100 Cases in Orthopaedics and Rheumatology (II)

Below I have added some links related to the last half of the book’s coverage, as well as some more observations from the book.

Scaphoid fracture. Watson’s test. Dorsal intercalated segment instability. (“Non-union is not uncommon as a complication after scaphoid fractures because the blood supply to this bone is poor. Smokers have a higher incidence of non-union. Occasionally, the blood supply is poor enough to lead to avascular necrosis. If non-union is not detected, subsequent arthritis in the wrist can develop.”)
Septic arthritis. (“Septic arthritis is an orthopaedic emergency. […] People with septic arthritis are typically unwell with fevers and malaise and the joint pain is severe. […] Any acutely hot or painful joint is septic arthritis until proven otherwise.”)
Rheumatoid arthritis. (“[RA is] the most common of the inflammatory arthropathies. […] early-morning stiffness and pain, combined with soft-tissue rather than bony swelling, are classic patterns for inflammatory disease. Although […] RA affects principally the small joints of the hands (and feet), it may progress to involve any synovial joint and may be complicated by extra-articular features […] family history [of the disease] is not unusual due to the presence of susceptibility genes such as HLA-DR. […] Not all patients with RA have rheumatoid factor (RF), and not all patients with RF have RA; ACPA has greater specificity for RA than rheumatoid factor. […] Medical therapy focuses on disease-modifying anti-rheumatic drugs (DMARDs) such as methotrexate, sulphasalazine, leflunomide and hydroxychloroquine which may be used individually or in combination. […] Disease activity in RA is measured by the disease activity score (DAS), which is a composite score of the clinical evidence of synovitis, the current inflammatory response and the patient’s own assessment of their health. […] Patients who have high disease activity as determined by the DAS and have either failed or failed to tolerate standard disease modifying therapy qualify for biologic therapy – monoclonal antibodies that are directed against key components of the inflammatory response. […] TNF-α blockade is highly effective in up to 70 per cent of patients, reducing both inflammation and the progressive structural damage associated with severe active disease.”)
Ankylosing spondylitis. Ankylosis. Schober’s index. Costochondritis.
Mononeuritis multiplex. (“Mononeuritis multiplex arises due to interruption of the vasa nervorum, the blood supply to peripheral nerves […] Mononeuritis multiplex is commonly caused by diabetes or vasculitis. […] Vasculitis – inflammation of blood vessels and subsequent obstruction to blood flow – can be primary (idiopathic) or secondary, in which case it is associated with an underlying condition such as rheumatoid arthritis. The vasculitides are classified according to the size of the vessel involved. […] Management of mononeuritis multiplex is based on potent immunosuppression […] and the treatment of underlying infections such as hepatitis.”)
Multiple myeloma. Bence-Jones protein. (“The combination of bone pain and elevated ESR and calcium is suggestive of multiple myeloma.”)
Osteoporosis. DEXA scan. T-score. (“Postmenopausal bone loss is the most common cause of osteoporosis, but secondary osteoporosis may occur in the context of a number of medical conditions […] Steroid-induced osteoporosis is a significant problem in medical practice. […] All patients receiving corticosteroids should have bone protection […] Pharmacological treatment in the form of calcium supplementation and biphosphonates to reduce osteoclast activity is effective but compliance is typically poor.”)
Osteomalacia. Rickets. Craniotabes.
Paget’s disease (see also this post). (“In practical terms, the main indication to treat Paget’s disease is pain […] although bone deformity or compression syndromes (or risk thereof) would also prompt therapy. The treatment of choice is a biphosphonate to diminish osteoclast activity”).
Stress fracture. Female athlete triad. (“Stress fractures are overuse injuries and occur when periosteal resorption exceeds bone formation. They are commonly seen in two main patient groups: soldiers may suffer so-called march fractures in the metatarsals, while athletes may develop them in different sites according to their sporting activity. Although the knee is a common site in runners due to excess mechanical loading, stress fractures may also result in non-weight-bearing sites due to repetitive and excessive traction […]. The classic symptom […] is of pain that occurs throughout running and crucially persists with rest; this is in contrast to shin splints, a traction injury to the tibial periosteum in which the pain diminishes somewhat with continued activity […] The crucial feature of rehabilitation is a graded return to sport to prevent progression or recurrence.”)
Psoriatic arthritis. (“Arthropathy and rash is a common combination in rheumatology […] Psoriatic arthritis is a common inflammatory arthropathy that affects up to 15 per cent of those with psoriasis. […] Nail disease is very helpful in differentiating psoriatic arthritis from other forms of inflammatory arthropathy.”)
Ehlers–Danlos syndromes. Marfan syndrome. Beighton (hypermobility) score.
Carpal tunnel syndrome. (“Carpal tunnel syndrome is the most common entrapment neuropathy […] The classic symptoms are of tingling in the sensory distribution of the median nerve (i.e. the lateral three and a half digits); loss of thumb abduction is a late feature. Symptoms are often worse at night (when the hand might be quite painful) and in certain postures […] The majority of cases are idiopathic, but pregnancy and rheumatoid arthritis are very common precipitating causes […] The majority of patients will respond well to conservative management […] If these measures fail, corticosteroid injection into the carpal tunnel can be very effective in up to 80 per cent of patients. Surgical decompression should be reserved for those with persistent disabling symptoms or motor loss.”)
Mixed connective tissue disease.
Crystal arthropathy. Tophus. Uric acid nephropathy. Chondrocalcinosis. (“In any patient presenting with an acutely painful and swollen joint, the most important diagnoses to consider are septic arthritis and crystal arthropathy. Crystal arthropathy such as gout is more common than septic arthritis […] Gout may be precipitated by diuretics, renal impairment and aspirin use”).
Familial Mediterranean fever. Amyloidosis.
Systemic lupus erythematosus (see also this). Jaccoud arthropathy. Lupus nephritis. (“Renal disease is the most feared complication of SLE.”)
Scleroderma. Raynaud’s phenomenon. (“Scleroderma is an uncommon disorder characterized by thickening of the skin and, to a greater or lesser degree, fibrosis of internal organs.”)
Henoch-Schönlein purpura. Cryoglobulinemia. (“Purpura are the result of a spontaneous extravasation of blood from the capillaries into the skin. If small they are known as petechiae, when they are large they are termed ecchymoses. There is an extensive differential diagnosis for purpura […] The combination of palpable purpura (distributed particularly over the buttocks and extensor surfaces of legs), abdominal pain, arthritis and renal disease is a classic presentation of Henoch–Schönlein purpura (HSP). HSP is a distinct and frequently self-limiting small-vessel vasculitis that can affect any age; but the majority of cases present in children aged 2–10 years, in whom the prognosis is more benign than the adult form, often remitting entirely within 3–4 months. The abdominal pain may mimic a surgical abdomen and can presage intussusception, haemorrhage or perforation. The arthritis, in contrast, is relatively mild and tends to affect the knees and ankles.”)
Rheumatic fever.
Erythema nodosum. (“Mild idiopathic erythema nodosum […] needs no specific treatment”).
Rheumatoid lung disease. Bronchiolitis obliterans. Methotrexate-induced pneumonitis. Hamman–Rich syndrome.
Antiphospholipid syndrome. Sapporo criteria. (“Antiphospholipid syndrome is a hypercoagulable state characterized by recurrent arteriovenous thrombosis and/or pregnancy morbidity in the presence of either a lupus anticoagulant or anticardiolipin antibody (both phospholipid-related proteins). […] The most common arteriovenous thrombotic events in antiphospholipid syndrome are deep venous thrombosis and pulmonary embolus […], but any part of the circulation may be involved, with arterial events such as myocardial infarction and stroke carrying a high mortality rate. Poor placental circulation is thought to be responsible for the high pregnancy morbidity, with recurrent first- and second-trimester loss and a higher rate of pre-eclampsia being typical clinical features.”)
Still’s disease. (“Consider inflammatory disease in cases of pyrexia of unknown origin.”)
Polymyalgia rheumatica. Giant cell arteritis. (“[P]olymyalgia rheumatica (PMR) [is] a systemic inflammatory syndrome affecting the elderly that is characterized by bilateral pain and stiffness in the shoulders and hip girdles. The stiffness can be profound and limits mobility although true muscle weakness is not a feature. […] The affected areas are diffusely tender, with movements limited by pain. […] care must be taken not to attribute joint inflammation to PMR until other diagnoses have been excluded; for example, a significant minority of RA patients may present with a polymyalgic onset. […] The treatment for PMR is low-dose corticosteroids. […] Many physicians would consider a dramatic response to low-dose prednisolone as almost diagnostic for PMR, so if a patient’s symptoms do not improve rapidly it is wise to re-evaluate the original diagnosis.”)
Relapsing polychondritis. (“Relapsing polychondritis is characterized histologically by inflammatory infiltration and later fibrosis of cartilage. Any cartilage, in any location, is at risk. […] Treatment of relapsing polychondritis is with corticosteroids […] Surgical reconstruction of collapsed structures is not an option as the deformity tends to continue postoperatively.”)
Dermatomyositis. Gottron’s Papules.
Enteropathic arthritis. (“A seronegative arthritis may develop in up to 15 per cent of patients with any form of inflammatory bowel disease, including ulcerative colitis (UC), Crohn’s disease or microscopic and collagenous colitis. The most common clinical presentations are a peripheral arthritis […] and spondyloarthritis.”)
Reflex sympathetic dystrophy.
Whipple’s disease. (“Although rare, consider Whipple’s disease in any patient presenting with malabsorption, weight loss and arthritis.”)
Wegener’s granulomatosis. (“Small-vessel vasculitis may cause a pulmonary-renal syndrome. […] The classic triad of Wegener’s granulomatosis is the presence of upper and lower respiratory tract disease and renal impairment.”)
Reactive arthritis. Reiter’s syndrome. (“Consider reactive arthritis in any patient presenting with a monoarthropathy. […] Reactive arthritis is generally benign, with up to 80 per cent making a full recovery.”)
Sarcoidosis. Löfgren syndrome.
Polyarteritis nodosa. (“Consider mesenteric ischaemia in any patient presenting with a systemic illness and postprandial abdominal pain.”)
Sjögren syndrome. Schirmer’s test.
Behçet syndrome.
Lyme disease. Erythema chronicum migrans. (“The combination of rash leading to arthralgia and cranial neuropathy is a classic presentation of Lyme disease.”)
Takayasu arteritis. (“Takayasu’s arteritis is an occlusive vasculitis leading to stenoses of the aorta and its principal branches. The symptoms and signs of the disease depend on the distribution of the affected vessel but upper limbs are generally affected more commonly than the iliac tributaries. […] the disease is a chronic relapsing and remitting condition […] The mainstay of treatment is high-dose corticosteroids plus a steroid-sparing agent such as methotrexate. […] Cyclophosphamide is reserved for those patients who do not achieve remission with standard therapy. Surgical intervention such as bypass or angioplasty may improve ischaemic symptoms once the inflammation is under control.”)
Lymphoma.
Haemarthrosis. (“Consider synovial tumours in a patient with unexplained haemarthrosis.”)
Juvenile idiopathic arthritis.
Drug-induced lupus erythematosus. (“Drug-induced lupus (DIL) generates a different spectrum of clinical manifestations from idiopathic disease. DIL is less severe than idiopathic SLE, and nephritis or central nervous system involvement is very rare. […] The most common drugs responsible for a lupus-like syndrome are procainamide, hydralazine, quinidine, isoniazid, methyldopa, chlorpromazine and minocycline. […] Treatment involves stopping the offending medication and the symptoms will gradually resolve.”)
Churg–Strauss syndrome.

July 8, 2018 Posted by | Books, Cancer/oncology, Cardiology, Gastroenterology, Immunology, Medicine, Nephrology, Neurology, Ophthalmology, Pharmacology | Leave a comment