Econstudentlog

Occupational Epidemiology (III)

This will be my last post about the book.

Some observations from the final chapters:

“Often there is confusion about the difference between systematic reviews and meta-analyses. A meta-analysis is a quantitative synthesis of two or more studies […] A systematic review is a synthesis of evidence on the effects of an intervention or an exposure which may also include a meta-analysis, but this is not a prerequisite. It may be that the results of the studies which have been included in a systematic review are reported in such a way that it is impossible to synthesize them quantitatively. They can then be reported in a narrative manner.10 However, a meta-analysis always requires a systematic review of the literature. […] There is a long history of debate about the value of meta-analysis for occupational cohort studies or other occupational aetiological studies. In 1994, Shapiro argued that ‘meta-analysis of published non-experimental data should be abandoned’. He reasoned that ‘relative risks of low magnitude (say, less than 2) are virtually beyond the resolving power of the epidemiological microscope because we can seldom demonstrably eliminate all sources of bias’.13 Because the pooling of studies in a meta-analysis increases statistical power, the pooled estimate may easily become significant and thus incorrectly taken as an indication of causality, even though the biases in the included studies may not have been taken into account. Others have argued that the method of meta-analysis is important but should be applied appropriately, taking into account the biases in individual studies.14 […] We believe that the synthesis of aetiological studies should be based on the same general principles as for intervention studies, and the existing methods adapted to the particular challenges of cohort and case-control studies. […] Since 2004, there is a special entity, the Cochrane Occupational Safety and Health Review Group, that is responsible for the preparing and updating of reviews of occupational safety and health interventions […]. There were over 100 systematic reviews on these topics in the Cochrane Library in 2012.”

“The believability of a systematic review’s results depends largely on the quality of the included studies. Therefore, assessing and reporting on the quality of the included studies is important. For intervention studies, randomized trials are regarded as of higher quality than observational studies, and the conduct of the study (e.g. in terms of response rate or completeness of follow-up) also influences quality. A conclusion derived from a few high-quality studies will be more reliable than when the conclusion is based on even a large number of low-quality studies. Some form of quality assessment is nowadays commonplace in intervention reviews but is still often missing in reviews of aetiological studies. […] It is tempting to use quality scores, such as the Jadad scale for RCTs34 and the Downs and Black scale for non-RCT intervention studies35 but these, in their original format, are insensitive to variation in the importance of risk areas for a given research question. The score system may give the same value to two studies (say, 10 out of 12) when one, for example, lacked blinding and the other did not randomize, thus implying that their quality is equal. This would not be a problem if randomization and blinding were equally important for all questions in all reviews, but this is not the case. For RCTs an important development in this regard has been the Cochrane risk of bias tool.36 This is a checklist of six important domains that have been shown to be important areas of bias in RCTs: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting.”

“[R]isks of bias tools developed for intervention studies cannot be used for reviews of aetiological studies without relevant modification. This is because, unlike interventions, exposures are usually more complicated to assess when we want to attribute the outcome to them alone. These scales do not cover all items that may need assessment in an aetiological study, such as confounding and information bias relating to exposures. […] Surprisingly little methodological work has been done to develop validated tools for aetiological epidemiology and most tools in use are not validated,38 […] Two separate checklists, for observational studies of incidence and prevalence and for risk factor assessment, have been developed and validated recently.40 […] Publication and other reporting bias is probably a much bigger issue for aetiological studies than for intervention studies. This is because, for clinical trials, the introduction of protocol registration, coupled with the regulatory system for new medications, has helped in assessing and preventing publication and reporting bias. No such checks exist for observational studies.”

“Most ill health that arises from occupational exposures can also arise from non-occupational exposures, and the same type of exposure can occur in occupational and non-occupational settings. With the exception of malignant mesothelioma (which is essentially only caused by exposure to asbestos), there is no way to determine which exposure caused a particular disorder, nor where the causative exposure occurred. This means that usually it is not possible to determine the burden just by counting the number of cases. Instead, approaches to estimating this burden have been developed. There are also several ways to define burden and how best to measure it.”

“The population attributable fraction (PAF) is the proportion of cases that would not have occurred in the absence of an occupational exposure. It can be estimated by combining two measures — a risk estimate (usually relative risk (RR) or odds ratio) of the disorder of interest that is associated with exposure to the substance of concern; and an estimate of the proportion of the population exposed to the substance at work (p(E)). This approach has been used in several studies, particularly for estimating cancer burden […] There are several possible equations that can be used to calculate the PAF, depending on the available data […] PAFs cannot in general be combined by summing directly because: (1) summing PAFs for overlapping exposures (i.e. agents to which the same ‘ever exposed’ workers may have been exposed) may give an overall PAF exceeding 100%, and (2) summing disjoint (not concurrently occurring) exposures also introduces upward bias. Strategies to avoid this include partitioning exposed numbers between overlapping exposures […] or estimating only for the ‘dominant’ carcinogen with the highest risk. Where multiple exposures remain, one approach is to assume that the exposures are independent and their joint effects are multiplicative. The PAFs can then be combined to give an overall PAF for that cancer using a product sum. […] Potential sources of bias for PAFs include inappropriate choice of risk estimates, imprecision in the risk estimates and estimates of proportions exposed, inaccurate risk exposure period and latency assumptions, and a lack of separate risk estimates in some cases for women and/or cancer incidence. In addition, a key decision is the choice of which diseases and exposures are to be included.”
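
The PAF calculation described above is simple enough to sketch in a few lines of code. Below is a minimal illustration using Levin's formula and the multiplicative ('product sum') combination mentioned in the quote; all of the relative risks and exposure prevalences are made-up numbers, not estimates from the book:

```python
# Minimal sketch of the PAF calculations described above (all numbers hypothetical).
def paf(rr, p_exposed):
    """Levin's population attributable fraction for a single exposure."""
    return p_exposed * (rr - 1.0) / (1.0 + p_exposed * (rr - 1.0))

# Example: two occupational carcinogens contributing to the same cancer site.
paf_a = paf(rr=1.4, p_exposed=0.10)   # ~3.8%
paf_b = paf(rr=2.0, p_exposed=0.02)   # ~2.0%

# Simply summing paf_a + paf_b can overstate the joint burden; assuming independent
# exposures with multiplicative joint effects, the 'product sum' combination is:
paf_combined = 1.0 - (1.0 - paf_a) * (1.0 - paf_b)
print(paf_a, paf_b, paf_combined)
```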

“The British Cancer Burden study is perhaps the most detailed study of occupationally related cancers in that it includes all those relevant carcinogens classified at the end of 2008 […] In the British study the attributable fractions ranged from less than 0.01% to 95% overall, the most important cancer sites for occupational attribution being, for men, mesothelioma (97%), sinonasal (46%), lung (21.1%), bladder (7.1%), and non-melanoma skin cancer (7.1%) and, for women, mesothelioma (83%), sinonasal (20.1%), lung (5.3%), breast (4.6%), and nasopharynx (2.5%). Occupation also contributed 2% or more overall to cancers of the larynx, oesophagus, and stomach, and soft tissue sarcoma with, in addition for men, melanoma of the eye (due to welding), and non-Hodgkin lymphoma. […] The overall results from the occupational risk factors component of the Global Burden of Disease 2010 study illustrate several important aspects of burden studies.14 Of the estimated 850 000 occupationally related deaths worldwide, the top three causes were: (1) injuries (just over a half of all deaths); (2) particulate matter, gases, and fumes leading to COPD; and (3) carcinogens. When DALYs were used as the burden measure, injuries still accounted for the highest proportion (just over one-third), but ergonomic factors leading to low back pain resulted in almost as many DALYs, and both were almost an order of magnitude higher than the DALYs from carcinogens. The difference in relative contributions of the various risk factors between deaths and DALYs arises because of the varying ages of those affected, and the differing chronicity of the resulting conditions. Both measures are valid, but they represent a different aspect of the burden arising from the hazardous exposures […]. Both the British and Global Burden of Disease studies draw attention to the important issues of: (1) multiple occupational carcinogens causing specific types of cancer, for example, the British study evaluated 21 lung carcinogens; and (2) specific carcinogens causing several different cancers, for example, IARC now defines asbestos as a group 1 or 2A carcinogen for seven cancer sites. These issues require careful consideration for burden estimation and for prioritizing risk reduction strategies. […] The long latency of many cancers means that estimates of current burden are based on exposures occurring in the past, often much higher than those existing today. […] long latency [also] means that risk reduction measures taken now will take a considerable time to be reflected in reduced disease incidence.”

“Exposures and effects are linked by dynamic processes occurring across time. These processes can often be usefully decomposed into two distinct biological relationships, each with several components: 1. The exposure-dose relationship […] 2. The dose-effect relationship […] These two component relationships are sometimes represented by two different mathematical models: a toxicokinetic model […], and a disease process model […]. Depending on the information available, these models may be relatively simple or highly complex. […] Often the various steps in the disease process do not occur at the same rate, some of these processes are ‘fast’, such as cell killing, while others are ‘slow’, such as damage repair. Frequently a few slow steps in a process become limiting to the overall rate, which sets the temporal pattern for the entire exposure-response relationship. […] It is not necessary to know the full mechanism of effects to guide selection of an exposure-response model or exposure metric. Because of the strong influence of the rate-limiting steps, often it is only necessary to have observations on the approximate time course of effects. This is true whether the effects appear to be reversible or irreversible, and whether damage progresses proportionately with each unit of exposure (actually dose) or instead occurs suddenly, and seemingly without regard to the amount of exposure, such as an asthma attack.”
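
To illustrate the 'exposure-dose' step and the role of slow, rate-limiting processes mentioned above, here is a very simple one-compartment toxicokinetic sketch. It is my own illustration rather than a model from the book, and the intake rate, shift pattern, and half-lives are arbitrary assumptions:

```python
import numpy as np

# One-compartment toxicokinetic sketch: dB/dt = intake(t) - k*B, with intake
# switched on for the first 8 hours of each 24-hour day. All values arbitrary.
def body_burden(intake_rate, shift_hours, days, half_life_h, dt=0.1):
    k = np.log(2) / half_life_h                  # first-order elimination rate
    t = np.arange(0.0, days * 24.0, dt)
    burden = np.zeros_like(t)
    for i in range(1, len(t)):
        intake = intake_rate if (t[i] % 24.0) < shift_hours else 0.0
        burden[i] = burden[i - 1] + dt * (intake - k * burden[i - 1])
    return burden[-1]

# Fast elimination clears between shifts; a slow (rate-limiting) elimination
# step leads to accumulation over the working week.
for hl in (2.0, 200.0):
    end = body_burden(intake_rate=1.0, shift_hours=8.0, days=5, half_life_h=hl)
    print(f"elimination half-life {hl:5.0f} h: body burden after a 5-day week ~ {end:.1f}")
```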

“In this chapter, we argue that formal disease process models have the potential to improve the sensitivity of epidemiology for detecting new and emerging occupational and environmental risks where there is limited mechanistic information. […] In our approach, these models are often used to create exposure or dose metrics, which are in turn used in epidemiological models to estimate exposure-disease associations. […] Our goal is a methodology to formulate strong tests of our exposure-disease hypotheses in which a hypothesis is developed in as much biological detail as it can be, expressed in a suitable dynamic (temporal) model, and tested by its fit with a rich data set, so that its flaws and misperceptions of reality are fully displayed. Rejecting such a fully developed biological hypothesis is more informative than either rejecting or failing to reject a generic or vaguely defined hypothesis.” For example, the hypothesis ‘truck drivers have more risk of lung cancer than non-drivers’13 is of limited usefulness for prevention […]. Hypothesizing that a particular chemical agent in truck exhaust is associated with lung cancer — whether the hypothesis is refuted or supported by data — is more likely to lead to successful prevention activities. […] we believe that the choice of models against which to compare the data should, so far as possible, be guided by explicit hypotheses about the underlying biological processes. In other words, you can get as much as possible from epidemiology by starting from well-thought-out hypotheses that are formalized as mathematical models into which the data will be placed. The disease process models can serve this purpose.2″

“The basic idea of empirical Bayes (EB) and semiBayes (SB) adjustments for multiple associations is that the observed variation of the estimated relative risks around their geometric mean is larger than the variation of the true (but unknown) relative risks. In SB adjustments, an a priori value for the extra variation is chosen which assigns a reasonable range of variation to the true relative risks and this value is then used to adjust the observed relative risks.7 The adjustment consists in shrinking outlying relative risks towards the overall mean (of the relative risks for all the different exposures being considered). The larger the individual variance of the relative risks, the stronger the shrinkage, so that the shrinkage is stronger for less reliable estimates based on small numbers. Typical applications in which SB adjustments are a useful alternative to traditional methods of adjustment for multiple comparisons are in large occupational surveillance studies, where many relative risks are estimated with few or no a priori beliefs about which associations might be causal.7″

“The advantage of [the SB adjustment] approach over classical Bonferroni corrections is that on the average it produces more valid estimates of the odds ratio for each occupation/exposure. If we do a study which involves assessing hundreds of occupations, the problem is not only that we get many ‘false positive’ results by chance. A second problem is that even the ‘true positives’ tend to have odds ratios that are too high. For example, if we have a group of occupations with true odds ratios around 1.5, then the ones that stand out in the analysis are those with the highest odds ratios (e.g. 2.5) which will be elevated partly because of real effects and partly by chance. The Bonferroni correction addresses the first problem (too many chance findings) but not the second, that the strongest odds ratios are probably too high. In contrast, SB adjustment addresses the second problem by correcting for the anticipated regression to the mean that would have occurred if the study had been repeated, and thereby on the average produces more valid odds ratio estimates for each occupation/exposure. […] most epidemiologists write their Methods and Results sections as frequentists and their Introduction and Discussion sections as Bayesians. In their Methods and Results sections, they ‘test’ their findings as if their data are the only data that exist. In the Introduction and Discussion, they discuss their findings with regard to their consistency with previous studies, as well as other issues such as biological plausibility. This creates tensions when a small study has findings which are not statistically significant but which are consistent with prior knowledge, or when a study finds statistically significant findings which are inconsistent with prior knowledge. […] In some (but not all) instances, things can be made clearer if we include Bayesian methods formally in the Methods and Results sections of our papers”.
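
The shrinkage idea behind the SB adjustment can be illustrated with a small sketch: log relative risks are pulled towards their overall mean, with the least reliable estimates pulled hardest. This is only a schematic version of the approach described in the quotes (the prior variance and all the example estimates are made up), not the exact method of the cited reference:

```python
import numpy as np

# Semi-Bayes-style shrinkage sketch (schematic, not the exact method of ref. 7):
# log relative risks are pulled towards their overall mean, and imprecise
# estimates (large standard error) are pulled hardest.
def semi_bayes(rr, se_log_rr, tau2=0.25):
    """tau2 is the prespecified a priori variance of the true log relative risks."""
    log_rr = np.log(rr)
    overall = log_rr.mean()                      # log of the geometric mean
    w = tau2 / (tau2 + se_log_rr ** 2)           # shrinkage weight per estimate
    return np.exp(overall + w * (log_rr - overall))

rr = np.array([0.8, 1.1, 1.5, 2.5, 3.2])         # observed ORs for five occupations
se = np.array([0.10, 0.15, 0.20, 0.45, 0.60])    # standard errors of the log ORs
print(semi_bayes(rr, se))                        # the imprecise outliers shrink the most
```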

“In epidemiology, risk is most often quantified in terms of relative risk — i.e. the ratio of the probability of an adverse outcome in someone with a specified exposure to that in someone who is unexposed, or exposed at a different specified level. […] Relative risks can be estimated from a wider range of study designs than individual attributable risks. They have the advantage that they are often stable across different groups of people (e.g. of different ages, smokers, and non-smokers) which makes them easier to estimate and quantify. Moreover, high relative risks are generally unlikely to be explained by unrecognized bias or confounding. […] However, individual attributable risks are a more relevant measure by which to quantify the impact of decisions in risk management on individuals. […] Individual attributable risk is the difference in the probability of an adverse outcome between someone with a specified exposure and someone who is unexposed, or exposed at a different specified level. It is the critical measure when considering the impact of decisions in risk management on individuals. […] Population attributable risk is the difference in the frequency of an adverse outcome between a population with a given distribution of exposures to a hazardous agent, and that in a population with no exposure, or some other specified distribution of exposures. It depends on the prevalence of exposure at different levels within the population, and on the individual attributable risk for each level of exposure. It is a measure of the impact of the agent at a population level, and is relevant to decisions in risk management for populations. […] Population attributable risks are highest when a high proportion of a population is exposed at levels which carry high individual attributable risks. On the other hand, an exposure which carries a high individual attributable risk may produce only a small population attributable risk if the prevalence of such exposure is low.”
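
A small worked example may help keep the three measures apart; the risks and exposure prevalence below are hypothetical:

```python
# Worked numeric sketch of the three risk measures (numbers hypothetical).
risk_exposed   = 0.010   # probability of the adverse outcome if exposed
risk_unexposed = 0.002   # ... if unexposed
p_exposed      = 0.05    # proportion of the population exposed

relative_risk          = risk_exposed / risk_unexposed                 # 5.0
individual_attrib_risk = risk_exposed - risk_unexposed                 # 0.008
population_risk        = p_exposed * risk_exposed + (1 - p_exposed) * risk_unexposed
population_attrib_risk = population_risk - risk_unexposed              # 0.0004

# A high individual attributable risk can still give a small population
# attributable risk when the exposure is rare (p_exposed small), as noted above.
print(relative_risk, individual_attrib_risk, population_attrib_risk)
```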

“Hazard characterization entails quantification of risks in relation to routes, levels, and durations of exposure. […] The findings from individual studies are often used to determine a no observed adverse effect level (NOAEL), lowest observed effect level (LOEL), or benchmark dose lower 95% confidence limit (BMDL) for relevant effects […] [NOAEL] is the highest dose or exposure concentration at which there is no discernible adverse effect. […] [LOEL] is the lowest dose or exposure concentration at which a discernible effect is observed. If comparison with unexposed controls indicates adverse effects at all of the dose levels in an experiment, a NOAEL cannot be derived, but the lowest dose constitutes a LOEL, which might be used as a comparator for estimated exposures or to derive a toxicological reference value […] A BMDL is defined in relation to a specified adverse outcome that is observed in a study. Usually, this is the outcome which occurs at the lowest levels of exposure and which is considered critical to the assessment of risk. Statistical modelling is applied to the experimental data to estimate the dose or exposure concentration which produces a specified small level of effect […]. The BMDL is the lower 95% confidence limit for this estimate. As such, it depends both on the toxicity of the test chemical […], and also on the sample sizes used in the study (other things being equal, larger sample sizes will produce more precise estimates, and therefore higher BMDLs). In addition to accounting for sample size, BMDLs have the merit that they exploit all of the data points in a study, and do not depend so critically on the spacing of doses that is adopted in the experimental design (by definition a NOAEL or LOEL can only be at one of the limited number of dose levels used in the experiment). On the other hand, BMDLs can only be calculated where an adverse effect is observed. Even if there are no clear adverse effects at any dose level, a NOAEL can be derived (it will be the highest dose administered).”
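
Here is a rough illustration of the benchmark-dose idea. The sketch fits a simple logistic dose-response model to hypothetical data, finds the dose giving 10% extra risk over background (the BMD), and uses a parametric bootstrap for the lower confidence limit. Dedicated benchmark-dose software uses a wider family of models and profile-likelihood limits, so treat this purely as an illustration of the point that the BMDL depends on both the dose-response and the precision of the data:

```python
import numpy as np
from scipy.optimize import minimize, brentq
from scipy.special import expit

# Hypothetical dose-response data; dose units and counts are illustrative only.
doses = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
n     = np.array([50, 50, 50, 50, 50])
cases = np.array([1, 2, 5, 11, 24])

def p_model(d, a, b):
    """Two-parameter logistic dose-response model."""
    return expit(a + b * d)

def neg_log_lik(params, d, n, y):
    a, b = params
    p = np.clip(p_model(d, a, b), 1e-9, 1.0 - 1e-9)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p))

def fit(d, n, y):
    return minimize(neg_log_lik, x0=np.array([-3.0, 0.1]),
                    args=(d, n, y), method="Nelder-Mead").x

def bmd(a, b, bmr=0.10):
    """Dose giving extra risk bmr over background: (p(d) - p(0)) / (1 - p(0)) = bmr."""
    p0 = p_model(0.0, a, b)
    return brentq(lambda d: (p_model(d, a, b) - p0) / (1.0 - p0) - bmr, 1e-6, 1e4)

a_hat, b_hat = fit(doses, n, cases)
bmd_hat = bmd(a_hat, b_hat)

# Parametric bootstrap for the BMDL (lower 95% confidence limit of the BMD);
# smaller or noisier studies give a wider bootstrap spread and hence a lower BMDL.
rng, boot = np.random.default_rng(1), []
p_hat = p_model(doses, a_hat, b_hat)
for _ in range(500):
    a_b, b_b = fit(doses, n, rng.binomial(n, p_hat))
    try:
        boot.append(bmd(a_b, b_b))
    except ValueError:          # degenerate refit with no positive dose-response
        continue
bmdl = np.percentile(boot, 5)
print(f"BMD10 ~ {bmd_hat:.1f}, BMDL10 ~ {bmdl:.1f} (same units as dose)")
```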


December 8, 2017 | Books, Cancer/oncology, Epidemiology, Medicine, Statistics

Nuclear power (I)

I originally gave the book 2 stars, but after I had finished this post I changed that rating to 3 stars (which was not that surprising; already when I wrote my goodreads review shortly after reading the book I was conflicted about whether the book deserved the third star). One thing that kept me from giving the book a higher rating was that I thought the author did not spend enough time on ‘the basic concepts’, a problem I also highlighted in my goodreads review. Fortunately I had recently covered some of those concepts in other books in the series, so it wasn’t too hard for me to follow what was going on, but as sometimes happens with authors of books in this series, I think the author was simply trying to cover too much. Even so, this is a nice introductory text on the topic.

I have added some links and quotes related to the first half or so of the book below. I prepared the link list before I started gathering quotes for my coverage, so there may be more overlap than usual between the topics covered in the quotes and those covered in the links (I normally reserve the links for topics and concepts covered in these books which I don’t find it necessary to cover in detail in the text – they are meant to indicate which sorts of topics the book covers aside from those included in the text coverage).

“According to Einstein’s mass–energy equation, the mass of any composite stable object has to be less than the sum of the masses of the parts; the difference is the binding energy of the object. […] The general features of the binding energies are simply understood as follows. We have seen that the measured radii of nuclei [increase] with the cube root of the mass number A. This is consistent with a structure of close packed nucleons. If each nucleon could only interact with its closest neighbours, the total binding energy would then itself be proportional to the number of nucleons. However, this would be an overestimate because nucleons at the surface of the nucleus would not have a complete set of nearest neighbours with which to interact […]. The binding energy would be reduced by the number of surface nucleons and this would be proportional to the surface area, itself proportional to A2/3. So far we have considered only the attractive short-range nuclear binding. However, the protons carry an electric charge and hence experience an electrical repulsion between each other. The electrical force between two protons is much weaker than the nuclear force at short distances but dominates at larger distances. Furthermore, the total electrical contribution increases with the number of pairs of protons.”

“The main characteristics of the empirical binding energy of nuclei […] can now be explained. For the very light nuclei, all the nucleons are in the surface, the electrical repulsion is negligible, and the binding energy increases as the volume and number of nucleons increases. Next, the surface effects start to slow the rate of growth of the binding energy yielding a region of most stable nuclei near charge number Z = 28 (iron). Finally, the electrical repulsion steadily increases until we reach the most massive stable nucleus (lead-208). Between iron and lead, not only does the binding energy decrease so also do the proton to neutron ratios since the neutrons do not experience the electrical repulsion. […] as the nuclei get heavier the Coulomb repulsion term requires an increasing number of neutrons for stability […] For an explanation of [the] peaks, we must turn to the quantum nature of the problem. […] Filled shells corresponded to particularly stable electronic structures […] In the nuclear case, a shell structure also exists separately for both the neutrons and the protons. […] Closed-shell nuclei are referred to as ‘magic number’ nuclei. […] there is a particular stability for nuclei with equal numbers of protons and neutrons.”
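
The volume, surface, Coulomb, asymmetry, and pairing contributions described in the last two quotes are the ingredients of the standard semi-empirical (Bethe–Weizsäcker) mass formula. A quick sketch (the coefficients are approximate; slightly different published sets exist) reproduces the overall shape of the binding-energy curve:

```python
import numpy as np

# Semi-empirical (Bethe-Weizsacker) binding energy; coefficients in MeV are
# approximate and several slightly different published sets exist.
aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy(A, Z):
    B = (aV * A                              # volume term: each nucleon bound to its neighbours
         - aS * A ** (2 / 3)                 # surface nucleons have fewer neighbours
         - aC * Z * (Z - 1) / A ** (1 / 3)   # Coulomb repulsion between proton pairs
         - aA * (A - 2 * Z) ** 2 / A)        # asymmetry: penalty for neutron/proton imbalance
    if A % 2 == 1:
        delta = 0.0                          # pairing term
    elif Z % 2 == 0:
        delta = aP / np.sqrt(A)              # even-even nuclei are extra stable
    else:
        delta = -aP / np.sqrt(A)             # odd-odd nuclei are less stable
    return B + delta

for name, A, Z in [("Fe-56", 56, 26), ("Pb-208", 208, 82), ("U-238", 238, 92)]:
    print(f"{name}: ~{binding_energy(A, Z) / A:.2f} MeV per nucleon")
```

With these coefficients Fe-56 comes out near 8.8 MeV per nucleon and the heavier nuclei progressively lower, which is the shape of the curve described in the quotes.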

“As we move off the line of stable nuclei, by adding or subtracting neutrons, the isotopes become increasingly less stable indicated by increasing levels of beta radioactivity. Nuclei with a surfeit of neutrons emit an electron, hence converting one of the neutrons into a proton, while isotopes with a neutron deficiency can emit a positron with the conversion of a proton into a neutron. For the heavier nuclei, the neutron to proton ratio can be reduced by emitting an alpha particle. All nuclei heavier than lead are unstable and hence radioactive alpha emitters. […] The fact that almost all the radioactive isotopes heavier than lead follow [a] kind of decay chain and end up as stable isotopes of lead explains this element’s anomalously high natural abundance.”

“When two particles collide, they transfer energy and momentum between themselves. […] If the target is much lighter than the projectile, the projectile sweeps it aside with little loss of energy and momentum. If the target is much heavier than the projectile, the projectile simply bounces off the target with little loss of energy. The maximum transfer of energy occurs when the target and the projectile have the same mass. In trying to slow down the neutrons, we need to pass them through a moderator containing scattering centres of a similar mass. The obvious candidate is hydrogen, in which the single proton of the nucleus is the particle closest in mass to the neutron. At first glance, it would appear that water, with its low cost and high hydrogen content, would be the ideal moderator. There is a problem, however. Slow neutrons can combine with protons to form an isotope of hydrogen, deuterium. This removes neutrons from the chain reaction. To overcome this, the uranium fuel has to be enriched by increasing the proportion of uranium-235; this is expensive and technically difficult. An alternative is to use heavy water, that is, water in which the hydrogen is replaced by deuterium. It is not quite as effective as a moderator but it does not absorb neutrons. Heavy water is more expensive and its production more technically demanding than natural water. Finally, graphite (carbon) has a mass of 12 and hence is less efficient requiring a larger reactor core, but it is inexpensive and easily available.”
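
The point about energy transfer and moderator mass can be made quantitative with two standard formulas: the maximum fractional energy transfer in a single elastic collision, 4A/(1+A)², and the average logarithmic energy loss per collision, which gives the typical number of collisions needed to thermalize a fission neutron. The sketch below (my own illustration, using the mass number as a stand-in for the mass ratio) compares the moderators discussed in the quote:

```python
import numpy as np

# Energy transfer and moderation sketch (mass number used as a stand-in for the
# neutron-to-nucleus mass ratio; thermal energy taken as 0.025 eV, fission ~2 MeV).
def max_energy_transfer(A):
    """Maximum fraction of the neutron's energy transferable in one elastic collision."""
    return 4.0 * A / (1.0 + A) ** 2

def mean_log_decrement(A):
    """Average logarithmic energy loss per collision (xi)."""
    if A == 1:
        return 1.0
    return 1.0 + (A - 1) ** 2 / (2.0 * A) * np.log((A - 1) / (A + 1))

for name, A in [("hydrogen", 1), ("deuterium", 2), ("carbon", 12)]:
    xi = mean_log_decrement(A)
    n_collisions = np.log(2e6 / 0.025) / xi      # collisions to slow 2 MeV -> 0.025 eV
    print(f"{name:9s}: max transfer per collision {max_energy_transfer(A):.2f}, "
          f"~{n_collisions:.0f} collisions to thermalize")
```

Hydrogen needs roughly 18 collisions, deuterium about 25, and carbon over a hundred, which is why a graphite-moderated core has to be so much larger.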

“[During the Manhattan Project,] Oak Ridge, Tennessee, was chosen as the facility to develop techniques for uranium enrichment (increasing the relative abundance of uranium-235) […] a giant gaseous diffusion facility was developed. Gaseous uranium hexafluoride was forced through a semi permeable membrane. The lighter isotopes passed through faster and at each pass through the membrane the uranium hexafluoride became more and more enriched. The technology is very energy consuming […]. At its peak, Oak Ridge consumed more electricity than New York and Washington DC combined. Almost one-third of all enriched uranium is still produced by this now obsolete technology. The bulk of enriched uranium today is produced in high-speed centrifuges which require much less energy.”

“In order to sustain a nuclear chain reaction, it is essential to have a critical mass of fissile material. This mass depends upon the fissile fuel being used and the topology of the structure containing it. […] The chain reaction is maintained by the neutrons and many of these leave the surface without contributing to the reaction chain. Surrounding the fissile material with a blanket of neutron reflecting material, such as beryllium metal, will keep the neutrons in play and reduce the critical mass. Partially enriched uranium will have an increased critical mass and natural uranium (0.7% uranium-235) will not go critical at any mass without a moderator to increase the number of slow neutrons which are the dominant fission triggers. The critical mass can also be decreased by compressing the fissile material.”

“It is now more than 50 years since operations of the first civil nuclear reactor began. In the intervening years, several hundred reactors have been operating, in total amounting to nearly 50 million hours of experience. This cumulative experience has led to significant advances in reactor design. Different reactor types are defined by their choice of fuel, moderator, control rods, and coolant systems. The major advances leading to greater efficiency, increased economy, and improved safety are referred to as ‘generations’. […] [F]irst generation reactors […] had the dual purpose to make electricity for public consumption and plutonium for the Cold War stockpiles of nuclear weapons. Many of the features of the design were incorporated to meet the need for plutonium production. These impacted on the electricity-generating cost and efficiency. The most important of these was the use of unenriched uranium due to the lack of large-scale enrichment plants in the UK, and the high uranium-238 content was helpful in the plutonium production but made the electricity generation less efficient.”

“PWRs, BWRs, and VVERs are known as LWRs (Light Water Reactors). LWRs dominate the world’s nuclear power programme, with the USA operating 69 PWRs and 35 BWRs; Japan operates 63 LWRs, the bulk of which are BWRs; and France has 59 PWRs. Between them, these three countries generate 56% of the world’s nuclear power. […] In the 1990s, a series of advanced versions of the Generation II and III reactors began to receive certification. These included the ACR (Advanced CANDU Reactor), the EPR (European Pressurized Reactor), and Westinghouse AP1000 and APR1400 reactors (all developments of the PWR) and ESBWR (a development of the BWR). […] The ACR uses slightly enriched uranium and a light water coolant, allowing the core to be halved in size for the same power output. […] It would appear that two of the Generation III+ reactors, the EPR […] and AP1000, are set to dominate the world market for the next 20 years. […] the EPR is considerably safer than current reactor designs. […] A major advance is that the generation 3+ reactors produce only about 10% of waste compared with earlier versions of LWRs. […] China has officially adopted the AP1000 design as a standard for future nuclear plants and has indicated a wish to see 100 nuclear plants under construction or in operation by 2020.”

“All thermal electricity-generating systems are examples of heat engines. A heat engine takes energy from a high-temperature environment to a low-temperature environment and in the process converts some of the energy into mechanical work. […] In general, the efficiency of the thermal cycle increases as the temperature difference between the low-temperature environment and the high-temperature environment increases. In PWRs, and nearly all thermal electricity-generating plants, the efficiency of the thermal cycle is 30–35%. At the much higher operating temperatures of Generation IV reactors, typically 850–1000°C, it is hoped to increase this to 45–50%.
During the operation of a thermal nuclear reactor, there can be a build-up of fission products known as reactor poisons. These are materials with a large capacity to absorb neutrons and this can slow down the chain reaction; in extremes, it can lead to a complete close-down. Two important poisons are xenon-135 and samarium-149. […] During steady state operation, […] xenon builds up to an equilibrium level in 40–50 hours when a balance is reached between […] production […] and the burn-up of xenon by neutron capture. If the power of the reactor is increased, the amount of xenon increases to a higher equilibrium and the process is reversed if the power is reduced. If the reactor is shut down the burn-up of xenon ceases, but the build-up of xenon continues from the decay of iodine. Restarting the reactor is impeded by the higher level of xenon poisoning. Hence it is desirable to keep reactors running at full capacity as long as possible and to have the capacity to reload fuel while the reactor is on line. […] Nuclear plants operate at highest efficiency when operated continually close to maximum generating capacity. They are thus ideal for provision of base load. If their output is significantly reduced, then the build-up of reactor poisons can impact on their efficiency.”
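
The efficiency point in the first paragraph of this quote is essentially the Carnot limit, 1 − T_cold/T_hot (temperatures in kelvin). A quick sketch with rough, assumed temperatures (a PWR coolant around 320°C, a Generation IV design around 900°C, a 30°C sink) shows why higher operating temperatures are attractive; real cycles reach only a fraction of these limits, consistent with the 30–35% and 45–50% figures quoted:

```python
# Carnot limit for the thermal cycle; the source/sink temperatures below are
# rough assumptions (PWR coolant ~320 C, Generation IV ~900 C, 30 C sink).
def carnot_efficiency(t_hot_c, t_cold_c=30.0):
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15   # convert to kelvin
    return 1.0 - t_cold / t_hot

for name, t_hot in [("PWR", 320.0), ("Generation IV", 900.0)]:
    print(f"{name}: Carnot limit ~ {carnot_efficiency(t_hot):.0%}")
```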

Links:

Radioactivity. Alpha decay. Beta decay. Gamma decay. Free neutron decay.
Periodic table.
Rutherford scattering.
Isotope.
Neutrino. Positron. Antineutrino.
Binding energy.
Mass–energy equivalence.
Electron shell.
Decay chain.
Heisenberg uncertainty principle.
Otto Hahn. Lise Meitner. Fritz Strassman. Enrico Fermi. Leo Szilárd. Otto Frisch. Rudolf Peierls.
Uranium 238. Uranium 235. Plutonium.
Nuclear fission.
Chicago Pile 1.
Manhattan Project.
Uranium hexafluoride.
Heavy water.
Nuclear reactor coolant. Control rod.
Critical mass. Nuclear chain reaction.
Magnox reactor. UNGG reactor. CANDU reactor.
ZEEP.
Nuclear reactor classifications (a lot of the distinctions included in this article are also described in some detail in the book, which covers these topics extensively).
USS Nautilus.
Nuclear fuel cycle.
Thorium-based nuclear power.
Heat engine. Thermodynamic cycle. Thermal efficiency.
Reactor poisoning. Xenon 135. Samarium 149.
Base load.

December 7, 2017 | Books, Chemistry, Engineering, Physics

Occupational Epidemiology (II)

Some more observations from the book below.

“RD [Retinal detachment] is the separation of the neurosensory retina from the underlying retinal pigment epithelium.1 RD is often preceded by posterior vitreous detachment — the separation of the posterior vitreous from the retina as a result of vitreous degeneration and shrinkage2 — which gives rise to the sudden appearance of floaters and flashes. Late symptoms of RD may include visual field defects (shadows, curtains) or even blindness. The success rate of RD surgery has been reported to be over 90%;3 however, a loss of visual acuity is frequently reported by patients, particularly if the macula is involved.4 Since the natural history of RD can be influenced by early diagnosis, patients experiencing symptoms of posterior vitreous detachment are advised to undergo an ophthalmic examination.5 […] Studies of the incidence of RD give estimates ranging from 6.3 to 17.9 cases per 100 000 person-years.6 […] Age is a well-known risk factor for RD. In most studies the peak incidence was recorded among subjects in their seventh decade of life. A secondary peak at a younger age (20–30 years) has been identified […] attributed to RD among highly myopic patients.6 Indeed, depending on the severity, myopia is associated with a four- to ten-fold increase in risk of RD.7 [Diabetics with retinopathy are also at increased risk of RD, US] […] While secondary prevention of RD is current practice, no effective primary prevention strategy is available at present. The idea is widespread among practitioners that RD is not preventable, probably the consequence of our historically poor understanding of the aetiology of RD. For instance, on the website of the Mayo Clinic — one of the top-ranked hospitals for ophthalmology in the US — it is possible to read that ‘There’s no way to prevent retinal detachment’.9”

“Intraocular pressure […] is influenced by physical activity. Dynamic exercise causes an acute reduction in intraocular pressure, whereas physical fitness is associated with a lower baseline value.29 Conversely, a sudden rise in intraocular pressure has been reported during the Valsalva manoeuvre.30-32 […] Occupational physical activity may […] cause both short- and long-term variations in intraocular pressure. On the one hand, physically demanding jobs may contribute to decreased baseline levels by increasing physical fitness but, on the other hand, lifting tasks may cause an important acute increase in pressure. Moreover, the eye of a manual worker who performs repeated lifting tasks involving the Valsalva manoeuvre may undergo several dramatic changes in intraocular pressure within a single working shift. […] A case-control study was carried out to test the hypothesis that repeated lifting tasks involving the Valsalva manoeuvre could be a risk factor for RD. […] heavy lifting was a strong risk factor for RD (OR 4.4, 95% CI 1.6–13). Intriguingly, body mass index (BMI) also showed a clear association with RD (top quartile: OR 6.8, 95% CI 1.6–29). […] Based on their findings, the authors concluded that heavy occupational lifting (involving the Valsalva manoeuvre) may be a relevant risk factor for RD in myopics.
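
As an aside, the odds ratio and confidence interval reported in a case-control study like this come straight from the 2×2 table. A minimal sketch (the counts below are invented for illustration and are not the actual data behind the OR of 4.4):

```python
import numpy as np

# Odds ratio with 95% CI from a 2x2 case-control table (counts hypothetical;
# they are NOT the counts behind the OR 4.4 reported above).
def odds_ratio(a, b, c, d):
    """a/b = exposed/unexposed cases, c/d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of the log odds ratio
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
    return or_, lo, hi

print(odds_ratio(a=21, b=40, c=10, d=84))   # -> roughly (4.4, 1.9, 10.2)
```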

“The proportion of the world’s population over 60 is forecast to double from 11.6% in 2012 to 21.8% in 2050.1 […] the International Labour Organization notes that, worldwide, just 40% of the working age population has legal pension coverage, and only 26% of the working population is effectively covered by old-age pension schemes. […] in less developed regions, labour force participation in those over 65 is much higher than in more developed regions.8 […] Longer working lives increase cumulative exposures, as well as increasing the time since exposure — important when there is a long latency period between exposure and resultant disease. Further, some exposures may have a greater effect when they occur to older workers, e.g. carcinogens that are promoters rather than initiators. […] Older workers tend to have more chronic health conditions. […] Older workers have fewer injuries, but take longer to recover. […] For some ‘knowledge workers’, like physicians, even a relatively minor cognitive decline […] might compromise their competence. […]  Most past studies have treated age as merely a confounding variable and rarely, if ever, have considered it an effect modifier. […]  Jex and colleagues24 argue that conceptually we should treat age as the variable of interest so that other variables are viewed as moderating the impact of age. […] The single best improvement to epidemiological research on ageing workers is to conduct longitudinal studies, including follow-up of workers into retirement. Cross-sectional designs almost certainly incur the healthy survivor effect, since unhealthy workers may retire early.25 […] Analyses should distinguish ageing per se, genetic factors, work exposures, and lifestyle in order to understand their relative and combined effects on health.”

“Musculoskeletal disorders have long been recognized as an important source of morbidity and disability in many occupational populations.1,2 Most musculoskeletal disorders, for most people, are characterized by recurrent episodes of pain that vary in severity and in their consequences for work. Most episodes subside uneventfully within days or weeks, often without any intervention, though about half of people continue to experience some pain and functional limitations after 12 months.3,4 In working populations, musculoskeletal disorders may lead to a spell of sickness absence. Sickness absence is increasingly used as a health parameter of interest when studying the consequences of functional limitations due to disease in occupational groups. Since duration of sickness absence contributes substantially to the indirect costs of illness, interventions increasingly address return to work (RTW).5 […] The Clinical Standards Advisory Group in the United Kingdom reported RTW within 2 weeks for 75% of all low back pain (LBP) absence episodes and suggested that approximately 50% of all work days lost due to back pain in the working population are from the 85% of people who are off work for less than 7 days.6″

“Any RTW curve over time can be described with a mathematical Weibull function.15 This Weibull function is characterized by a scale parameter λ and a shape parameter k. The scale parameter λ is a function of different covariates that include the intervention effect, preferably expressed as hazard ratio (HR) between the intervention group and the reference group in a Cox’s proportional hazards regression model. The shape parameter k reflects the relative increase or decrease in survival time, thus expressing how much the RTW rate will decrease with prolonged sick leave. […] a HR as measure of effect can be introduced as a covariate in the scale parameter λ in the Weibull model and the difference in areas under the curve between the intervention model and the basic model will give the improvement in sickness absence days due to the intervention. By introducing different times of starting the intervention among those workers still on sick leave, the impact of timing of enrolment can be evaluated. Subsequently, the estimated changes in total sickness absence days can be expressed in a benefit/cost ratio (BC ratio), where benefits are the costs saved due to a reduction in sickness absence and costs are the expenditures relating to the intervention.15”
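
The calculation described in this quote is easy to sketch. Below, the 'natural' RTW pattern is a Weibull survival curve, the intervention multiplies the RTW hazard by an assumed HR from an assumed enrolment day onwards, and the difference in areas under the two curves gives the expected sickness-absence days saved, which can then be set against the intervention cost. Every number (scale, shape, HR, costs, enrolment times) is an arbitrary assumption, so only the mechanics, not the outputs, matter here:

```python
import numpy as np

# Sketch of the Weibull RTW calculation described above; every number below
# (scale, shape, HR, costs, enrolment days) is an arbitrary assumption.
def still_off_work(t, lam, k, hr=1.0, t_start=0.0):
    """Proportion still on sick leave at day t. The intervention multiplies the
    RTW hazard by hr, but only from day t_start onwards (proportional hazards)."""
    s = lambda u: np.exp(-(u / lam) ** k)
    t = np.asarray(t, dtype=float)
    before = s(np.minimum(t, t_start))               # natural course up to enrolment
    after = (s(np.maximum(t, t_start)) / s(t_start)) ** hr
    return before * after

days = np.arange(0.0, 365.0)
lam, k, hr = 30.0, 0.7, 1.4                          # natural RTW pattern, intervention effect
for t_start in (14.0, 56.0):                         # enrol at 2 weeks vs 8 weeks
    s0 = still_off_work(days, lam, k)                # no intervention
    s1 = still_off_work(days, lam, k, hr, t_start)   # with intervention
    days_saved = (s0 - s1).sum()                     # difference in area under the RTW curves
    enrolled = np.exp(-(t_start / lam) ** k)         # fraction still off work at enrolment
    bc = (days_saved * 200.0) / (1500.0 * enrolled)  # 200/day of absence vs 1500 per enrolee
    print(f"enrolment at day {t_start:.0f}: ~{days_saved:.1f} absence days saved, B/C ~ {bc:.1f}")
```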

“A crucial factor in understanding why interventions are effective or not is the timing of the enrolment of workers on sick leave into the intervention. The RTW pattern over time […] has important consequences for appropriate timing of the best window for effective clinical and occupational interventions. The evidence presented by Palmer and colleagues clearly suggests that [in the context of LBP] a stepped care approach is required. In the first step of rapid RTW, most workers will return to work even without specific interventions. Simple, short interventions involving effective coordination and cooperation between primary health care and the workplace will be sufficient to help the majority of workers to achieve an early RTW. In the second step, more expensive, structured interventions are reserved for those who are having difficulties returning, typically between 4 weeks and 3 months. However, to date there is little evidence on the optimal timing of such interventions for workers on sick leave due to LBP.14,15 […] the cost-benefits of a structured RTW intervention among workers on sick leave will be determined by the effectiveness of the intervention, the natural speed of RTW in the target population, the timing of the enrolment of workers into the intervention, and the costs of both the intervention and of a day of sickness absence. […] The cost-effectiveness of a RTW intervention will be determined by the effectiveness of the intervention, the costs of the intervention and of a day of sickness absence, the natural course of RTW in the target population, the timing of the enrolment of workers into the RTW intervention, and the time lag before the intervention takes effect. The latter three factors are seldom taken into consideration in systematic reviews and guidelines for management of RTW, although their impact may easily be as important  as classical measures of effectiveness, such as effect size or HR.”

“In order to obtain information of the highest quality and utility, surveillance schemes have to be designed, set up, and managed with the same methodological rigour as high-calibre prospective cohort studies. Whether surveillance schemes are voluntary or not, considerable effort has to be invested to ensure a satisfactory and sufficient denominator, the best numerator quality, and the most complete ascertainment. Although the force of statute is relied upon in some surveillance schemes, even in these the initial and continuing motivation of the reporters (usually physicians) is paramount. […] There is a surveillance ‘pyramid’ within which the patient’s own perception is at the base, the GP is at a higher level, and the clinical specialist is close to the apex. The source of the surveillance reports affects the numerator because case severity and case mix differ according to the level in the pyramid.19 Although incidence rate estimates may be expected to be lower at the higher levels in the surveillance pyramid this is not necessarily always the case. […] Although surveillance undertaken by physicians who specialize in the organ system concerned or in occupational disease (or in both aspects) may be considered to be the medical ‘gold standard’ it can suffer from a more limited patient catchment because of various referral filters. Surveillance by GPs will capture numerator cases as close to the base of the pyramid as possible, but may suffer from greater diagnostic variation than surveillance by specialists. Limiting recruitment to GPs with a special interest, and some training, in occupational medicine is a compromise between the two levels.20

“When surveillance is part of a statutory or other compulsory scheme then incident case identification is a continuous and ongoing process. However, when surveillance is voluntary, for a research objective, it may be preferable to sample over shorter, randomly selected intervals, so as to reduce the demands associated with the data collection and ‘reporting fatigue’. Evidence so far suggests that sampling over shorter time intervals results in higher incidence estimates than continuous sampling.21 […] Although reporting fatigue is an important consideration in tempering conclusions drawn from […] multilevel models, it is possible to take account of this potential bias in various ways. For example, when evaluating interventions, temporal trends in outcomes resulting from other exposures can be used to control for fatigue.23,24 The phenomenon of reporting fatigue may be characterized by an ‘excess of zeroes’ beyond what is expected of a Poisson distribution and this effect can be quantified.27 […] There are several considerations in determining incidence from surveillance data. It is possible to calculate an incidence rate based on the general population, on the population of working age, or on the total working population,19 since these denominator bases are generally readily available, but such rates are not the most useful in determining risk. Therefore, incidence rates are usually calculated in respect of specific occupations or industries.22 […] Ideally, incidence rates should be expressed in relation to quantitative estimates of exposure but most surveillance schemes would require additional data collection as special exercises to achieve this aim.” [for much more on these topics, see also M’ikanatha & Iskander’s book.]
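
The 'excess of zeroes' idea is easy to illustrate: if a fraction of reporters stop reporting altogether, the share of zero returns will exceed what a Poisson distribution with the observed mean predicts. A small simulated sketch (all numbers invented):

```python
import numpy as np

# 'Excess of zeroes' sketch: if some reporters stop reporting, the observed share
# of zero returns exceeds what a Poisson distribution with the same mean predicts.
rng = np.random.default_rng(0)
true_rate = 1.5                                  # average cases per reporter per period
reports = rng.poisson(true_rate, size=300)
fatigued = rng.random(300) < 0.25                # assume 25% report nothing at all
reports[fatigued] = 0

observed_zero_share = np.mean(reports == 0)
poisson_zero_share = np.exp(-reports.mean())     # P(X = 0) under Poisson(observed mean)
print(f"observed zero share {observed_zero_share:.2f} vs "
      f"Poisson-expected {poisson_zero_share:.2f}")
```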

“Estimates of lung cancer risk attributable to occupational exposures vary considerably by geographical area and depend on study design, especially on the exposure assessment method, but may account for around 5–20% of cancers among men, but less (<5%) among women;2 among workers exposed to (suspected) lung carcinogens, the percentage will be higher. […] most exposure to known lung carcinogens originates from occupational settings and will affect millions of workers worldwide.  Although it has been established that these agents are carcinogenic, only limited evidence is available about the risks encountered at much lower levels in the general population. […] One of the major challenges in community-based occupational epidemiological studies has been valid assessment of the occupational exposures experienced by the population at large. Contrary to the detailed information usually available for an industrial population (e.g. in a retrospective cohort study in a large chemical company) that often allows for quantitative exposure estimation, community-based studies […] have to rely on less precise and less valid estimates. The choice of method of exposure assessment to be applied in an epidemiological study depends on the study design, but it boils down to choosing between acquiring self-reported exposure, expert-based individual exposure assessment, or linking self-reported job histories with job-exposure matrices (JEMs) developed by experts. […] JEMs have been around for more than three decades.14 Their main distinction from either self-reported or expert-based exposure assessment methods is that exposures are no longer assigned at the individual subject level but at job or task level. As a result, JEMs make no distinction in assigned exposure between individuals performing the same job, or even between individuals performing a similar job in different companies. […] With the great majority of occupational exposures having a rather low prevalence (<10%) in the general population it is […] extremely important that JEMs are developed aiming at a highly specific exposure assessment so that only jobs with a high likelihood (prevalence) and intensity of exposure are considered to be exposed. Aiming at a high sensitivity would be disastrous because a high sensitivity would lead to an enormous number of individuals being assigned an exposure while actually being unexposed […] Combinations of the methods just described exist as well”.
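
A job-exposure matrix is, at its simplest, a lookup table from job codes to exposure prevalence and intensity, which can then be combined with a self-reported job history to give a cumulative exposure score. A toy sketch (job codes, values, and the job history are all made up):

```python
# Toy job-exposure matrix (JEM): exposure is assigned at the job level, not the
# individual level. Codes, prevalences, intensities, and the job history are made up.
jem = {
    "7212": (0.8, 2.5),   # hypothetical job code: (exposure prevalence, typical intensity)
    "8322": (0.6, 1.0),
    "4110": (0.0, 0.0),   # considered unexposed: keeps the assessment specific
}

job_history = [("4110", 5), ("8322", 12), ("7212", 8)]   # (job code, years held)

cumulative = 0.0
for code, years in job_history:
    prevalence, intensity = jem.get(code, (0.0, 0.0))
    cumulative += prevalence * intensity * years     # everyone in the job gets the same score
print(f"cumulative exposure score: {cumulative:.1f} (arbitrary units)")
```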

“Community-based studies, by definition, address a wider range of types of exposure and a much wider range of encountered exposure levels (e.g. relatively high exposures in primary production but often lower in downstream use, or among indirectly exposed individuals). A limitation of single community-based studies is often the relatively low number of exposed individuals. Pooling across studies might therefore be beneficial. […] Pooling projects need careful planning and coordination, because the original studies were conducted for different purposes, at different time periods, using different questionnaires. This heterogeneity is sometimes perceived as a disadvantage but also implies variations that can be studied and thereby provide important insights. Every pooling project has its own dynamics but there are several general challenges that most pooling projects confront. Creating common variables for all studies can stretch from simple re-naming of variables […] or recoding of units […] to the re-categorization of national educational systems […] into years of formal education. Another challenge is to harmonize the different classification systems of, for example, diseases (e.g. International Classification of Disease (ICD)-9 versus ICD-10), occupations […], and industries […]. This requires experts in these respective fields as well as considerable time and money. Harmonization of data may mean losing some information; for example, ISCO-68 contains more detail than ISCO-88, which makes it possible to recode ISCO-68 to ISCO-88 with only a little loss of detail, but it is not possible to recode ISCO-88 to ISCO-68 without losing one or two digits in the job code. […] Making the most of the data may imply that not all studies will qualify for all analyses. For example, if a study did not collect data regarding lung cancer cell type, it can contribute to the overall analyses but not to the cell type-specific analyses. It is important to remember that the quality of the original data is critical; poor data do not become better by pooling.”

December 6, 2017 | Books, Cancer/oncology, Demographics, Epidemiology, Health Economics, Medicine, Ophthalmology, Statistics

The history of astronomy

It’s been a while since I read this book, and I was for a while strongly considering not blogging it at all. In the end I figured I ought to cover it after all in at least a little bit of detail, though when I made the decision to cover the book here I also decided not to cover it in nearly as much detail as I usually cover the books in this series.

Below some random observations from the book which I found sufficiently interesting to add here.

“The Almagest is a magisterial work that provided geometrical models and related tables by which the movements of the Sun, Moon, and the five lesser planets could be calculated for the indefinite future. […] Its catalogue contains over 1,000 fixed stars arranged in 48 constellations, giving the longitude, latitude, and apparent brightness of each. […] the Almagest would dominate astronomy like a colossus for 14 centuries […] In the universities of the later Middle Ages, students would be taught Aristotle in philosophy and a simplified Ptolemy in astronomy. From Aristotle they would learn the basic truth that the heavens rotate uniformly about the central Earth. From the simplified Ptolemy they would learn of epicycles and eccentrics that violated this basic truth by generating orbits whose centre was not the Earth; and those expert enough to penetrate deeper into the Ptolemaic models would encounter equant theories that violated the (yet more basic) truth that heavenly motion is uniform. […] with the models of the Almagest – whose parameters would be refined over the centuries to come – the astronomer, and the astrologer, could compute the future positions of the planets with economy and reasonable accuracy. There were anomalies – the Moon, for example, would vary its apparent size dramatically in the Ptolemaic model but does not do so in reality, and Venus and Mercury were kept close to the Sun in the sky by a crude ad hoc device – but as a geometrical compendium of how to grind out planetary tables, the Almagest worked, and that was what mattered.”

“The revival of astronomy – and astrology – among the Latins was stimulated around the end of the first millennium when the astrolabe entered the West from Islamic Spain. Astrology in those days had a [‘]rational[‘] basis rooted in the Aristotelian analogy between the microcosm – the individual living body – and the macrocosm, the cosmos as a whole. Medical students were taught how to track the planets, so that they would know when the time was favourable for treating the corresponding organs in their patients.” [Aaargh! – US]

“The invention of printing in the 15th century had many consequences, none more significant than the stimulus it gave to the mathematical sciences. All scribes, being human, made occasional errors in preparing a copy of a manuscript. These errors would often be transmitted to copies of the copy. But if the works were literary and the later copyists attended to the meaning of the text, they might recognize and correct many of the errors introduced by their predecessors. Such control could rarely be exercised by copyists required to reproduce texts with significant numbers of mathematical symbols. As a result, a formidable challenge faced the medieval student of a mathematical or astronomical treatise, for it was available to him only in a manuscript copy that had inevitably become corrupt in transmission. After the introduction of printing, all this changed.”

“Copernicus, like his predecessors, had been content to work with observations handed down from the past, making new ones only when unavoidable and using instruments that left much to be desired. Tycho [Brahe], whose work marks the watershed between observational astronomy ancient and modern, saw accuracy of observation as the foundation of all good theorizing. He dreamed of having an observatory where he could pursue the research and development of precision instrumentation, and where a skilled team of assistants would test the instruments even as they were compiling a treasury of observations. Exploiting his contacts at the highest level, Tycho persuaded King Frederick II of Denmark to grant him the fiefdom of the island of Hven, and there, between 1576 and 1580, he constructed Uraniborg (‘Heavenly Castle’), the first scientific research institution of the modern era. […] Tycho was the first of the modern observers, and in his catalogue of 777 stars the positions of the brightest are accurate to a minute or so of arc; but he himself was probably most proud of his cosmology, which Galileo was not alone in seeing as a retrograde compromise. Tycho appreciated the advantages of heliocentic planetary models, but he was also conscious of the objections […]. In particular, his inability to detect annual parallax even with his superb instrumentation implied that the Copernican excuse, that the stars were too far away for annual parallax to be detected, was now implausible in the extreme. The stars, he calculated, would have to be at least 700 times further away than Saturn for him to have failed for this reason, and such a vast, purposeless empty space between the planets and the stars made no sense. He therefore looked for a cosmology that would have the geometrical advantages of the heliocentric models but would retain the Earth as the body physically at rest at the centre of the cosmos. The solution seems obvious in hindsight: make the Sun (and Moon) orbit the central Earth, and make the five planets into satellites of the Sun.”

“Until the invention of the telescope, each generation of astronomers had looked at much the same sky as their predecessors. If they knew more, it was chiefly because they had more books to read, more records to mine. […] Galileo could say of his predecessors, ‘If they had seen what we see, they would have judged as we judge’; and ever since his time, the astronomers of each generation have had an automatic advantage over their predecessors, because they possess apparatus that allows them access to objects unseen, unknown, and therefore unstudied in the past. […] astronomers [for a long time] found themselves in a situation where, as telescopes improved, the two coordinates of a star’s position on the heavenly sphere were being measured with ever increasing accuracy, whereas little was known of the star’s third coordinate, distance, except that its scale was enormous. Even the assumption that the nearest stars were the brightest was […rightly, US] being called into question, as the number of known proper motions increased and it emerged that not all the fastest-moving stars were bright.”

“We know little of how Newton’s thinking developed between 1679 and the visit from Halley in 1684, except for a confused exchange of letters between Newton and the Astronomer Royal, John Flamsteed […] the visit from the suitably deferential and tactful Halley encouraged Newton to promise him written proof that elliptical orbits would result from an inverse-square force of attraction residing in the Sun. The drafts grew and grew, and eventually resulted in The Mathematical Principles of Natural Philosophy (1687), better known in its abbreviated Latin title of the Principia. […] All three of Kepler’s laws (the second in ‘area’ form), which had been derived by their author from observations, with the help of a highly dubious dynamics, were now shown to be consequences of rectilinear motion under an inverse-square force. […] As the drafts of Principia multiplied, so too did the number of phenomena that at last found their explanation. The tides resulted from the difference between the effects on the land and on the seas of the attraction of Sun and Moon. The spinning Earth bulged at the equator and was flattened at the poles, and so was not strictly spherical; as a result, the attraction of Sun and Moon caused the Earth’s axis to wobble and so generated the precession of the equinoxes first noticed by Hipparchus. […] Newton was able to use the observed motions of the moons of Earth, Jupiter, and Saturn to calculate the masses of the parent planets, and he found that Jupiter and Saturn were huge compared to Earth – and, in all probability, to Mercury, Venus, and Mars.”

December 5, 2017 Posted by | Astronomy, Books, History, Mathematics, Physics | Leave a comment

Occupational Epidemiology (I)

Below some observations from the first chapters of the book, which I called ‘very decent’ on goodreads.

“Coal workers were amongst the first occupational groups to be systematically studied in well-designed epidemiological research programmes. As a result, the causes and spectrum of non-malignant respiratory disease among coal workers have been rigorously explored and characterized.1,2 While respirable silica (quartz) in mining has long been accepted as a cause of lung disease, the important contributing role of coal mine dust was questioned until the middle of the twentieth century.3 Occupational exposure to coal mine dust has now been shown unequivocally to cause excess mortality and morbidity from non-malignant respiratory disease, including coal workers’ pneumoconiosis (CWP) and chronic obstructive pulmonary disease (COPD). The presence of respirable quartz, often a component of coal mine dust, contributes to disease incidence and severity, increasing the risk of morbidity and mortality in exposed workers.”

“Coal is classified into three major coal ranks: lignite, bituminous, and anthracite from lowest to highest carbon content and heating value. […] In the US, the Bureau of Mines and the Public Health Service actively studied anthracite and bituminous coal mines and miners throughout the mid-1900s.3 These studies showed significant disease among workers with minimal silica exposure, suggesting that coal dust itself was toxic; however, these results were suppressed and not widely distributed. It was not until the 1960s that a popular movement of striking coal miners and their advocates demanded legislation to prevent, study, and compensate miners for respiratory diseases caused by coal dust exposure. […] CWP [Coal Workers’ Pneumoconiosis] is an interstitial lung disease resulting from the accumulation of coal mine dust in miners’ lungs and the tissue reaction to its presence. […] It is classified […] as simple or complicated; the latter is also known as progressive massive fibrosis (PMF) […] PMF is a progressive, debilitating disease which is predictive of disability and mortality […] A causal exposure-response relationship has been established between cumulative coal mine dust exposure and risk of developing both CWP and PMF,27-31 and with mortality from pneumoconiosis and PMF.23-26, 30 Incidence, the stage of CWP, and progression to PMF, as well as mortality, are positively associated with increasing proportion of respirable silica in the coal mine dust32 and higher coal rank. […] Not only do coal workers experience occupational mortality from CWP and PMF,12, 23-26 they also have excess mortality from COPD compared to the general population. Cross-sectional and longitudinal studies […] have demonstrated an exposure-response relationship between cumulative coal mine dust exposure and chronic bronchitis,36-40 respiratory symptoms,41 and pulmonary function even in the presence of normal radiographic findings.42 The relationship between the rate of decline of lung function and coal mine dust exposure is not linear, the greatest reduction occurring in the first few years of exposure.43”

“Like most occupational cohort studies, those of coal workers are affected by the healthy worker effect. A strength of the PFR and NCS studies is the ability to use internal analysis (i.e. comparing workers by exposure level) which controls for selection bias at hire, one component of the effect.59 However, internal analyses may not fully control for ongoing selection bias if symptoms of adverse health effects are related to exposure (referred to as the healthy worker survivor effect) […] Work status is a key component of the healthy worker survivor effect, as are length of time since entering the industry and employment duration.61 Both the PFR and NCS studies have consistently found higher rates of symptoms and disease among former miners compared with current miners, consistent with a healthy worker survivor effect.62,63”

“Coal mining is rapidly expanding in the developing world. From 2007 to 2010 coal production declined in the US by 6% and Europe by 10% but increased in Eurasia by 9%, in Africa by 3%, and in Asia and Oceania by 19%.71 China saw a dramatic increase of 39% from 2007 to 2011. There have been few epidemiological studies published that characterize the disease burden among coal workers during this expansion but, in one study conducted among miners in Liaoning Province, China, rates of CWP were high.72 There are an estimated six million underground miners in China at present;73 hence even low disease rates will cause a high burden of illness and excess premature mortality.”

“Colonization with S. aureus may occur on mucous membranes of the respiratory or intestinal tract, or on other body surfaces, and is usually asymptomatic. Nasal colonization with S. aureus in the human population occurs among around 30% of individuals. Methicillin-resistant S. aureus (MRSA) are strains that have developed resistance to beta-lactam antibiotics […] and, as a result, may cause difficult-to-treat infections in humans. Nasal colonization with MRSA in the general population is low; the highest rate reported in a population-based survey was 1.5%.2,3 Infections with MRSA are associated with treatment failure and increased severity of disease.4,5 […] In 2004 a case of, at that time non-typeable, MRSA was reported in a 6-month-old girl admitted to a hospital in the Netherlands. […] Later on, this strain and some related strains appeared strongly associated with livestock production, and were labelled livestock-associated MRSA (LA-MRSA) and are nowadays referred to as MRSA ST398. […] It is common knowledge that the use of antimicrobial agents in humans, animals, and plants promotes the selection and spread of antimicrobial-resistant bacteria and resistance genes through genetic mutations and gene transfer.15 Antimicrobial agents are widely used in veterinary medicine and modern food animal production depends on the use of large amounts of antimicrobials for disease control. Use of antimicrobials probably played an important role in the emergence of MRSA ST398.”

“MRSA was rarely isolated from animals before 2000. […] Since 2005 onwards, LA-MRSA has been increasingly frequently reported in different food production animals, including cattle, pigs, and poultry […] The MRSA case illustrates the rapid emergence, and transmission from animals to humans, of a new strain of resistant micro-organisms from an animal reservoir, creating risks for different occupational groups. […] High animal-to-human transmission of ST398 has been reported in pig farming, leading to an elevated prevalence of nasal MRSA carriage ranging from a few per cent in Ireland up to 86% in German pig farmers […]. One study showed a clear association between the prevalence of MRSA carriage among participants from farms with MRSA colonized pigs (50%) versus 3% on farms without colonized pigs […] MRSA prevalence is low among animals from alternative breeding systems with low use of antimicrobials, also leading to low carriage rates in farmers.71 […] Veterinarians are […] frequently in direct contact with livestock, and are clearly at elevated risk of LA-MRSA carriage when compared to the general population. […] Of all LA-MRSA carrying individuals, a fraction appear to be persistent carriers. […] Few studies have examined transmission from humans to humans. Generally, studies among family members of livestock farmers show a considerably lower prevalence than among the farmers with more intense animal contact. […] Individuals who are ST398 carriers in the general population usually have direct animal contact.43,44 On the other hand, the emergence of ST398 isolates without known risk factors for acquisition and without a link to livestock has been reported.45 In addition, a human-specific ST398 clone has recently been identified and thus the spread of LA-MRSA from occupational populations to the general population cannot be ruled out.46 Transmission dynamics, especially between humans not directly exposed to animals, remain unclear and might be changing.”

“Enterobacteriaceae that produce ESBLs are an emerging concern in public health. ESBLs inactivate beta-lactam antimicrobials by hydrolysis and therefore cause resistance to various beta-lactam antimicrobials, including penicillins and cephalosporins.54 […] The genes encoding for ESBLs are often located on plasmids which can be transferred between different bacterial species. Also, coexistence with other types of antimicrobial resistance occurs. In humans, infections with ESBL-producing Enterobacteriaceae are associated with increased burden of disease and costs.58 A variety of ESBLs have been identified in bacteria derived from food-producing animals worldwide. The occurrence of different ESBL types depends on the animal species and the geographical area. […] High use of antimicrobials and inappropriate use of cephalosporins in livestock production are considered to be associated with the emergence and high prevalence of ESBL-producers in the animals.59-60 Food-producing animals can serve as a reservoir for ESBL producing Enterobacteriaceae and ESBL genes. […] recent findings suggest that transmission from animals to humans may occur through (in)direct contact with livestock during work. This may thus pose an occupational health risk for farmers and potentially for other humans with regular contact with this working population. […] Compared to MRSA, the dynamics of ESBLs seem more complex. […] The variety of potential ESBL transmission routes makes it complex to determine the role of direct contact with livestock as an occupational risk for ESBL carriage. However, the increasing occurrence of ESBLs in livestock worldwide and the emerging insight into transmission through direct contact suggest that farmers have a higher risk of becoming a carrier of ESBLs. Until now, there have not been sufficient data available to quantify the relative importance of this route of transmission.”

“Welders die more often from pneumonia than do their social class peers. This much has been revealed by successive analyses of occupational mortality for England and Wales. The pattern can now be traced back more than seven decades. During 1930–32, 285 deaths were observed with 171 expected;3 in 1949–53, 70 deaths versus 31 expected;4 in 1959–63, 101 deaths as compared with 54.9 expected;5 and in 1970–72, 66 deaths with 42.0 expected.6 […] The finding that risks decline after retirement is an argument against confounding by lifestyle variables such as smoking, as is the specificity of effect to lobar rather than bronchopneumonia. […] Analyses of death certificates […] support a case for a hazard that is reversible when exposure stops. […] In line with the mortality data, hospitalized pneumonia [has also] prove[n] to be more common among welders and other workers with exposure to metal fume than in workers from non-exposed jobs. Moreover, risks were confined to exposures in the previous 12 months […] Recently, inhalation experiments have confirmed that welding fume can promote bacterial growth in animals. […] A coherent body of evidence thus indicates that metal fume is a hazard for pneumonia. […] Presently, knowledge is lacking on the exposure-response relationship and what constitutes a ‘safe’ or ‘unsafe’ level or pattern of exposure to metal fume. […]  The pattern of epidemiological evidence […] is generally compatible with a hazard from iron in metal fume. Iron could promote infective risk in at least one of two ways: by acting as a growth nutrient for microorganisms, or as a cause of free radical injury. […] the Joint Committee on Vaccination and Immunisation, on behalf of the Department of Health in England, decided in November 2011 to recommend that ‘welders who have not received the pneumococcal polysaccharide vaccine (PPV23) previously should be offered a single dose of 0.5ml of PPV23 vaccine’ and that ‘employers should ensure that provision is in place for workers to receive PPV23’.”
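
The observed/expected counts quoted above are the ingredients of a standardized mortality ratio (SMR = observed/expected deaths); the book gives the raw numbers, and the quick calculation below (mine, not the book's) just turns them into SMRs to make the size of the excess easier to see:

```python
# Pneumonia mortality among welders in England and Wales:
# standardized mortality ratio (SMR) = observed / expected deaths,
# using the counts quoted above.
periods = {
    "1930-32": (285, 171),
    "1949-53": (70, 31),
    "1959-63": (101, 54.9),
    "1970-72": (66, 42.0),
}

for period, (observed, expected) in periods.items():
    smr = observed / expected
    print(f"{period}: SMR = {smr:.2f}")   # values above 1 indicate excess mortality
```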

December 2, 2017 Posted by | Books, Epidemiology, Infectious disease, Medicine | Leave a comment

Words

Most of these words are words which I encountered while reading the Jim Butcher books White Night, Small Favour, Turn Coat, and Changes.

Propitiate. Misericord. Skirling. Idiom. Cadge. Hapless. Roil. Kibble. Viridian. Kine. Shill. Steeple. Décolletage. Kukri. Rondure. Wee. Contrail. Servitor. Pastern. Fetlock.

Coterie. Crochet. Fibrillate. Knead. Divot. Avail. Tamale. Abalone. Cupola. Tuyere. Simulacrum. Bristle. Guff. Shimmy. Prow. Warble. Cannery. Twirl. Winch. Wheelhouse.

Teriyaki. Widdershins. Kibble. Slobber. Surcease. Amble. Invocation. Gasket. Chorale. Rivulet. Choker. Grimoire. Caduceus. Fussbudget. Pate. Scrunchie. Shamble. Ficus. Deposition. Grue.

Aliquot. Nape. Emanation. Atavistic. Menhir. Scrimshaw. Burble. Pauldron. Ornate. Stolid. Wry. Stamen. Ductwork. Speleothem. Philtrum. Hassock. Incipit. Planish. Rheology. Sinter.

 

November 29, 2017 Posted by | Books, Language | Leave a comment

Radioactivity

A few quotes from the book and some related links below. Here’s my very short goodreads review of the book.

Quotes:

“The main naturally occurring radionuclides of primordial origin are uranium-235, uranium-238, thorium-232, their decay products, and potassium-40. The average abundance of uranium, thorium, and potassium in the terrestrial crust is 2.6 parts per million, 10 parts per million, and 1% respectively. Uranium and thorium produce other radionuclides via neutron- and alpha-induced reactions, particularly deeply underground, where uranium and thorium have a high concentration. […] A weak source of natural radioactivity derives from nuclear reactions of primary and secondary cosmic rays with the atmosphere and the lithosphere, respectively. […] Accretion of extraterrestrial material, intensively exposed to cosmic rays in space, represents a minute contribution to the total inventory of radionuclides in the terrestrial environment. […] Natural radioactivity is [thus] mainly produced by uranium, thorium, and potassium. The total heat content of the Earth, which derives from this radioactivity, is 12.6 × 10^24 MJ (one megajoule = 1 million joules), with the crust’s heat content standing at 5.4 × 10^21 MJ. For comparison, this is significantly more than the 6.4 × 10^13 MJ globally consumed for electricity generation during 2011. This energy is dissipated, either gradually or abruptly, towards the external layers of the planet, but only a small fraction can be utilized. The amount of energy available depends on the Earth’s geological dynamics, which regulates the transfer of heat to the surface of our planet. The total power dissipated by the Earth is 42 TW (one TW = 1 trillion watts): 8 TW from the crust, 32.3 TW from the mantle, 1.7 TW from the core. This amount of power is small compared to the 174,000 TW arriving to the Earth from the Sun.”
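
The comparisons in that last passage are easy to verify; a trivial sketch using only the figures quoted above:

```python
# Ratios implied by the figures quoted above (all numbers from the quote).
heat_content_earth = 12.6e24   # MJ, total heat content deriving from radioactivity
electricity_2011 = 6.4e13      # MJ, global electricity generation in 2011

power_earth = 42          # TW, total power dissipated by the Earth
power_sun = 174_000       # TW, solar power arriving at the Earth

print(f"{heat_content_earth / electricity_2011:.1e}")   # ~2.0e11: heat content vs. one year's electricity generation
print(f"{power_sun / power_earth:.0f}")                  # ~4143: solar input vs. geothermal output
```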

“Charged particles such as protons, beta and alpha particles, or heavier ions that bombard human tissue dissipate their energy locally, interacting with the atoms via the electromagnetic force. This interaction ejects electrons from the atoms, creating a track of electron–ion pairs, or ionization track. The energy that ions lose per unit path, as they move through matter, increases with the square of their charge and decreases linearly with their energy […] The energy deposited in the tissues and organs of your body by ionizing radiation is defined absorbed dose and is measured in gray. The dose of one gray corresponds to the energy of one joule deposited in one kilogram of tissue. The biological damage wrought by a given amount of energy deposited depends on the kind of ionizing radiation involved. The equivalent dose, measured in sievert, is the product of the dose and a factor w related to the effective damage induced into the living matter by the deposit of energy by specific rays or particles. For X-rays, gamma rays, and beta particles, a gray corresponds to a sievert; for neutrons, a dose of one gray corresponds to an equivalent dose of 5 to 20 sievert, and the factor w is equal to 5–20 (depending on the neutron energy). For protons and alpha particles, w is equal to 5 and 20, respectively. There is also another weighting factor taking into account the radiosensitivity of different organs and tissues of the body, to evaluate the so-called effective dose. Sometimes the dose is still quoted in rem, the old unit, with 100 rem corresponding to one sievert.”
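
A minimal sketch of the dose bookkeeping described above, using the radiation weighting factors given in the quote (actual dosimetry uses the current ICRP recommendations and additionally applies tissue weighting factors to obtain the effective dose):

```python
# Equivalent dose (sievert) = radiation weighting factor w * absorbed dose (gray),
# with the w values quoted above.
W_FACTORS = {
    "x_or_gamma_rays": 1,
    "beta_particles": 1,
    "protons": 5,
    "alpha_particles": 20,
    "neutrons": (5, 20),   # depends on the neutron energy
}

def equivalent_dose_sv(absorbed_dose_gy, radiation):
    w = W_FACTORS[radiation]
    if isinstance(w, tuple):          # energy-dependent factor: return the range
        return tuple(wi * absorbed_dose_gy for wi in w)
    return w * absorbed_dose_gy

print(equivalent_dose_sv(0.1, "alpha_particles"))  # 0.1 Gy of alpha particles -> 2.0 Sv
print(equivalent_dose_sv(0.1, "neutrons"))         # 0.1 Gy of neutrons -> (0.5, 2.0) Sv
```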

“Neutrons emitted during fission reactions have a relatively high velocity. When still in Rome, Fermi had discovered that fast neutrons needed to be slowed down to increase the probability of their reaction with uranium. The fission reaction occurs with uranium-235. Uranium-238, the most common isotope of the element, merely absorbs the slow neutrons. Neutrons slow down when they are scattered by nuclei with a similar mass. The process is analogous to the interaction between two billiard balls in a head-on collision, in which the incoming ball stops and transfers all its kinetic energy to the second one. ‘Moderators’, such as graphite and water, can be used to slow neutrons down. […] When Fermi calculated whether a chain reaction could be sustained in a homogeneous mixture of uranium and graphite, he got a negative answer. That was because most neutrons produced by the fission of uranium-235 were absorbed by uranium-238 before inducing further fissions. The right approach, as suggested by Szilárd, was to use separated blocks of uranium and graphite. Fast neutrons produced by the splitting of uranium-235 in the uranium block would slow down, in the graphite block, and then produce fission again in the next uranium block. […] A minimum mass – the critical mass – is required to sustain the chain reaction; furthermore, the material must have a certain geometry. The fissile nuclides, capable of sustaining a chain reaction of nuclear fission with low-energy neutrons, are uranium-235 […], uranium-233, and plutonium-239. The last two don’t occur in nature but can be produced artificially by irradiating with neutrons thorium-232 and uranium-238, respectively – via a reaction called neutron capture. Uranium-238 (99.27%) is fissionable, but not fissile. In a nuclear weapon, the chain reaction occurs very rapidly, releasing the energy in a burst.”

“The basic components of nuclear power reactors, fuel, moderator, and control rods, are the same as in the first system built by Fermi, but the design of today’s reactors includes additional components such as a pressure vessel, containing the reactor core and the moderator, a containment vessel, and redundant and diverse safety systems. Recent technological advances in material developments, electronics, and information technology have further improved their reliability and performance. […] The moderator to slow down fast neutrons is sometimes still the graphite used by Fermi, but water, including ‘heavy water’ – in which the water molecule has a deuterium atom instead of a hydrogen atom – is more widely used. Control rods contain a neutron-absorbing material, such as boron or a combination of indium, silver, and cadmium. To remove the heat generated in the reactor core, a coolant – either a liquid or a gas – is circulating through the reactor core, transferring the heat to a heat exchanger or directly to a turbine. Water can be used as both coolant and moderator. In the case of boiling water reactors (BWRs), the steam is produced in the pressure vessel. In the case of pressurized water reactors (PWRs), the steam generator, which is the secondary side of the heat exchanger, uses the heat produced by the nuclear reactor to make steam for the turbines. The containment vessel is a one-metre-thick concrete and steel structure that shields the reactor.”

“Nuclear energy contributed 2,518 TWh of the world’s electricity in 2011, about 14% of the global supply. As of February 2012, there are 435 nuclear power plants operating in 31 countries worldwide, corresponding to a total installed capacity of 368,267 MW (electrical). There are 63 power plants under construction in 13 countries, with a capacity of 61,032 MW (electrical).”

“Since the first nuclear fusion, more than 60 years ago, many have argued that we need at least 30 years to develop a working fusion reactor, and this figure has stayed the same throughout those years.”

“[I]onizing radiation is […] used to improve many properties of food and other agricultural products. For example, gamma rays and electron beams are used to sterilize seeds, flour, and spices. They can also inhibit sprouting and destroy pathogenic bacteria in meat and fish, increasing the shelf life of food. […] More than 60 countries allow the irradiation of more than 50 kinds of foodstuffs, with 500,000 tons of food irradiated every year. About 200 cobalt-60 sources and more than 10 electron accelerators are dedicated to food irradiation worldwide. […] With the help of radiation, breeders can increase genetic diversity to make the selection process faster. The spontaneous mutation rate (number of mutations per gene, for each generation) is in the range 10^-8–10^-5. Radiation can increase this mutation rate to 10^-5–10^-2. […] Long-lived cosmogenic radionuclides provide unique methods to evaluate the ‘age’ of groundwaters, defined as the mean subsurface residence time after the isolation of the water from the atmosphere. […] Scientists can date groundwater more than a million years old, through chlorine-36, produced in the atmosphere by cosmic-ray reactions with argon.”

“Radionuclide imaging was developed in the 1950s using special systems to detect the emitted gamma rays. The gamma-ray detectors, called gamma cameras, use flat crystal planes, coupled to photomultiplier tubes, which send the digitized signals to a computer for image reconstruction. Images show the distribution of the radioactive tracer in the organs and tissues of interest. This method is based on the introduction of low-level radioactive chemicals into the body. […] More than 100 diagnostic tests based on radiopharmaceuticals are used to examine bones and organs such as lungs, intestines, thyroids, kidneys, the liver, and gallbladder. They exploit the fact that our organs preferentially absorb different chemical compounds. […] Many radiopharmaceuticals are based on technetium-99m (an excited state of technetium-99 – the ‘m’ stands for ‘metastable’ […]). This radionuclide is used for the imaging and functional examination of the heart, brain, thyroid, liver, and other organs. Technetium-99m is extracted from molybdenum-99, which has a much longer half-life and is therefore more transportable. It is used in 80% of the procedures, amounting to about 40,000 per day, carried out in nuclear medicine. Other radiopharmaceuticals include short-lived gamma-emitters such as cobalt-57, cobalt-58, gallium-67, indium-111, iodine-123, and thallium-201. […] Methods routinely used in medicine, such as X-ray radiography and CAT, are increasingly used in industrial applications, particularly in non-destructive testing of containers, pipes, and walls, to locate defects in welds and other critical parts of the structure.”

“Today, cancer treatment with radiation is generally based on the use of external radiation beams that can target the tumour in the body. Cancer cells are particularly sensitive to damage by ionizing radiation and their growth can be controlled or, in some cases, stopped. High-energy X-rays produced by a linear accelerator […] are used in most cancer therapy centres, replacing the gamma rays produced from cobalt-60. The LINAC produces photons of variable energy bombarding a target with a beam of electrons accelerated by microwaves. The beam of photons can be modified to conform to the shape of the tumour, which is irradiated from different angles. The main problem with X-rays and gamma rays is that the dose they deposit in the human tissue decreases exponentially with depth. A considerable fraction of the dose is delivered to the surrounding tissues before the radiation hits the tumour, increasing the risk of secondary tumours. Hence, deep-seated tumours must be bombarded from many directions to receive the right dose, while minimizing the unwanted dose to the healthy tissues. […] The problem of delivering the needed dose to a deep tumour with high precision can be solved using collimated beams of high-energy ions, such as protons and carbon. […] Contrary to X-rays and gamma rays, all ions of a given energy have a certain range, delivering most of the dose after they have slowed down, just before stopping. The ion energy can be tuned to deliver most of the dose to the tumour, minimizing the impact on healthy tissues. The ion beam, which does not broaden during the penetration, can follow the shape of the tumour with millimetre precision. Ions with higher atomic number, such as carbon, have a stronger biological effect on the tumour cells, so the dose can be reduced. Ion therapy facilities are [however] still very expensive – in the range of hundreds of millions of pounds – and difficult to operate.”
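
A toy illustration of the depth-dose point made above: if photon dose falls off roughly exponentially with depth, a deep-seated tumour sees a lower dose rate than the healthy tissue in front of it. The attenuation coefficient and depth below are assumptions of mine for illustration, not values from the book:

```python
import math

# Toy depth-dose model for a photon beam: dose falls off roughly exponentially with depth.
mu = 0.05          # 1/cm, assumed effective attenuation coefficient (illustrative only)
tumour_depth = 15  # cm, assumed depth of the tumour

relative_dose = math.exp(-mu * tumour_depth)
print(f"dose rate at {tumour_depth} cm is {relative_dose:.0%} of the entrance dose rate")
# ~47%: the tissue in front of the tumour receives a higher dose than the tumour itself,
# which is why photon beams are fired from many directions. Ion beams instead deposit
# most of their dose near the end of their range (the Bragg peak), as described above.
```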

“About 50 million years ago, a global cooling trend took our planet from the tropical conditions at the beginning of the Tertiary to the ice ages of the Quaternary, when the Arctic ice cap developed. The temperature decrease was accompanied by a decrease in atmospheric CO2 from 2,000 to 300 parts per million. The cooling was probably caused by a reduced greenhouse effect and also by changes in ocean circulation due to plate tectonics. The drop in temperature was not constant as there were some brief periods of sudden warming. Ocean deep-water temperatures dropped from 12°C, 50 million years ago, to 6°C, 30 million years ago, according to archives in deep-sea sediments (today, deep-sea waters are about 2°C). […] During the last 2 million years, the mean duration of the glacial periods was about 26,000 years, while that of the warm periods – interglacials – was about 27,000 years. Between 2.6 and 1.1 million years ago, a full cycle of glacial advance and retreat lasted about 41,000 years. During the past 1.2 million years, this cycle has lasted 100,000 years. Stable and radioactive isotopes play a crucial role in the reconstruction of the climatic history of our planet”.

Links:

CUORE (Cryogenic Underground Observatory for Rare Events).
Borexino.
Lawrence Livermore National Laboratory.
Marie Curie. Pierre Curie. Henri Becquerel. Wilhelm Röntgen. Joseph Thomson. Ernest Rutherford. Hans Geiger. Ernest Marsden. Niels Bohr.
Ruhmkorff coil.
Electroscope.
Pitchblende (uraninite).
Mache.
Polonium. Becquerel.
Radium.
Alpha decay. Beta decay. Gamma radiation.
Plum pudding model.
Spinthariscope.
Robert Boyle. John Dalton. Dmitri Mendeleev. Frederick Soddy. James Chadwick. Enrico Fermi. Lise Meitner. Otto Frisch.
Periodic Table.
Exponential decay. Decay chain.
Positron.
Particle accelerator. Cockcroft-Walton generator. Van de Graaff generator.
Barn (unit).
Nuclear fission.
Manhattan Project.
Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Electron volt.
Thermoluminescent dosimeter.
Silicon diode detector.
Enhanced geothermal system.
Chicago Pile Number 1. Experimental Breeder Reactor 1. Obninsk Nuclear Power Plant.
Natural nuclear fission reactor.
Gas-cooled reactor.
Generation I reactors. Generation II reactor. Generation III reactor. Generation IV reactor.
Nuclear fuel cycle.
Accelerator-driven subcritical reactor.
Thorium-based nuclear power.
Small, sealed, transportable, autonomous reactor.
Fusion power. P-p (proton-proton) chain reaction. CNO cycle. Tokamak. ITER (International Thermonuclear Experimental Reactor).
Sterile insect technique.
Phase-contrast X-ray imaging. Computed tomography (CT). SPECT (Single-photon emission computed tomography). PET (positron emission tomography).
Boron neutron capture therapy.
Radiocarbon dating. Bomb pulse.
Radioactive tracer.
Radithor. The Radiendocrinator.
Radioisotope heater unit. Radioisotope thermoelectric generator. Seebeck effect.
Accelerator mass spectrometry.
Atomic bombings of Hiroshima and Nagasaki. Treaty on the Non-Proliferation of Nuclear Weapons. IAEA.
Nuclear terrorism.
Swiss light source. Synchrotron.
Chronology of the universe. Stellar evolution. S-process. R-process. Red giant. Supernova. White dwarf.
Victor Hess. Domenico Pacini. Cosmic ray.
Allende meteorite.
Age of the Earth. History of Earth. Geomagnetic reversal. Uranium-lead dating. Clair Cameron Patterson.
Glacials and interglacials.
Taung child. Lucy. Ardi. Ardipithecus kadabba. Acheulean tools. Java Man. Ötzi.
Argon-argon dating. Fission track dating.

November 28, 2017 Posted by | Archaeology, Astronomy, Biology, Books, Cancer/oncology, Chemistry, Engineering, Geology, History, Medicine, Physics | Leave a comment

Isotopes

A decent book. Below some quotes and links.

“[A]ll mass spectrometers have three essential components — an ion source, a mass filter, and some sort of detector […] Mass spectrometers need to achieve high vacuum to allow the uninterrupted transmission of ions through the instrument. However, even high-vacuum systems contain residual gas molecules which can impede the passage of ions. Even at very high vacuum there will still be residual gas molecules in the vacuum system that present potential obstacles to the ion beam. Ions that collide with residual gas molecules lose energy and will appear at the detector at slightly lower mass than expected. This tailing to lower mass is minimized by improving the vacuum as much as possible, but it cannot be avoided entirely. The ability to resolve a small isotope peak adjacent to a large peak is called ‘abundance sensitivity’. A single magnetic sector TIMS has abundance sensitivity of about 1 ppm per mass unit at uranium masses. So, at mass 234, 1 ion in 1,000,000 will actually be 235U not 234U, and this will limit our ability to quantify the rare 234U isotope. […] AMS [accelerator mass spectrometry] instruments use very high voltages to achieve high abundance sensitivity. […] As I write this chapter, the human population of the world has recently exceeded seven billion. […] one carbon atom in 10^12 is mass 14. So, detecting 14C is far more difficult than identifying a single person on Earth, and somewhat comparable to identifying an individual leaf in the Amazon rain forest. Such is the power of isotope ratio mass spectrometry.”
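
Just to spell out the comparison made in that passage (numbers from the quote only):

```python
# One mass-14 atom among ~1e12 carbon atoms vs. one person among ~7e9 people.
carbon_atoms_per_14C = 1e12
world_population = 7e9

print(f"{carbon_atoms_per_14C / world_population:.0f}")  # ~143: the 14C needle sits in a haystack ~140 times larger
```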

“14C is produced in the Earth’s atmosphere by the interaction between nitrogen and cosmic ray neutrons that releases a free proton turning 14N (atomic number 7) into 14C (atomic number 6) in a process that we call an ‘n-p’ reaction […] Because the process is driven by cosmic ray bombardment, we call 14C a ‘cosmogenic’ isotope. The half-life of 14C is about 5,000 years, so we know that all the 14C on Earth is either cosmogenic or has been created by mankind through nuclear reactors and bombs — no ‘primordial’ 14C remains because any that originally existed has long since decayed. 14C is not the only cosmogenic isotope; 16O in the atmosphere interacts with cosmic radiation to produce the isotope 10Be (beryllium). […] The process by which a high energy cosmic ray particle removes several nucleons is called ‘spallation’. 10Be production from 16O is not restricted to the atmosphere but also occurs when cosmic rays impact rock surfaces. […] when cosmic rays hit a rock surface they don’t bounce off but penetrate the top 2 or 3 metres (m) — the actual ‘attenuation’ depth will vary for particles of different energy. Most of the Earth’s crust is made of silicate minerals based on bonds between oxygen and silicon. So, the same spallation process that produces 10Be in the atmosphere also occurs in rock surfaces. […] If we know the flux of cosmic rays impacting a surface, the rate of production of the cosmogenic isotopes with depth below the rock surface, and the rate of radioactive decay, it should be possible to convert the number of cosmogenic atoms into an exposure age. […] Rocks on Earth which are shielded from much of the cosmic radiation have much lower levels of isotopes like 10Be than have meteorites which, before they arrive on Earth, are exposed to the full force of cosmic radiation. […] polar scientists have used cores drilled through ice sheets in Antarctica and Greenland to compare 10Be at different depths and thereby reconstruct 10Be production through time. The 14C and 10Be records are closely correlated indicating the common response to changes in the cosmic ray flux.”

“[O]nce we have credible cosmogenic isotope production rates, […] there are two classes of applications, which we can call ‘exposure’ and ‘burial’ methodologies. Exposure studies simply measure the accumulation of the cosmogenic nuclide. Such studies are simplest when the cosmogenic nuclide is a stable isotope like 3He and 21Ne. These will just accumulate continuously as the sample is exposed to cosmic radiation. Slightly more complicated are cosmogenic isotopes that are radioactive […]. These isotopes accumulate through exposure but will also be destroyed by radioactive decay. Eventually, the isotopes achieve the condition known as ‘secular equilibrium’ where production and decay are balanced and no chronological information can be extracted. Secular equilibrium is achieved after three to four half-lives […] Imagine a boulder that has been transported from its place of origin to another place within a glacier — what we call a glacial erratic. While the boulder was deeply covered in ice, it would not have been exposed to cosmic radiation. Its cosmogenic isotopes will only have accumulated since the ice melted. So a cosmogenic isotope exposure age tells us the date at which the glacier retreated, and, by examining multiple erratics from different locations along the course of the glacier, allows us to construct a retreat history for the de-glaciation. […] Burial methodologies using cosmogenic isotopes work in situations where a rock was previously exposed to cosmic rays but is now located in a situation where it is shielded.”
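
The exposure-age logic described above can be written down compactly. For a radioactive cosmogenic nuclide produced at a constant rate P and decaying with decay constant λ, the inventory grows as N(t) = (P/λ)(1 − e^(−λt)) and saturates at secular equilibrium; inverting that expression gives an exposure age. This is the standard textbook relation rather than a formula quoted from the book, and the production rate and atom count below are made-up illustrative numbers:

```python
import math

def exposure_age(n_atoms, production_rate, half_life_yr):
    """Invert N(t) = (P/lam) * (1 - exp(-lam*t)) for the exposure time t (years).

    Only meaningful while N is still below the secular-equilibrium value P/lam.
    """
    lam = math.log(2) / half_life_yr
    n_equilibrium = production_rate / lam
    if n_atoms >= n_equilibrium:
        raise ValueError("at secular equilibrium; no age information left")
    return -math.log(1 - n_atoms * lam / production_rate) / lam

# Illustrative (assumed) numbers: a 10Be-like half-life of 1.39 Myr, production of
# 5 atoms per gram of rock per year, and a measured 50,000 atoms per gram.
print(f"exposure age ~ {exposure_age(5e4, 5.0, 1.39e6):,.0f} years")
```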

“Cosmogenic isotopes are also being used extensively to recreate the seismic histories of tectonically active areas. Earthquakes occur when geological faults give way and rock masses move. A major earthquake is likely to expose new rock to the Earth’s surface. If the field geologist can identify rocks in a fault zone that (s)he is confident were brought to the surface in an earthquake, then a cosmogenic isotope exposure age would date the fault — providing, of course, that subsequent erosion can be ruled out or quantified. Precarious rocks are rock outcrops that could reasonably be expected to topple if subjected to a significant earthquake. Dating the exposed surface of precarious rocks with cosmogenic isotopes can reveal the amount of time that has elapsed since the last earthquake of a magnitude that would have toppled the rock. Constructing records of seismic history is not merely of academic interest; some of the world’s seismically active areas are also highly populated and developed.”

“One aspect of the natural decay series that acts in favour of the preservation of accurate age information is the fact that most of the intermediate isotopes are short-lived. For example, in both the U series the radon (Rn) isotopes, which might be expected to diffuse readily out of a mineral, have half-lives of only seconds or days, too short to allow significant losses. Some decay series isotopes though do have significantly long half-lives which offer the potential to be geochronometers in their own right. […] These techniques depend on the tendency of natural decay series to evolve towards a state of ‘secular equilibrium’ in which the activity of all species in the decay series is equal. […] at secular equilibrium, isotopes with long half-lives (i.e. small decay constants) will have large numbers of atoms whereas short-lived isotopes (high decay constants) will only constitute a relatively small number of atoms. Since decay constants vary by several orders of magnitude, so will the numbers of atoms of each isotope in the equilibrium decay series. […] Geochronological applications of natural decay series depend upon some process disrupting the natural decay series to introduce either a deficiency or an excess of an isotope in the series. The decay series will then gradually return to secular equilibrium and the geochronometer relies on measuring the extent to which equilibrium has been approached.”
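
The 'equal activity' condition described above has a simple algebraic form: at secular equilibrium λ₁N₁ = λ₂N₂ = …, so the number of atoms of each member of the series is proportional to its half-life. A minimal sketch (the half-lives below are standard values for a few members of the uranium-238 series, not numbers taken from the book):

```python
# At secular equilibrium every member of the decay series has the same activity A,
# so the number of atoms of each member is N_i = A / lambda_i, i.e. proportional
# to its half-life.
half_lives_yr = {          # a few members of the 238U series (standard values)
    "U-238": 4.468e9,
    "U-234": 2.455e5,
    "Th-230": 7.54e4,
    "Ra-226": 1600.0,
    "Rn-222": 3.82 / 365.25,   # 3.82 days
}

ref = half_lives_yr["U-238"]
for nuclide, t_half in half_lives_yr.items():
    # atoms of each member relative to U-238, assuming equal activities
    print(f"{nuclide}: {t_half / ref:.2e} atoms per atom of U-238")
```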

“The ‘ring of fire’ volcanoes around the margin of the Pacific Ocean are a manifestation of subduction in which the oldest parts of the Pacific Ocean crust are being returned to the mantle below. The oldest parts of the Pacific Ocean crust are about 150 million years (Ma) old, with anything older having already disappeared into the mantle via subduction zones. The Atlantic Ocean doesn’t have a ring of fire because it is a relatively young ocean which started to form about 60 Ma ago, and its oldest rocks are not yet ready to form subduction zones. Thus, while continental crust persists for billions of years, oceanic crust is a relatively transient (in terms of geological time) phenomenon at the Earth’s surface.”

“Mantle rocks typically contain minerals such as olivine, pyroxene, spinel, and garnet. Unlike say ice, which melts to form water, mixtures of minerals do not melt in the proportions in which they occur in the rock. Rather, they undergo partial melting in which some minerals […] melt preferentially leaving a solid residue enriched in refractory minerals […]. We know this from experimentally melting mantle-like rocks in the laboratory, but also because the basalts produced by melting of the mantle are closer in composition to Ca-rich (clino-) pyroxene than to the olivine-rich rocks that dominate the solid pieces (or xenoliths) of mantle that are sometimes transferred to the surface by certain types of volcanic eruptions. […] Thirty years ago geologists fiercely debated whether the mantle was homogeneous or heterogeneous; mantle isotope geochemistry hasn’t yet elucidated all the details but it has put to rest the initial conundrum; Earth’s mantle is compositionally heterogeneous.”

Links:

Frederick Soddy.
Rutherford–Bohr model.
Isotopes of hydrogen.
Radioactive decay. Types of decay. Alpha decay. Beta decay. Electron capture decay. Branching fraction. Gamma radiation. Spontaneous fission.
Promethium.
Lanthanides.
Radiocarbon dating.
Hessel de Vries.
Dendrochronology.
Suess effect.
Bomb pulse.
Delta notation (non-wiki link).
Isotopic fractionation.
C3 carbon fixation. C4 carbon fixation.
Nitrogen-15 tracing.
Isotopes of strontium. Strontium isotope analysis.
Ötzi.
Mass spectrometry.
Geiger counter.
Townsend avalanche.
Gas proportional counter.
Scintillation detector.
Liquid scintillation spectrometry. Photomultiplier tube.
Dynode.
Thallium-doped sodium iodide detectors. Semiconductor-based detectors.
Isotope separation (-enrichment).
Doubly labeled water.
Urea breath test.
Radiation oncology.
Brachytherapy.
Targeted radionuclide therapy.
Iodine-131.
MIBG scan.
Single-photon emission computed tomography.
Positron emission tomography.
Inductively coupled plasma (ICP) mass spectrometry.
Secondary ion mass spectrometry.
Faraday cup (-detector).
δ18O.
Stadials and interstadials. Oxygen isotope ratio cycle.
Insolation.
Gain and phase model.
Milankovitch cycles.
Perihelion and aphelion. Precession.
Equilibrium Clumped-Isotope Effects in Doubly Substituted Isotopologues of Ethane (non-wiki link).
Age of the Earth.
Uranium–lead dating.
Geochronology.
Cretaceous–Paleogene boundary.
Argon-argon dating.
Nuclear chain reaction. Critical mass.
Fukushima Daiichi nuclear disaster.
Natural nuclear fission reactor.
Continental crust. Oceanic crust. Basalt.
Core–mantle boundary.
Chondrite.
Ocean Island Basalt.
Isochron dating.

November 23, 2017 Posted by | Biology, Books, Botany, Chemistry, Geology, Medicine, Physics | Leave a comment

Materials… (II)

Some more quotes and links:

“Whether materials are stiff and strong, or hard or weak, is the territory of mechanics. […] the 19th century continuum theory of linear elasticity is still the basis of much of modern solid mechanics. A stiff material is one which does not deform much when a force acts on it. Stiffness is quite distinct from strength. A material may be stiff but weak, like a piece of dry spaghetti. If you pull it, it stretches only slightly […], but as you ramp up the force it soon breaks. To put this on a more scientific footing, so that we can compare different materials, we might devise a test in which we apply a force to stretch a bar of material and measure the increase in length. The fractional change in length is the strain; and the applied force divided by the cross-sectional area of the bar is the stress. To check that it is Hookean, we double the force and confirm that the strain has also doubled. To check that it is truly elastic, we remove the force and check that the bar returns to the same length that it started with. […] then we calculate the ratio of the stress to the strain. This ratio is the Young’s modulus of the material, a quantity which measures its stiffness. […] While we are measuring the change in length of the bar, we might also see if there is a change in its width. It is not unreasonable to think that as the bar stretches it also becomes narrower. The Poisson’s ratio of the material is defined as the ratio of the transverse strain to the longitudinal strain (without the minus sign).”
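
The test described above boils down to four lines of arithmetic: stress = force/area, strain = extension/length, Young's modulus = stress/strain, and Poisson's ratio = −(transverse strain)/(longitudinal strain). A minimal sketch with made-up test numbers:

```python
# Stiffness from a simple tension test, following the definitions in the quote.
force = 2000.0            # N, applied tensile force (made-up test numbers)
area = 1.0e-4             # m^2, cross-sectional area of the bar
length = 0.50             # m, original length
d_len = 5.0e-5            # m, measured extension
width = 0.010             # m, original width
d_width = -3.0e-7         # m, measured change in width (negative: the bar narrows)

stress = force / area                     # Pa
strain_long = d_len / length              # dimensionless
strain_trans = d_width / width

youngs_modulus = stress / strain_long             # Pa
poissons_ratio = -strain_trans / strain_long      # dimensionless

print(f"E  = {youngs_modulus / 1e9:.0f} GPa")     # 200 GPa, a steel-like value
print(f"nu = {poissons_ratio:.2f}")               # 0.30
```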

“There was much argument between Cauchy and Lamé and others about whether there are two stiffness moduli or one. […] In fact, there are two stiffness moduli. One describes the resistance of a material to shearing and the other to compression. The shear modulus is the stiffness in distortion, for example in twisting. It captures the resistance of a material to changes of shape, with no accompanying change of volume. The compression modulus (usually called the bulk modulus) expresses the resistance to changes of volume (but not shape). This is what occurs as a cube of material is lowered deep into the sea, and is squeezed on all faces by the water pressure. The Young’s modulus [is] a combination of the more fundamental shear and bulk moduli, since stretching in one direction produces changes in both shape and volume. […] A factor of about 10,000 covers the useful range of Young’s modulus in engineering materials. The stiffness can be traced back to the forces acting between atoms and molecules in the solid state […]. Materials like diamond or tungsten with strong bonds are stiff in the bulk, while polymer materials with weak intermolecular forces have low stiffness.”
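
For an isotropic material the moduli mentioned above are linked by standard relations which the quote doesn't spell out: E = 9KG/(3K + G) and ν = (3K − 2G)/(2(3K + G)), where K is the bulk modulus and G the shear modulus. A quick check with roughly steel-like values (my own illustrative inputs):

```python
# Standard isotropic elasticity relations linking bulk (K), shear (G), and Young's (E) moduli.
def youngs_from_bulk_shear(K, G):
    return 9 * K * G / (3 * K + G)

def poisson_from_bulk_shear(K, G):
    return (3 * K - 2 * G) / (2 * (3 * K + G))

K, G = 160e9, 79e9    # Pa, roughly steel-like values (illustrative)
print(f"E  = {youngs_from_bulk_shear(K, G) / 1e9:.0f} GPa")   # ~204 GPa
print(f"nu = {poisson_from_bulk_shear(K, G):.2f}")            # ~0.29
```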

“In pure compression, the concept of ‘strength’ has no meaning, since the material cannot fail or rupture. But materials can and do fail in tension or in shear. To judge how strong a material is we can go back for example to the simple tension arrangement we used for measuring stiffness, but this time make it into a torture test in which the specimen is put on the rack. […] We find […] that we reach a strain at which the material stops being elastic and is permanently stretched. We have reached the yield point, and beyond this we have damaged the material but it has not failed. After further yielding, the bar may fail by fracture […]. On the other hand, with a bar of cast iron, there comes a point where the bar breaks, noisily and without warning, and without yield. This is a failure by brittle fracture. The stress at which it breaks is the tensile strength of the material. For the ductile material, the stress at which plastic deformation starts is the tensile yield stress. Both are measures of strength. It is in metals that yield and plasticity are of the greatest significance and value. In working components, yield provides a safety margin between small-strain elasticity and catastrophic rupture. […] plastic deformation is [also] exploited in making things from metals like steel and aluminium. […] A useful feature of plastic deformation in metals is that plastic straining raises the yield stress, particularly at lower temperatures.”

“Brittle failure is not only noisy but often scary. Engineers keep well away from it. An elaborate theory of fracture mechanics has been built up to help them avoid it, and there are tough materials to hand which do not easily crack. […] Since small cracks and flaws are present in almost any engineering component […], the trick is not to avoid cracks but to avoid long cracks which exceed [a] critical length. […] In materials which can yield, the tip stress can be relieved by plastic deformation, and this is a potent toughening mechanism in some materials. […] The trick of compressing a material to suppress cracking is a powerful way to toughen materials.”
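
The 'critical length' mentioned above comes out of standard fracture mechanics: for a crack of length a under a stress σ the stress intensity is K_I ≈ σ√(πa), and fast fracture occurs when K_I reaches the material's fracture toughness K_Ic, giving a critical crack length a_c ≈ (1/π)(K_Ic/σ)². This is the textbook relation rather than anything quoted from the book, and the numbers below are illustrative:

```python
import math

def critical_crack_length(fracture_toughness, applied_stress):
    """Critical crack length a_c from K_I = sigma * sqrt(pi * a) = K_Ic (geometry factor ~1)."""
    return (fracture_toughness / applied_stress) ** 2 / math.pi

# Illustrative values: a tough structural steel vs. ordinary window glass.
print(f"steel: {critical_crack_length(100e6, 200e6) * 1000:.0f} mm")   # K_Ic ~ 100 MPa*sqrt(m), stress 200 MPa
print(f"glass: {critical_crack_length(0.8e6, 50e6) * 1e6:.0f} um")     # K_Ic ~ 0.8 MPa*sqrt(m), stress 50 MPa
```

The contrast is the point of the quote: in a tough material the critical crack is centimetres long and easy to spot, while in a brittle one it is microscopic.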

“Hardness is a property which materials scientists think of in a particular and practical way. It tells us how well a material resists being damaged or deformed by a sharp object. That is useful information and it can be obtained easily. […] Soft is sometimes the opposite of hard […] But a different kind of soft is squidgy. […] In the soft box, we find many everyday materials […]. Some soft materials such as adhesives and lubricants are of great importance in engineering. For all of them, the model of a stiff crystal lattice provides no guidance. There is usually no crystal. The units are polymer chains, or small droplets of liquids, or small solid particles, with weak forces acting between them, and little structural organization. Structures when they exist are fragile. Soft materials deform easily when forces act on them […]. They sit as a rule somewhere between rigid solids and simple liquids. Their mechanical behaviour is dominated by various kinds of plasticity.”

“In pure metals, the resistivity is extremely low […] and a factor of ten covers all of them. […] the low resistivity (or, put another way, the high conductivity) arises from the existence of a conduction band in the solid which is only partly filled. Electrons in the conduction band are mobile and drift in an applied electric field. This is the electric current. The electrons are subject to some scattering from lattice vibrations which impedes their motion and generates an intrinsic resistance. Scattering becomes more severe as the temperature rises and the amplitude of the lattice vibrations becomes greater, so that the resistivity of metals increases with temperature. Scattering is further increased by microstructural heterogeneities, such as grain boundaries, lattice distortions, and other defects, and by phases of different composition. So alloys have appreciably higher resistivities than their pure parent metals. Adding 5 per cent nickel to iron doubles the resistivity, although the resistivities of the two pure metals are similar. […] Resistivity depends fundamentally on band structure. […] Plastics and rubbers […] are usually insulators. […] Electronically conducting plastics would have many uses, and some materials [e.g. this one] are now known. […] The electrical resistivity of many metals falls to exactly zero as they are cooled to very low temperatures. The critical temperature at which this happens varies, but for pure metallic elements it always lies below 10 K. For a few alloys, it is a little higher. […] Superconducting windings provide stable and powerful magnetic fields for magnetic resonance imaging, and many industrial and scientific uses.”
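
The two effects described above — more scattering as temperature rises, and extra scattering from impurities and defects — are usually combined in what is known as Matthiessen's rule, ρ(T) ≈ ρ_thermal(T) + ρ_impurity. That rule is the standard approximation rather than something the book spells out, and the coefficients below are illustrative:

```python
# Matthiessen's rule: a temperature-dependent phonon-scattering term plus a
# temperature-independent impurity/defect contribution.
def resistivity(T, rho_293=1.0e-7, alpha=4.0e-3, rho_impurity=0.0):
    """Resistivity (ohm*m) at temperature T (K); illustrative coefficients."""
    rho_thermal = rho_293 * (1 + alpha * (T - 293))
    return rho_thermal + rho_impurity

print(f"{resistivity(293):.2e}")                          # pure metal at room temperature
print(f"{resistivity(373):.2e}")                          # same metal, hotter -> higher resistivity
print(f"{resistivity(293, rho_impurity=1.0e-7):.2e}")     # alloying roughly doubles it (cf. the Fe-Ni example)
```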

“A permanent magnet requires no power. Its magnetization has its origin in the motion of electrons in atoms and ions in the solid, but only a few materials have the favourable combination of quantum properties to give rise to useful ferromagnetism. […] Ferromagnetism disappears completely above the so-called Curie temperature. […] Below the Curie temperature, ferromagnetic alignment throughout the material can be established by imposing an external polarizing field to create a net magnetization. In this way a practical permanent magnet is made. The ideal permanent magnet has an intense magnetization (a strong field) which remains after the polarizing field is switched off. It can only be demagnetized by applying a strong polarizing field in the opposite direction: the size of this field is the coercivity of the magnet material. For a permanent magnet, it should be as high as possible. […] Permanent magnets are ubiquitous but more or less invisible components of umpteen devices. There are a hundred or so in every home […]. There are also important uses for ‘soft’ magnetic materials, in devices where we want the ferromagnetism to be temporary, not permanent. Soft magnets lose their magnetization after the polarizing field is removed […] They have low coercivity, approaching zero. When used in a transformer, such a soft ferromagnetic material links the input and output coils by magnetic induction. Ideally, the magnetization should reverse during every cycle of the alternating current to minimize energy losses and heating. […] Silicon transformer steels yielded large gains in efficiency in electrical power distribution when they were first introduced in the 1920s, and they remain pre-eminent.”

“At least 50 families of plastics are produced commercially today. […] These materials all consist of linear string molecules, most with simple carbon backbones, a few with carbon-oxygen backbones […] Plastics as a group are valuable because they are lightweight and work well in wet environments, and don’t go rusty. They are mostly unaffected by acids and salts. But they burn, and they don’t much like sunlight as the ultraviolet light can break the polymer backbone. Most commercial plastics are mixed with substances which make it harder for them to catch fire and which filter out the ultraviolet light. Above all, plastics are used because they can be formed and shaped so easily. The string molecule itself is held together by strong chemical bonds and is resilient, but the forces between the molecules are weak. So plastics melt at low temperatures to produce rather viscous liquids […]. And with modest heat and a little pressure, they can be injected into moulds to produce articles of almost any shape”.

“The downward cascade of high purity to adulterated materials in recycling is a kind of entropy effect: unmixing is thermodynamically hard work. But there is an energy-driven problem too. Most materials are thermodynamically unstable (or metastable) in their working environments and tend to revert to the substances from which they were made. This is well-known in the case of metals, and is the usual meaning of corrosion. The metals are more stable when combined with oxygen than uncombined. […] Broadly speaking, ceramic materials are more stable thermodynamically, since they already contain much oxygen in chemical combination. Even so, ceramics used in the open usually fall victim to some environmental predator. Often it is water that causes damage. Water steals sodium and potassium from glass surfaces by slow leaching. The surface shrinks and cracks, so the glass loses its transparency. […] Stones and bricks may succumb to the stresses of repeated freezing when wet; limestones decay also by the chemical action of sulfur and nitrogen gasses in polluted rainwater. Even buried archaeological pots slowly react with water in a remorseless process similar to that of rock weathering.”

Ashby plot.
Alan Arnold Griffith.
Creep (deformation).
Amontons’ laws of friction.
Viscoelasticity.
Internal friction.
Surfactant.
Dispersant.
Rheology.
Liquid helium.
Conductor. Insulator. Semiconductor. P-type -ll-. N-type -ll-.
Hall–Héroult process.
Cuprate.
Magnetostriction.
Snell’s law.
Chromatic aberration.
Dispersion (optics).
Dye.
Density functional theory.
Glass.
Pilkington float process.
Superalloy.
Ziegler–Natta catalyst.
Transistor.
Integrated circuit.
Negative-index metamaterial.
Auxetics.
Titanium dioxide.
Hyperfine structure (/-interactions).
Diamond anvil cell.
Synthetic rubber.
Simon–Ehrlich wager.
Sankey diagram.

November 16, 2017 Posted by | Books, Chemistry, Engineering, Physics | Leave a comment

Materials (I)…

“Useful matter is a good definition of materials. […] Materials are materials because inventive people find ingenious things to do with them. Or just because people use them. […] Materials science […] explains how materials are made and how they behave as we use them.”

I recently read this book, which I liked. Below I have added some quotes from the first half of the book, with some added hopefully helpful links, as well as a collection of links at the bottom of the post to other topics covered.

“We understand all materials by knowing about composition and microstructure. Despite their extraordinary minuteness, the atoms are the fundamental units, and they are real, with precise attributes, not least size. Solid materials tend towards crystallinity (for the good thermodynamic reason that it is the arrangement of lowest energy), and they usually achieve it, though often in granular, polycrystalline forms. Processing conditions greatly influence microstructures which may be mobile and dynamic, particularly at high temperatures. […] The idea that we can understand materials by looking at their internal structure in finer and finer detail goes back to the beginnings of microscopy […]. This microstructural view is more than just an important idea, it is the explanatory framework at the core of materials science. Many other concepts and theories exist in materials science, but this is the framework. It says that materials are intricately constructed on many length-scales, and if we don’t understand the internal structure we shall struggle to explain or to predict material behaviour.”

“Oxygen is the most abundant element in the earth’s crust and silicon the second. In nature, silicon occurs always in chemical combination with oxygen, the two forming the strong Si–O chemical bond. The simplest combination, involving no other elements, is silica; and most grains of sand are crystals of silica in the form known as quartz. […] The quartz crystal comes in right- and left-handed forms. Nothing like this happens in metals but arises frequently when materials are built from molecules and chemical bonds. The crystal structure of quartz has to incorporate two different atoms, silicon and oxygen, each in a repeating pattern and in the precise ratio 1:2. There is also the severe constraint imposed by the Si–O chemical bonds which require that each Si atom has four O neighbours arranged around it at the corners of a tetrahedron, every O bonded to two Si atoms. The crystal structure which quartz adopts (which of all possibilities is the one of lowest energy) is made up of triangular and hexagonal units. But within this there are buried helixes of Si and O atoms, and a helix must be either right- or left-handed. Once a quartz crystal starts to grow as right- or left-handed, its structure templates all the other helices with the same handedness. Equal numbers of right- and left-handed crystals occur in nature, but each is unambiguously one or the other.”

“In the living tree, and in the harvested wood that we use as a material, there is a hierarchy of structural levels, climbing all the way from the molecular to the scale of branch and trunk. The stiff cellulose chains are bundled into fibrils, which are themselves bonded by other organic molecules to build the walls of cells; which in turn form channels for the transport of water and nutrients, the whole having the necessary mechanical properties to support its weight and to resist the loads of wind and rain. In the living tree, the structure allows also for growth and repair. There are many things to be learned from biological materials, but the most universal is that biology builds its materials at many structural levels, and rarely makes a distinction between the material and the organism. Being able to build materials with hierarchical architectures is still more or less out of reach in materials engineering. Understanding how materials spontaneously self-assemble is the biggest challenge in contemporary nanotechnology.”

“The example of diamond shows two things about crystalline materials. First, anything we know about an atom and its immediate environment (neighbours, distances, angles) holds for every similar atom throughout a piece of material, however large; and second, everything we know about the unit cell (its size, its shape, and its symmetry) also applies throughout an entire crystal […] and by extension throughout a material made of a myriad of randomly oriented crystallites. These two general propositions provide the basis and justification for lattice theories of material behaviour which were developed from the 1920s onwards. We know that every solid material must be held together by internal cohesive forces. If it were not, it would fly apart and turn into a gas. A simple lattice theory says that if we can work out what forces act on the atoms in one unit cell, then this should be enough to understand the cohesion of the entire crystal. […] In lattice models which describe the cohesion and dynamics of the atoms, the role of the electrons is mainly in determining the interatomic bonding and the stiffness of the bond-spring. But in many materials, and especially in metals and semiconductors, some of the electrons are free to move about within the lattice. A lattice model of electron behaviour combines a geometrical description of the lattice with a more or less mechanical view of the atomic cores, and a fully quantum theoretical description of the electrons themselves. We need only to take account of the outer electrons of the atoms, as the inner electrons are bound tightly into the cores and are not itinerant. The outer electrons are the ones that form chemical bonds, so they are also called the valence electrons.”

“It is harder to push atoms closer together than to pull them further apart. While atoms are soft on the outside, they have harder cores, and pushed together the cores start to collide. […] when we bring a trillion atoms together to form a crystal, it is the valence electrons that are disturbed as the atoms approach each other. As the atomic cores come close to the equilibrium spacing of the crystal, the electron states of the isolated atoms morph into a set of collective states […]. These collective electron states have a continuous distribution of energies up to a top level, and form a ‘band’. But the separation of the valence electrons into distinct electron-pair states is preserved in the band structure, so that we find that the collective states available to the entire population of valence electrons in the entire crystal form a set of bands […]. Thus in silicon, there are two main bands.”

“The perfect crystal has atoms occupying all the positions prescribed by the geometry of its crystal lattice. But real crystalline materials fall short of perfection […] For instance, an individual site may be unoccupied (a vacancy). Or an extra atom may be squeezed into the crystal at a position which is not a lattice position (an interstitial). An atom may fall off its lattice site, creating a vacancy and an interstitial at the same time. Sometimes a site is occupied by the wrong kind of atom. Point defects of this kind distort the crystal in their immediate neighbourhood. Vacancies free up diffusional movement, allowing atoms to hop from site to site. Larger scale defects invariably exist too. A complete layer of atoms or unit cells may terminate abruptly within the crystal to produce a line defect (a dislocation). […] There are materials which try their best to crystallize, but find it hard to do so. Many polymer materials are like this. […] The best they can do is to form small crystalline regions in which the molecules lie side by side over limited distances. […] Often the crystalline domains comprise about half the material: it is a semicrystal. […] Crystals can be formed from the melt, from solution, and from the vapour. All three routes are used in industry and in the laboratory. As a rule, crystals that grow slowly are good crystals. Geological time can give wonderful results. Often, crystals are grown on a seed, a small crystal of the same material deliberately introduced into the crystallization medium. If this is a melt, the seed can gradually be pulled out, drawing behind it a long column of new crystal material. This is the Czochralski process, an important method for making semiconductors. […] However it is done, crystals invariably grow by adding material to the surface of a small particle to make it bigger.”

“As we go down the Periodic Table of elements, the atoms get heavier much more quickly than they get bigger. The mass of a single atom of uranium at the bottom of the Table is about 25 times greater than that of an atom of the lightest engineering metal, beryllium, at the top, but its radius is only 40 per cent greater. […] The density of solid materials of every kind is fixed mainly by where the constituent atoms are in the Periodic Table. The packing arrangement in the solid has only a small influence, although the crystalline form of a substance is usually a little denser than the amorphous form […] The range of solid densities available is therefore quite limited. At the upper end we hit an absolute barrier, with nothing denser than osmium (22,590 kg/m³). At the lower end we have some slack, as we can make lighter materials by the trick of incorporating holes to make foams and sponges and porous materials of all kinds. […] in the entire catalogue of available materials there is a factor of about a thousand for ingenious people to play with, from say 20 to 20,000 kg/m³.”

“The expansion of materials as we increase their temperature is a universal tendency. It occurs because as we raise the temperature the thermal energy of the atoms and molecules increases correspondingly, and this fights against the cohesive forces of attraction. The mean distance of separation between atoms in the solid (or the liquid) becomes larger. […] As a general rule, the materials with small thermal expansivities are metals and ceramics with high melting temperatures. […] Although thermal expansion is a smooth process which continues from the lowest temperatures to the melting point, it is sometimes interrupted by sudden jumps […]. Changes in crystal structure at precise temperatures are commonplace in materials of all kinds. […] There is a cluster of properties which describe the thermal behaviour of materials. Besides the expansivity, there is the specific heat, and also the thermal conductivity. These properties show us, for example, that it takes about four times as much energy to increase the temperature of 1 kilogram of aluminium by 1°C as 1 kilogram of silver; and that good conductors of heat are usually also good conductors of electricity. At everyday temperatures there is not a huge difference in specific heat between materials. […] In all crystalline materials, thermal conduction arises from the diffusion of phonons from hot to cold regions. As they travel, the phonons are subject to scattering both by collisions with other phonons, and with defects in the material. This picture explains why the thermal conductivity falls as temperature rises”.
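
The four-to-one aluminium/silver comparison in the quote can be checked against standard reference values for specific heat; below is a minimal Python sketch, with the two values (roughly 0.90 and 0.24 J per gram per kelvin) taken as assumptions from standard tables rather than from the book.

```python
# Rough check of the aluminium-vs-silver specific heat comparison quoted above.
# Approximate reference values, in J per gram per kelvin (assumed, from standard tables).
c_aluminium = 0.897
c_silver = 0.235

# Energy needed to warm 1 kg of each metal by 1 degree C (= 1 K), in joules.
q_al = c_aluminium * 1000 * 1.0
q_ag = c_silver * 1000 * 1.0

print(f"Aluminium: {q_al:.0f} J, Silver: {q_ag:.0f} J, ratio = {q_al / q_ag:.1f}")
# The ratio comes out near 3.8, i.e. roughly the factor of four mentioned in the quote.
```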

 

Materials science.
Metals.
Inorganic compound.
Organic compound.
Solid solution.
Copper. Bronze. Brass. Alloy.
Electrical conductivity.
Steel. Bessemer converter. Gamma iron. Alpha iron. Cementite. Martensite.
Phase diagram.
Equation of state.
Calcite. Limestone.
Birefringence.
Portland cement.
Cellulose.
Wood.
Ceramic.
Mineralogy.
Crystallography.
Laue diffraction pattern.
Silver bromide. Latent image. Photographic film. Henry Fox Talbot.
Graphene. Graphite.
Thermal expansion.
Invar.
Dulong–Petit law.
Wiedemann–Franz law.

 

November 14, 2017 Posted by | Biology, Books, Chemistry, Engineering, Physics | Leave a comment

Common Errors in Statistics… (III)

This will be my last post about the book. I liked most of it, and I gave it four stars on goodreads, but that doesn’t mean there weren’t any observations included in the book with which I took issue/disagreed. Here’s one of the things I didn’t like:

“In the univariate [model selection] case, if the errors were not normally distributed, we could take advantage of permutation methods to obtain exact significance levels in tests of the coefficients. Exact permutation methods do not exist in the multivariable case.

When selecting variables to incorporate in a multivariable model, we are forced to perform repeated tests of hypotheses, so that the resultant p-values are no longer meaningful. One solution, if sufficient data are available, is to divide the dataset into two parts, using the first part to select variables, and the second part to test these same variables for significance.” (chapter 13)

The basic idea is to use the results of hypothesis tests to decide which variables to include in the model. This is both common practice and bad practice. I found it surprising that such a piece of advice would be included in this book, as I’d figured beforehand that this would be precisely the sort of thing a book like this one would tell people not to do. I’ve said this before multiple times on this blog, but I’ll keep saying it, especially if/when I find this sort of advice in statistics textbooks: Using hypothesis testing as a basis for model selection is an invalid approach to model selection, and it’s in general a terrible idea. “There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection.” (Burnham & Anderson). Use information criteria, not hypothesis tests, to make your model selection decisions. (And read Burnham & Anderson’s book on these topics.)
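
To make the alternative concrete, here is a minimal sketch of information-criterion-based selection using statsmodels; the simulated data and variable names are purely illustrative and not taken from the book.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)  # x3 is irrelevant by construction

# Candidate models are compared on AIC, not on the significance of individual coefficients.
candidates = {
    "x1": sm.add_constant(np.column_stack([x1])),
    "x1+x2": sm.add_constant(np.column_stack([x1, x2])),
    "x1+x2+x3": sm.add_constant(np.column_stack([x1, x2, x3])),
}
for name, X in candidates.items():
    fit = sm.OLS(y, X).fit()
    print(f"{name:10s} AIC = {fit.aic:.1f}")
# The model with the lowest AIC is preferred; no sequence of hypothesis tests is involved.
```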

Anyway, much of the stuff included in the book was good stuff and it’s a very decent book. I’ve added some quotes and observations from the last part of the book below.

“OLS is not the only modeling technique. To diminish the effect of outliers, and treat prediction errors as proportional to their absolute magnitude rather than their squares, one should use least absolute deviation (LAD) regression. This would be the case if the conditional distribution of the dependent variable were characterized by a distribution with heavy tails (compared to the normal distribution, increased probability of values far from the mean). One should also employ LAD regression when the conditional distribution of the dependent variable given the predictors is not symmetric and we wish to estimate its median rather than its mean value.
If it is not clear which variable should be viewed as the predictor and which the dependent variable, as is the case when evaluating two methods of measurement, then one should employ Deming or error in variable (EIV) regression.
If one’s primary interest is not in the expected value of the dependent variable but in its extremes (the number of bacteria that will survive treatment or the number of individuals who will fall below the poverty line), then one ought consider the use of quantile regression.
If distinct strata exist, one should consider developing separate regression models for each stratum, a technique known as ecological regression […] If one’s interest is in classification or if the majority of one’s predictors are dichotomous, then one should consider the use of classification and regression trees (CART) […] If the outcomes are limited to success or failure, one ought employ logistic regression. If the outcomes are counts rather than continuous measurements, one should employ a generalized linear model (GLM).”
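
Several of the alternatives listed in the quote are available in standard software; as a rough illustration (not the authors’ own code), LAD regression is just quantile regression at the median, and count outcomes go to a Poisson GLM. A minimal statsmodels sketch on simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
X = sm.add_constant(x)

# Heavy-tailed errors: LAD (median) regression via quantile regression at q = 0.5.
y_heavy = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)
lad_fit = sm.QuantReg(y_heavy, X).fit(q=0.5)
print(lad_fit.params)

# Count outcome: a Poisson GLM rather than OLS.
y_counts = rng.poisson(lam=np.exp(0.2 + 0.5 * x))
glm_fit = sm.GLM(y_counts, X, family=sm.families.Poisson()).fit()
print(glm_fit.params)
```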

“Linear regression is a much misunderstood and mistaught concept. If a linear model provides a good fit to data, this does not imply that a plot of the dependent variable with respect to the predictor would be a straight line, only that a plot of the dependent variable with respect to some not-necessarily monotonic function of the predictor would be a line. For example, y = A + B log[x] and y = A cos(x) + B sin(x) are both linear models whose coefficients A and B might be derived by OLS or LAD methods. Y = Ax⁵ is a linear model. Y = x^A is nonlinear. […] Perfect correlation (ρ² = 1) does not imply that two variables are identical but rather that one of them, Y, say, can be written as a linear function of the other, Y = a + bX, where b is the slope of the regression line and a is the intercept. […] Nonlinear regression methods are appropriate when the form of the nonlinear model is known in advance. For example, a typical pharmacological model will have the form A exp[bX] + C exp[dW]. The presence of numerous locally optimal but globally suboptimal solutions creates challenges, and validation is essential. […] To be avoided are a recent spate of proprietary algorithms available solely in software form that guarantee to find a best-fitting solution. In the words of John von Neumann, “With four parameters I can fit an elephant and with five I can make him wiggle his trunk.””

“[T]he most common errors associated with quantile regression include: 1. Failing to evaluate whether the model form is appropriate, for example, forcing linear fit through an obvious nonlinear response. (Of course, this is also a concern with mean regression, OLS, LAD, or EIV.) 2. Trying to over-interpret a single quantile estimate (say 0.85) with a statistically significant nonzero slope (p < 0.05) when the majority of adjacent quantiles (say 0.5–0.84 and 0.86–0.95) are clearly zero (p > 0.20). 3. Failing to use all the information a quantile regression provides. Even if you think you are only interested in relations near maximum (say 0.90–0.99), your understanding will be enhanced by having estimates (and sampling variation via confidence intervals) across a wide range of quantiles (say 0.01–0.99).”
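
The third point is easy to act on in practice, since most quantile regression implementations let you sweep the quantile level and report a confidence interval at each. A rough statsmodels sketch on simulated heteroscedastic data (purely illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.3 * x + rng.normal(scale=1 + 0.2 * x, size=n)  # spread grows with x
X = sm.add_constant(x)

# Fit a whole range of quantiles and report slope estimates with confidence intervals,
# rather than interpreting a single quantile in isolation.
for q in [0.05, 0.25, 0.50, 0.75, 0.90, 0.95]:
    fit = sm.QuantReg(y, X).fit(q=q)
    lo, hi = fit.conf_int()[1]
    print(f"q={q:.2f}  slope={fit.params[1]:.3f}  95% CI=({lo:.3f}, {hi:.3f})")
```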

“Survival analysis is used to assess time-to-event data including time to recovery and time to revision. Most contemporary survival analysis is built around the Cox model […] Possible sources of error in the application of this model include all of the following: *Neglecting the possible dependence of the baseline function λ₀ on the predictors. *Overmatching, that is, using highly correlated predictors that may well mask each other’s effects. *Using the parametric Breslow or Kaplan–Meier estimators of the survival function rather than the nonparametric Nelson–Aalen estimator. *Excluding patients based on post-hoc criteria. Pathology workups on patients who died during the study may reveal that some of them were wrongly diagnosed. Regardless, patients cannot be eliminated from the study as we lack the information needed to exclude those who might have been similarly diagnosed but who are still alive at the conclusion of the study. *Failure to account for differential susceptibility (frailty) of the patients”.
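
On the estimator point, the Nelson–Aalen cumulative hazard estimator is available alongside Kaplan–Meier in common survival packages; here is a minimal sketch assuming the Python lifelines package, with invented toy durations and event indicators:

```python
import numpy as np
from lifelines import KaplanMeierFitter, NelsonAalenFitter

# Toy time-to-event data: durations in months, event = 1 if observed, 0 if censored.
durations = np.array([5, 8, 12, 12, 15, 20, 22, 30, 31, 40])
events = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])

kmf = KaplanMeierFitter().fit(durations, event_observed=events)
naf = NelsonAalenFitter().fit(durations, event_observed=events)

print(kmf.survival_function_.tail())  # estimated survival function S(t)
print(naf.cumulative_hazard_.tail())  # estimated cumulative hazard H(t)
```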

“In reporting the results of your modeling efforts, you need to be explicit about the methods used, the assumptions made, the limitations on your model’s range of application, potential sources of bias, and the method of validation […] Multivariable regression is plagued by the same problems univariate regression is heir to, plus many more of its own. […] If choosing the correct functional form of a model in a univariate case presents difficulties, consider that in the case of k variables, there are k linear terms (should we use logarithms? should we add polynomial terms?) and k(k − 1) first-order cross products of the form xᵢxₖ. Should we include any of the k(k − 1)(k − 2) second-order cross products? A common error is to attribute the strength of a relationship to the magnitude of the predictor’s regression coefficient […] Just scale the units in which the predictor is reported to see how erroneous such an assumption is. […] One of the main problems in multiple regression is multicollinearity, which is the correlation among predictors. Even relatively weak levels of multicollinearity are enough to generate instability in multiple regression models […]. A simple solution is to evaluate the correlation matrix M among predictors, and use this matrix to choose the predictors that are less correlated. […] Test M for each predictor, using the variance inflation factor (VIF) given by (1 − R²)⁻¹, where R² is the multiple coefficient of determination of the predictor against all other predictors. If VIF is large for a given predictor (>8, say) delete this predictor and reestimate the model. […] Dropping collinear variables from the analysis can result in a substantial loss of power”.
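
The VIF screening described in the quote is essentially a one-liner in statsmodels; here is a minimal sketch on simulated data, using the quoted cutoff of 8 as the rule of thumb (the variable names are made up):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.1 * rng.normal(size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2, x3]))

# VIF_j = 1 / (1 - R_j^2), where R_j^2 regresses predictor j on all the other predictors.
for j, name in enumerate(["x1", "x2", "x3"], start=1):
    print(f"{name}: VIF = {variance_inflation_factor(X, j):.1f}")
# Predictors with VIF above roughly 8 are candidates for removal,
# keeping in mind the loss of power the quote warns about.
```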

“It can be difficult to predict the equilibrium point for a supply-and-demand model, because producers change their price in response to demand and consumers change their demand in response to price. Failing to account for endogenous variables can lead to biased estimates of the regression coefficients.
Endogeneity can arise not only as a result of omitted variables, but of measurement error, autocorrelated errors, simultaneity, and sample selection errors. One solution is to make use of instrumental variables that should satisfy two conditions: 1. They should be correlated with the endogenous explanatory variables, conditional on the other covariates. 2. They should not be correlated with the error term in the explanatory equation, that is, they should not suffer from the same problem as the original predictor.
Instrumental variables are commonly used to estimate causal effects in contexts in which controlled experiments are not possible, for example in estimating the effects of past and projected government policies.”
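
The two-stage logic behind instrumental variables can be sketched by hand: regress the endogenous predictor on the instrument, then regress the outcome on the fitted values from that first stage. A simplified illustration with statsmodels and simulated data (a dedicated IV routine, for example in the linearmodels package, would also produce correct standard errors, which this bare-bones version does not):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + 0.6 * u + rng.normal(size=n)   # endogenous predictor
y = 1.0 + 2.0 * x + 0.9 * u + rng.normal(size=n)

# Naive OLS is biased because x is correlated with the error term (through u).
print(sm.OLS(y, sm.add_constant(x)).fit().params)

# Stage 1: predict x from the instrument; Stage 2: regress y on the predicted x.
x_hat = sm.OLS(x, sm.add_constant(z)).fit().fittedvalues
print(sm.OLS(y, sm.add_constant(x_hat)).fit().params)  # slope close to the true 2.0
```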

“[T]he following errors are frequently associated with factor analysis: *Applying it to datasets with too few cases in relation to the number of variables analyzed […], without noticing that correlation coefficients have very wide confidence intervals in small samples. *Using oblique rotation to get a number of factors bigger or smaller than the number of factors obtained in the initial extraction by principal components, as a way to show the validity of a questionnaire. For example, obtaining only one factor by principal components and using the oblique rotation to justify that there were two differentiated factors, even when the two factors were correlated and the variance explained by the second factor was very small. *Confusion among the total variance explained by a factor and the variance explained in the reduced factorial space. In this way a researcher interpreted that a given group of factors explaining 70% of the variance before rotation could explain 100% of the variance after rotation.”

“Poisson regression is appropriate when the dependent variable is a count, as is the case with the arrival of individuals in an emergency room. It is also applicable to the spatial distributions of tornadoes and of clusters of galaxies.2 To be applicable, the events underlying the outcomes must be independent […] A strong assumption of the Poisson regression model is that the mean and variance are equal (equidispersion). When the variance of a sample exceeds the mean, the data are said to be overdispersed. Fitting the Poisson model to overdispersed data can lead to misinterpretation of coefficients due to poor estimates of standard errors. Naturally occurring count data are often overdispersed due to correlated errors in time or space, or other forms of nonindependence of the observations. One solution is to fit a Poisson model as if the data satisfy the assumptions, but adjust the model-based standard errors usually employed. Another solution is to estimate a negative binomial model, which allows for scalar overdispersion.”
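
Both remedies mentioned in the quote are available in standard GLM software: keep the Poisson mean model but rescale the standard errors by the estimated dispersion, or switch to a negative binomial model. A minimal statsmodels sketch on simulated overdispersed counts (purely illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_obs = 500
x = rng.normal(size=n_obs)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.7 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))  # overdispersed counts with mean mu

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(poisson_fit.bse)  # standard errors assuming equidispersion

# Fix 1: keep the Poisson model but inflate the standard errors by the Pearson dispersion.
quasi_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
print(quasi_fit.bse)

# Fix 2: fit a negative binomial model, which has an explicit overdispersion parameter.
nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(nb_fit.bse)
```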

“When multiple observations are collected for each principal sampling unit, we refer to the collected information as panel data, correlated data, or repeated measures. […] The dependency of observations violates one of the tenets of regression analysis: that observations are supposed to be independent and identically distributed or IID. Several concerns arise when observations are not independent. First, the effective number of observations (that is, the effective amount of information) is less than the physical number of observations […]. Second, any model that fails to specifically address [the] correlation is incorrect […]. Third, although the correct specification of the correlation will yield the most efficient estimator, that specification is not the only one to yield a consistent estimator.”

“The basic issue in deciding whether to utilize a fixed- or random-effects model is whether the sampling units (for which multiple observations are collected) represent the collection of most or all of the entities for which inference will be drawn. If so, the fixed-effects estimator is to be preferred. On the other hand, if those same sampling units represent a random sample from a larger population for which we wish to make inferences, then the random-effects estimator is more appropriate. […] Fixed- and random-effects models address unobserved heterogeneity. The random-effects model assumes that the panel-level effects are randomly distributed. The fixed-effects model assumes a constant disturbance that is a special case of the random-effects model. If the random-effects assumption is correct, then the random-effects estimator is more efficient than the fixed-effects estimator. If the random-effects assumption does not hold […], then the random-effects model is not consistent. To help decide whether the fixed- or random-effects model is more appropriate, use the Durbin–Wu–Hausman3 test comparing coefficients from each model. […] Although fixed-effects estimators and random-effects estimators are referred to as subject-specific estimators, the GEEs available through PROC GENMOD in SAS or xtgee in Stata are called population-averaged estimators. This label refers to the interpretation of the fitted regression coefficients. Subject-specific estimators are interpreted in terms of an effect for a given panel, whereas population-averaged estimators are interpreted in terms of an effect averaged over panels.”
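
The Durbin–Wu–Hausman comparison itself is straightforward once both models have been fitted: it contrasts the two coefficient vectors using the difference of their covariance matrices. Here is a minimal numpy sketch of the statistic; the placeholder numbers merely show the call and stand in for estimates that would come from actual fixed- and random-effects fits:

```python
import numpy as np
from scipy import stats

def hausman(b_fe, b_re, V_fe, V_re):
    """Durbin-Wu-Hausman statistic comparing fixed- and random-effects estimates.

    b_fe, b_re: coefficient vectors from the fixed- and random-effects models
    V_fe, V_re: their estimated covariance matrices (same coefficients, same order)
    """
    diff = b_fe - b_re
    stat = float(diff @ np.linalg.inv(V_fe - V_re) @ diff)
    df = len(diff)
    pvalue = stats.chi2.sf(stat, df)
    return stat, df, pvalue

# Placeholder numbers purely to show the call; real inputs come from fitted models.
b_fe = np.array([0.52, -1.10])
b_re = np.array([0.48, -1.00])
V_fe = np.array([[0.010, 0.001], [0.001, 0.020]])
V_re = np.array([[0.006, 0.001], [0.001, 0.012]])
print(hausman(b_fe, b_re, V_fe, V_re))
# A small p-value suggests the random-effects assumption fails and fixed effects are safer.
```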

“A favorite example in comparing subject-specific and population-averaged estimators is to consider the difference in interpretation of regression coefficients for a binary outcome model on whether a child will exhibit symptoms of respiratory illness. The predictor of interest is whether or not the child’s mother smokes. Thus, we have repeated observations on children and their mothers. If we were to fit a subject-specific model, we would interpret the coefficient on smoking as the change in likelihood of respiratory illness as a result of the mother switching from not smoking to smoking. On the other hand, the interpretation of the coefficient in a population-averaged model is the likelihood of respiratory illness for the average child with a nonsmoking mother compared to the likelihood for the average child with a smoking mother. Both models offer equally valid interpretations. The interpretation of interest should drive model selection; some studies ultimately will lead to fitting both types of models. […] In addition to model-based variance estimators, fixed-effects models and GEEs [Generalized Estimating Equation models] also admit modified sandwich variance estimators. SAS calls this the empirical variance estimator. Stata refers to it as the Robust Cluster estimator. Whatever the name, the most desirable property of the variance estimator is that it yields inference for the regression coefficients that is robust to misspecification of the correlation structure. […] Specification of GEEs should include careful consideration of reasonable correlation structure so that the resulting estimator is as efficient as possible. To protect against misspecification of the correlation structure, one should base inference on the modified sandwich variance estimator. This is the default estimator in SAS, but the user must specify it in Stata.”
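
A GEE with an exchangeable working correlation and sandwich-based inference looks roughly like the following in statsmodels, where the robust covariance is the default; the smoking/respiratory-illness variable names and the simulated clustered data are invented for illustration and are not the authors’ example data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_children, n_visits = 100, 4
child_id = np.repeat(np.arange(n_children), n_visits)
mother_smokes = np.repeat(rng.integers(0, 2, size=n_children), n_visits)

# Simulate within-child correlation via a shared child-level term.
child_effect = np.repeat(rng.normal(scale=0.8, size=n_children), n_visits)
lin_pred = -1.0 + 0.7 * mother_smokes + child_effect
symptoms = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

X = sm.add_constant(mother_smokes)
gee = sm.GEE(symptoms, X, groups=child_id,
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable())
result = gee.fit()  # statsmodels reports the robust (sandwich) covariance by default
print(result.summary())
```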

“There are three main approaches to [model] validation: 1. Independent verification (obtained by waiting until the future arrives or through the use of surrogate variables). 2. Splitting the sample (using one part for calibration, the other for verification). 3. Resampling (taking repeated samples from the original sample and refitting the model each time).
Goodness of fit is no guarantee of predictive success. […] Splitting the sample into two parts, one for estimating the model parameters, the other for verification, is particularly appropriate for validating time series models in which the emphasis is on prediction or reconstruction. If the observations form a time series, the more recent observations should be reserved for validation purposes. Otherwise, the data used for validation should be drawn at random from the entire sample. Unfortunately, when we split the sample and use only a portion of it, the resulting estimates will be less precise. […] The proportion to be set aside for validation purposes will depend upon the loss function. If both the goodness-of-fit error in the calibration sample and the prediction error in the validation sample are based on mean-squared error, Picard and Berk [1990] report that we can minimize their sum by using between a quarter and a third of the sample for validation purposes.”
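
A minimal sketch of the split-sample validation described above for a time-ordered dataset, reserving the most recent third of the observations for validation; the data and model below are simulated and arbitrary:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
t = np.arange(n)
y = 0.02 * t + 5 * np.sin(t / 20) + rng.normal(size=n)
X = sm.add_constant(np.column_stack([t, np.sin(t / 20)]))

# For a time series, reserve the most recent observations for validation.
split = int(n * 2 / 3)
X_cal, y_cal = X[:split], y[:split]
X_val, y_val = X[split:], y[split:]

fit = sm.OLS(y_cal, X_cal).fit()
mse_cal = np.mean((y_cal - fit.predict(X_cal)) ** 2)  # goodness of fit
mse_val = np.mean((y_val - fit.predict(X_val)) ** 2)  # predictive success
print(f"calibration MSE = {mse_cal:.2f}, validation MSE = {mse_val:.2f}")
```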

November 13, 2017 Posted by | Books, Statistics | Leave a comment

Organic Chemistry (II)

I have included some observations from the second half of the book below, as well as some links to topics covered.

“[E]nzymes are used routinely to catalyse reactions in the research laboratory, and for a variety of industrial processes involving pharmaceuticals, agrochemicals, and biofuels. In the past, enzymes had to be extracted from natural sources — a process that was both expensive and slow. But nowadays, genetic engineering can incorporate the gene for a key enzyme into the DNA of fast growing microbial cells, allowing the enzyme to be obtained more quickly and in far greater yield. Genetic engineering has also made it possible to modify the amino acids making up an enzyme. Such modified enzymes can prove more effective as catalysts, accept a wider range of substrates, and survive harsher reaction conditions. […] New enzymes are constantly being discovered in the natural world as well as in the laboratory. Fungi and bacteria are particularly rich in enzymes that allow them to degrade organic compounds. It is estimated that a typical bacterial cell contains about 3,000 enzymes, whereas a fungal cell contains 6,000. Considering the variety of bacterial and fungal species in existence, this represents a huge reservoir of new enzymes, and it is estimated that only 3 per cent of them have been investigated so far.”

“One of the most important applications of organic chemistry involves the design and synthesis of pharmaceutical agents — a topic that is defined as medicinal chemistry. […] In the 19th century, chemists isolated chemical components from known herbs and extracts. Their aim was to identify a single chemical that was responsible for the extract’s pharmacological effects — the active principle. […] It was not long before chemists synthesized analogues of active principles. Analogues are structures which have been modified slightly from the original active principle. Such modifications can often improve activity or reduce side effects. This led to the concept of the lead compound — a compound with a useful pharmacological activity that could act as the starting point for further research. […] The first half of the 20th century culminated in the discovery of effective antimicrobial agents. […] The 1960s can be viewed as the birth of rational drug design. During that period there were important advances in the design of effective anti-ulcer agents, anti-asthmatics, and beta-blockers for the treatment of high blood pressure. Much of this was based on trying to understand how drugs work at the molecular level and proposing theories about why some compounds were active and some were not.”

“[R]ational drug design was boosted enormously towards the end of the century by advances in both biology and chemistry. The sequencing of the human genome led to the identification of previously unknown proteins that could serve as potential drug targets. […] Advances in automated, small-scale testing procedures (high-throughput screening) also allowed the rapid testing of potential drugs. In chemistry, advances were made in X-ray crystallography and NMR spectroscopy, allowing scientists to study the structure of drugs and their mechanisms of action. Powerful molecular modelling software packages were developed that allowed researchers to study how a drug binds to a protein binding site. […] the development of automated synthetic methods has vastly increased the number of compounds that can be synthesized in a given time period. Companies can now produce thousands of compounds that can be stored and tested for pharmacological activity. Such stores have been called chemical libraries and are routinely tested to identify compounds capable of binding with a specific protein target. These advances have boosted medicinal chemistry research over the last twenty years in virtually every area of medicine.”

“Drugs interact with molecular targets in the body such as proteins and nucleic acids. However, the vast majority of clinically useful drugs interact with proteins, especially receptors, enzymes, and transport proteins […] Enzymes are […] important drug targets. Drugs that bind to the active site and prevent the enzyme acting as a catalyst are known as enzyme inhibitors. […] Enzymes are located inside cells, and so enzyme inhibitors have to cross cell membranes in order to reach them—an important consideration in drug design. […] Transport proteins are targets for a number of therapeutically important drugs. For example, a group of antidepressants known as selective serotonin reuptake inhibitors prevent serotonin being transported into neurons by transport proteins.”

“The main pharmacokinetic factors are absorption, distribution, metabolism, and excretion. Absorption relates to how much of an orally administered drug survives the digestive enzymes and crosses the gut wall to reach the bloodstream. Once there, the drug is carried to the liver where a certain percentage of it is metabolized by metabolic enzymes. This is known as the first-pass effect. The ‘survivors’ are then distributed round the body by the blood supply, but this is an uneven process. The tissues and organs with the richest supply of blood vessels receive the greatest proportion of the drug. Some drugs may get ‘trapped’ or sidetracked. For example fatty drugs tend to get absorbed in fat tissue and fail to reach their target. The kidneys are chiefly responsible for the excretion of drugs and their metabolites.”

“Having identified a lead compound, it is important to establish which features of the compound are important for activity. This, in turn, can give a better understanding of how the compound binds to its molecular target. Most drugs are significantly smaller than molecular targets such as proteins. This means that the drug binds to quite a small region of the protein — a region known as the binding site […]. Within this binding site, there are binding regions that can form different types of intermolecular interactions such as van der Waals interactions, hydrogen bonds, and ionic interactions. If a drug has functional groups and substituents capable of interacting with those binding regions, then binding can take place. A lead compound may have several groups that are capable of forming intermolecular interactions, but not all of them are necessarily needed. One way of identifying the important binding groups is to crystallize the target protein with the drug bound to the binding site. X-ray crystallography then produces a picture of the complex which allows identification of binding interactions. However, it is not always possible to crystallize target proteins and so a different approach is needed. This involves synthesizing analogues of the lead compound where groups are modified or removed. Comparing the activity of each analogue with the lead compound can then determine whether a particular group is important or not. This is known as an SAR study, where SAR stands for structure–activity relationships. Once the important binding groups have been identified, the pharmacophore for the lead compound can be defined. This specifies the important binding groups and their relative position in the molecule.”

“One way of identifying the active conformation of a flexible lead compound is to synthesize rigid analogues where the binding groups are locked into defined positions. This is known as rigidification or conformational restriction. The pharmacophore will then be represented by the most active analogue. […] A large number of rotatable bonds is likely to have an adverse effect on drug activity. This is because a flexible molecule can adopt a large number of conformations, and only one of these shapes corresponds to the active conformation. […] In contrast, a totally rigid molecule containing the required pharmacophore will bind the first time it enters the binding site, resulting in greater activity. […] It is also important to optimize a drug’s pharmacokinetic properties such that it can reach its target in the body. Strategies include altering the drug’s hydrophilic/hydrophobic properties to improve absorption, and the addition of substituents that block metabolism at specific parts of the molecule. […] The drug candidate must [in general] have useful activity and selectivity, with minimal side effects. It must have good pharmacokinetic properties, lack toxicity, and preferably have no interactions with other drugs that might be taken by a patient. Finally, it is important that it can be synthesized as cheaply as possible”.

“Most drugs that have reached clinical trials for the treatment of Alzheimer’s disease have failed. Between 2002 and 2012, 244 novel compounds were tested in 414 clinical trials, but only one drug gained approval. This represents a failure rate of 99.6 per cent as against a failure rate of 81 per cent for anti-cancer drugs.”

“It takes about ten years and £160 million to develop a new pesticide […] The volume of global sales increased 47 per cent in the ten-year period between 2002 and 2012, while, in 2012, total sales amounted to £31 billion. […] In many respects, agrochemical research is similar to pharmaceutical research. The aim is to find pesticides that are toxic to ‘pests’, but relatively harmless to humans and beneficial life forms. The strategies used to achieve this goal are also similar. Selectivity can be achieved by designing agents that interact with molecular targets that are present in pests, but not other species. Another approach is to take advantage of any metabolic reactions that are unique to pests. An inactive prodrug could then be designed that is metabolized to a toxic compound in the pest, but remains harmless in other species. Finally, it might be possible to take advantage of pharmacokinetic differences between pests and other species, such that a pesticide reaches its target more easily in the pest. […] Insecticides are being developed that act on a range of different targets as a means of tackling resistance. If resistance should arise to an insecticide acting on one particular target, then one can switch to using an insecticide that acts on a different target. […] Several insecticides act as insect growth regulators (IGRs) and target the moulting process rather than the nervous system. In general, IGRs take longer to kill insects but are thought to cause less detrimental effects to beneficial insects. […] Herbicides control weeds that would otherwise compete with crops for water and soil nutrients. More is spent on herbicides than any other class of pesticide […] The synthetic agent 2,4-D […] was synthesized by ICI in 1940 as part of research carried out on biological weapons […] It was first used commercially in 1946 and proved highly successful in eradicating weeds in cereal grass crops such as wheat, maize, and rice. […] The compound […] is still the most widely used herbicide in the world.”

“The type of conjugated system present in a molecule determines the specific wavelength of light absorbed. In general, the more extended the conjugation, the higher the wavelength absorbed. For example, β-carotene […] is the molecule responsible for the orange colour of carrots. It has a conjugated system involving eleven double bonds, and absorbs light in the blue region of the spectrum. It appears red because the reflected light lacks the blue component. Zeaxanthin is very similar in structure to β-carotene, and is responsible for the yellow colour of corn. […] Lycopene absorbs blue-green light and is responsible for the red colour of tomatoes, rose hips, and berries. Chlorophyll absorbs red light and is coloured green. […] Scented molecules interact with olfactory receptors in the nose. […] there are around 400 different olfactory protein receptors in humans […] The natural aroma of a rose is due mainly to 2-phenylethanol, geraniol, and citronellol.”

“Over the last fifty years, synthetic materials have largely replaced natural materials such as wood, leather, wool, and cotton. Plastics and polymers are perhaps the most visible sign of how organic chemistry has changed society. […] It is estimated that production of global plastics was 288 million tons in 2012 […] Polymerization involves linking molecular strands called polymers […]. By varying the nature of the monomer, a huge range of different polymers can be synthesized with widely differing properties. The idea of linking small molecular building blocks into polymers is not a new one. Nature has been at it for millions of years using amino acid building blocks to make proteins, and nucleotide building blocks to make nucleic acids […] The raw materials for plastics come mainly from oil, which is a finite resource. Therefore, it makes sense to recycle or depolymerize plastics to recover that resource. Virtually all plastics can be recycled, but it is not necessarily economically feasible to do so. Traditional recycling of polyesters, polycarbonates, and polystyrene tends to produce inferior plastics that are suitable only for low-quality goods.”

Adipic acid.
Protease. Lipase. Amylase. Cellulase.
Reflectin.
Agonist.
Antagonist.
Prodrug.
Conformational change.
Process chemistry (chemical development).
Clinical trial.
Phenylbutazone.
Pesticide.
Dichlorodiphenyltrichloroethane.
Aldrin.
N-Methyl carbamate.
Organophosphates.
Pyrethrum.
Neonicotinoid.
Colony collapse disorder.
Ecdysone receptor.
Methoprene.
Tebufenozide.
Fungicide.
Quinone outside inhibitors (QoI).
Allelopathy.
Glyphosate.
11-cis retinal.
Chromophore.
Synthetic dyes.
Methylene blue.
Cryptochrome.
Pheromone.
Artificial sweeteners.
Miraculin.
Addition polymer.
Condensation polymer.
Polyethylene.
Polypropylene.
Polyvinyl chloride.
Bisphenol A.
Vulcanization.
Kevlar.
Polycarbonate.
Polyhydroxyalkanoates.
Bioplastic.
Nanochemistry.
Allotropy.
Allotropes of carbon.
Carbon nanotube.
Rotaxane.
π-interactions.
Molecular switch.

November 11, 2017 Posted by | Biology, Books, Botany, Chemistry, Medicine, Pharmacology, Zoology | Leave a comment

Quotes

i. “Much of the skill in doing science resides in knowing where in the hierarchy you are looking – and, as a consequence, what is relevant and what is not.” (Philip Ball – Molecules: A very Short Introduction)

ii. “…statistical software will no more make one a statistician than a scalpel will turn one into a neurosurgeon. Allowing these tools to do our thinking is a sure recipe for disaster.” (Philip Good & James Hardin, Common Errors in Statistics (and how to avoid them))

iii. “Just as 95% of research efforts are devoted to data collection, 95% of the time remaining should be spent on ensuring that the data collected warrant analysis.” (-ll-)

iv. “One reason why many statistical models are incomplete is that they do not specify the sources of randomness generating variability among agents, i.e., they do not specify why otherwise observationally identical people make different choices and have different outcomes given the same choice.” (James J. Heckman, -ll-)

v. “If a thing is not worth doing, it is not worth doing well.” (J. W. Tukey, -ll-)

vi. “Hypocrisy is the lubricant of society.” (David Hull)

vii. “Every time I fire a linguist, the performance of our speech recognition system goes up.” (Fred Jelinek)

viii. “For most of my life, one of the persons most baffled by my own work was myself.” (Benoît Mandelbrot)

ix. “I’m afraid love is just a word.” (Harry Mulisch)

x. “The worst thing about death is that you once were, and now you are not.” (José Saramago)

xi. “Sometimes the most remarkable things seem commonplace. I mean, when you think about it, jet travel is pretty freaking remarkable. You get in a plane, it defies the gravity of an entire planet by exploiting a loophole with air pressure, and it flies across distances that would take months or years to cross by any means of travel that has been significant for more than a century or three. You hurtle above the earth at enough speed to kill you instantly should you bump into something, and you can only breathe because someone built you a really good tin can that has seams tight enough to hold in a decent amount of air. Hundreds of millions of man-hours of work and struggle and research, blood, sweat, tears, and lives have gone into the history of air travel, and it has totally revolutionized the face of our planet and societies.
But get on any flight in the country, and I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about the drinks. The drinks, people.” (Jim Butcher, Summer Knight)

xii. “The best way to keep yourself from doing something grossly self-destructive and stupid is to avoid the temptation to do it. For example, it is far easier to fend off inappropriate amorous desires if one runs screaming from the room every time a pretty girl comes in.” (Jim Butcher, Proven Guilty)

xiii. “One certain effect of war is to diminish freedom of expression. Patriotism becomes the order of the day, and those who question the war are seen as traitors, to be silenced and imprisoned.” (Howard Zinn)

xiv. “While inexact models may mislead, attempting to allow for every contingency a priori is impractical. Thus models must be built by an iterative feedback process in which an initial parsimonious model may be modified when diagnostic checks applied to residuals indicate the need.” (G. E. P. Box)

xv. “In our analysis of complex systems (like the brain and language) we must avoid the trap of trying to find master keys. Because of the mechanisms by which complex systems structure themselves, single principles provide inadequate descriptions. We should rather be sensitive to complex and self-organizing interactions and appreciate the play of patterns that perpetually transforms the system itself as well as the environment in which it operates.” (Paul Cilliers)

xvi. “The nature of the chemical bond is the problem at the heart of all chemistry.” (Bryce Crawford)

xvii. “When there’s a will to fail, obstacles can be found.” (John McCarthy)

xviii. “We understand human mental processes only slightly better than a fish understands swimming.” (-ll-)

xix. “He who refuses to do arithmetic is doomed to talk nonsense.” (-ll-)

xx. “The trouble with men is that they have limited minds. That’s the trouble with women, too.” (Joanna Russ)

 

November 10, 2017 Posted by | Books, Quotes/aphorisms | Leave a comment

Organic Chemistry (I)

This book’s a bit longer than most ‘A very short introduction to…’ publications; it’s quite dense at times and includes a lot of interesting stuff. It took me a while to finish it, as I put it away a while back when I hit some of the more demanding content, but I did pick it up later and I really enjoyed most of the coverage. In the end I decided that I wouldn’t be doing the book justice if I were to limit my coverage of it to just one post, so this will be only the first of two posts about the book, covering roughly the first half of it.

As usual I have included in my post both some observations from the book (…and added a few links to these quotes where I figured they might be helpful) as well as some wiki links to topics discussed in the book.

“Organic chemistry is a branch of chemistry that studies carbon-based compounds in terms of their structure, properties, and synthesis. In contrast, inorganic chemistry covers the chemistry of all the other elements in the periodic table […] carbon-based compounds are crucial to the chemistry of life. [However] organic chemistry has come to be defined as the chemistry of carbon-based compounds, whether they originate from a living system or not. […] To date, 16 million compounds have been synthesized in organic chemistry laboratories across the world, with novel compounds being synthesized every day. […] The list of commodities that rely on organic chemistry include plastics, synthetic fabrics, perfumes, colourings, sweeteners, synthetic rubbers, and many other items that we use every day.”

“For a neutral carbon atom, there are six electrons occupying the space around the nucleus […] The electrons in the outer shell are defined as the valence electrons and these determine the chemical properties of the atom. The valence electrons are easily ‘accessible’ compared to the two electrons in the first shell. […] There is great significance in carbon being in the middle of the periodic table. Elements which are close to the left-hand side of the periodic table can lose their valence electrons to form positive ions. […] Elements on the right-hand side of the table can gain electrons to form negatively charged ions. […] The impetus for elements to form ions is the stability that is gained by having a full outer shell of electrons. […] Ion formation is feasible for elements situated to the left or the right of the periodic table, but it is less feasible for elements in the middle of the table. For carbon to gain a full outer shell of electrons, it would have to lose or gain four valence electrons, but this would require far too much energy. Therefore, carbon achieves a stable, full outer shell of electrons by another method. It shares electrons with other elements to form bonds. Carbon excels in this and can be considered chemistry’s ultimate elemental socialite. […] Carbon’s ability to form covalent bonds with other carbon atoms is one of the principal reasons why so many organic molecules are possible. Carbon atoms can be linked together in an almost limitless way to form a mind-blowing variety of carbon skeletons. […] carbon can form a bond to hydrogen, but it can also form bonds to atoms such as nitrogen, phosphorus, oxygen, sulphur, fluorine, chlorine, bromine, and iodine. As a result, organic molecules can contain a variety of different elements. Further variety can arise because it is possible for carbon to form double bonds or triple bonds to a variety of other atoms. The most common double bonds are formed between carbon and oxygen, carbon and nitrogen, or between two carbon atoms. […] The most common triple bonds are found between carbon and nitrogen, or between two carbon atoms.”

“[C]hirality has huge importance. The two enantiomers of a chiral molecule behave differently when they interact with other chiral molecules, and this has important consequences in the chemistry of life. As an analogy, consider your left and right hands. These are asymmetric in shape and are non-superimposable mirror images. Similarly, a pair of gloves are non-superimposable mirror images. A left hand will fit snugly into a left-hand glove, but not into a right-hand glove. In the molecular world, a similar thing occurs. The proteins in our bodies are chiral molecules which can distinguish between the enantiomers of other molecules. For example, enzymes can distinguish between the two enantiomers of a chiral compound and catalyse a reaction with one of the enantiomers but not the other.”

“A key concept in organic chemistry is the functional group. A functional group is essentially a distinctive arrangement of atoms and bonds. […] Functional groups react in particular ways, and so it is possible to predict how a molecule might react based on the functional groups that are present. […] it is impossible to build a molecule atom by atom. Instead, target molecules are built by linking up smaller molecules. […] The organic chemist needs to have a good understanding of the reactions that are possible between different functional groups when choosing the molecular building blocks to be used for a synthesis. […] There are many […] reasons for carrying out FGTs [functional group transformations], especially when synthesizing complex molecules. For example, a starting material or a synthetic intermediate may lack a functional group at a key position of the molecular structure. Several reactions may then be required to introduce that functional group. On other occasions, a functional group may be added to a particular position then removed at a later stage. One reason for adding such a functional group would be to block an unwanted reaction at that position of the molecule. Another common situation is where a reactive functional group is converted to a less reactive functional group such that it does not interfere with a subsequent reaction. Later on, the original functional group is restored by another functional group transformation. This is known as a protection/deprotection strategy. The more complex the target molecule, the greater the synthetic challenge. Complexity is related to the number of rings, functional groups, substituents, and chiral centres that are present. […] The more reactions that are involved in a synthetic route, the lower the overall yield. […] retrosynthesis is a strategy by which organic chemists design a synthesis before carrying it out in practice. It is called retrosynthesis because the design process involves studying the target structure and working backwards to identify how that molecule could be synthesized from simpler starting materials. […] a key stage in retrosynthesis is identifying a bond that can be ‘disconnected’ to create those simpler molecules.”

“[V]ery few reactions produce the spectacular visual and audible effects observed in chemistry demonstrations. More typically, reactions involve mixing together two colourless solutions to produce another colourless solution. Temperature changes are a bit more informative. […] However, not all reactions generate heat, and monitoring the temperature is not a reliable way of telling whether the reaction has gone to completion or not. A better approach is to take small samples of the reaction solution at various times and to test these by chromatography or spectroscopy. […] If a reaction is taking place very slowly, different reaction conditions could be tried to speed it up. This could involve heating the reaction, carrying out the reaction under pressure, stirring the contents vigorously, ensuring that the reaction is carried out in a dry atmosphere, using a different solvent, using a catalyst, or using one of the reagents in excess. […] There are a large number of variables that can affect how efficiently reactions occur, and organic chemists in industry are often employed to develop the ideal conditions for a specific reaction. This is an area of organic chemistry known as chemical development. […] Once a reaction has been carried out, it is necessary to isolate and purify the reaction product. This often proves more time-consuming than carrying out the reaction itself. Ideally, one would remove the solvent used in the reaction and be left with the product. However, in most reactions this is not possible as other compounds are likely to be present in the reaction mixture. […] it is usually necessary to carry out procedures that will separate and isolate the desired product from these other compounds. This is known as ‘working up’ the reaction.”

“Proteins are large molecules (macromolecules) which serve a myriad of purposes, and are essentially polymers constructed from molecular building blocks called amino acids […]. In humans, there are twenty different amino acids having the same ‘head group’, consisting of a carboxylic acid and an amine attached to the same carbon atom […] The amino acids are linked up by the carboxylic acid of one amino acid reacting with the amine group of another to form an amide link. Since a protein is being produced, the amide bond is called a peptide bond, and the final protein consists of a polypeptide chain (or backbone) with different side chains ‘hanging off’ the chain […]. The sequence of amino acids present in the polypeptide sequence is known as the primary structure. Once formed, a protein folds into a specific 3D shape […] Nucleic acids […] are another form of biopolymer, and are formed from molecular building blocks called nucleotides. These link up to form a polymer chain where the backbone consists of alternating sugar and phosphate groups. There are two forms of nucleic acid — deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). In DNA, the sugar is deoxyribose, whereas the sugar in RNA is ribose. Each sugar ring has a nucleic acid base attached to it. For DNA, there are four different nucleic acid bases called adenine (A), thymine (T), cytosine (C), and guanine (G) […]. These bases play a crucial role in the overall structure and function of nucleic acids. […] DNA is actually made up of two DNA strands […] where the sugar-phosphate backbones are intertwined to form a double helix. The nucleic acid bases point into the centre of the helix, and each nucleic acid base ‘pairs up’ with a nucleic acid base on the opposite strand through hydrogen bonding. The base pairing is specifically between adenine and thymine, or between cytosine and guanine. This means that one polymer strand is complementary to the other, a feature that is crucial to DNA’s function as the storage molecule for genetic information. […] [E]ach strand […] acts as the template for the creation of a new strand to produce two identical ‘daughter’ DNA double helices […] [A] genetic alphabet of four letters (A, T, G, C) […] code for twenty amino acids. […] [A]n amino acid is coded, not by one nucleotide, but by a set of three. The number of possible triplet combinations using four ‘letters’ is more than enough to encode all the amino acids.”

“Proteins have a variety of functions. Some proteins, such as collagen, keratin, and elastin, have a structural role. Others catalyse life’s chemical reactions and are called enzymes. They have a complex 3D shape, which includes a cavity called the active site […]. This is where the enzyme binds the molecules (substrates) that undergo the enzyme-catalysed reaction. […] A substrate has to have the correct shape to fit an enzyme’s active site, but it also needs binding groups to interact with that site […]. These interactions hold the substrate in the active site long enough for a reaction to occur, and typically involve hydrogen bonds, as well as van der Waals and ionic interactions. When a substrate binds, the enzyme normally undergoes an induced fit. In other words, the shape of the active site changes slightly to accommodate the substrate, and to hold it as tightly as possible. […] Once a substrate is bound to the active site, amino acids in the active site catalyse the subsequent reaction.”

“Proteins called receptors are involved in chemical communication between cells and respond to chemical messengers called neurotransmitters if they are released from nerves, or hormones if they are released by glands. Most receptors are embedded in the cell membrane, with part of their structure exposed on the outer surface of the cell membrane, and another part exposed on the inner surface. On the outer surface they contain a binding site that binds the molecular messenger. An induced fit then takes place that activates the receptor. This is very similar to what happens when a substrate binds to an enzyme […] The induced fit is crucial to the mechanism by which a receptor conveys a message into the cell — a process known as signal transduction. By changing shape, the protein initiates a series of molecular events that influences the internal chemistry within the cell. For example, some receptors are part of multiprotein complexes called ion channels. When the receptor changes shape, it causes the overall ion channel to change shape. This opens up a central pore allowing ions to flow across the cell membrane. The ion concentration within the cell is altered, and that affects chemical reactions within the cell, which ultimately lead to observable results such as muscle contraction. Not all receptors are membrane-bound. For example, steroid receptors are located within the cell. This means that steroid hormones need to cross the cell membrane in order to reach their target receptors. Transport proteins are also embedded in cell membranes and are responsible for transporting polar molecules such as amino acids into the cell. They are also important in controlling nerve action since they allow nerves to capture released neurotransmitters, such that they have a limited period of action.”

“RNA […] is crucial to protein synthesis (translation). There are three forms of RNA — messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA). mRNA carries the genetic code for a particular protein from DNA to the site of protein production. Essentially, mRNA is a single-strand copy of a specific section of DNA. The process of copying that information is known as transcription. tRNA decodes the triplet code on mRNA by acting as a molecular adaptor. At one end of tRNA, there is a set of three bases (the anticodon) that can base pair to a set of three bases on mRNA (the codon). An amino acid is linked to the other end of the tRNA and the type of amino acid present is related to the anticodon that is present. When tRNA with the correct anticodon base pairs to the codon on mRNA, it brings the amino acid encoded by that codon. rRNA is a major constituent of a structure called a ribosome, which acts as the factory for protein production. The ribosome binds mRNA then coordinates and catalyses the translation process.”

Organic chemistry.
Carbon.
Stereochemistry.
Delocalization.
Hydrogen bond.
Van der Waals forces.
Ionic bonding.
Chemoselectivity.
Coupling reaction.
Chemical polarity.
Crystallization.
Elemental analysis.
NMR spectroscopy.
Polymerization.
Miller–Urey experiment.
Vester-Ulbricht hypothesis.
Oligonucleotide.
RNA world.
Ribozyme.

November 9, 2017 Posted by | Biology, Books, Chemistry, Genetics | Leave a comment

Words

Most of the words below are words which I encountered while reading the Jim Butcher novels: Fool Moon, Grave Peril, Summer Knight, Death Masks, Blood Rites, Dead Beat, and Proven Guilty.

Gobbet. Corrugate. Whuff. Wino. Shinny. Ruff. Rubberneck. Pastel. Sidhe. Appellation. Tine. Clomp. Susurration. Bier. Pucker. Haft. Topiary. Tendril. Pommel. Swath.

Chitter. Wispy. Flinders. Ewer. Incongruous. Athame. Bole. Chitin. Prancy. Doily. Garland. Heft. Hod. Klaxon. Ravening. Juke. Schlep. Pew. Gaggle. Passel.

Scourge. Coven. Wetwork. Gofer. Hinky. Pratfall. Parti-color(ed). Clawhammer. Mesquite. Scion. Traction. Kirtle. Avaunt. Imbibe. Betimes. Dinky. Rebar. Maw. Strident. Mangel.

Geode. Panache. Luminance. Wick. Susurrus. Chuff. Whammy. Cuss. Ripsaw. Scrunch. Fain. Hygroscopicity. Anasarca. Bitumen. Lingula. Diaphoretic. Ketch. Callipygian. Defalcation. Serried.

November 7, 2017 Posted by | Books, Language | Leave a comment

Common Errors in Statistics… (II)

Some more observations from the book below:

“[A] multivariate test can be more powerful than a test based on a single variable alone, providing the additional variables are relevant. Adding variables that are unlikely to have value in discriminating among the alternative hypotheses simply because they are included in the dataset can only result in a loss of power. Unfortunately, what works when making a comparison between two populations based on a single variable fails when we attempt a multivariate comparison. Unless the data are multivariate normal, Hotelling’s T2, the multivariate analog of Student’s t, will not provide tests with the desired significance level. Only samples far larger than those we are likely to afford in practice are likely to yield multivariate results that are close to multivariate normal. […] [A]n exact significance level can [however] be obtained in the multivariate case regardless of the underlying distribution by making use of the permutation distribution of Hotelling’s T2.”
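
The permutation idea in that last sentence can be sketched in a few lines of Python. This is not the book's code, and it uses a Monte Carlo sample of relabellings rather than the full permutation distribution; the inputs are whatever two samples you feed in (rows = subjects, columns = variables):

    import numpy as np

    def hotelling_t2(x, y):
        # Two-sample Hotelling's T2 statistic with a pooled covariance estimate.
        nx, ny = len(x), len(y)
        diff = x.mean(axis=0) - y.mean(axis=0)
        pooled = ((nx - 1) * np.cov(x, rowvar=False) +
                  (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
        return nx * ny / (nx + ny) * diff @ np.linalg.solve(pooled, diff)

    def permutation_pvalue(x, y, n_perm=5000, seed=0):
        # Compare the observed T2 with its distribution under random relabelling
        # of subjects, which does not require multivariate normality.
        rng = np.random.default_rng(seed)
        observed = hotelling_t2(x, y)
        combined = np.vstack([x, y])
        hits = 0
        for _ in range(n_perm):
            perm = rng.permutation(len(combined))
            if hotelling_t2(combined[perm[:len(x)]], combined[perm[len(x):]]) >= observed:
                hits += 1
        return (hits + 1) / (n_perm + 1)   # the +1 counts the observed labelling itself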

“If you are testing against a one-sided alternative, for example, no difference versus improvement, then you require a one-tailed or one-sided test. If you are doing a head-to-head comparison — which alternative is best? — then a two-tailed test is required. […] A comparison of two experimental effects requires a statistical test on their difference […]. But in practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (p < 0.05) but the other is not (p > 0.05). Nieuwenhuis, Forstmann, and Wagenmakers [2011] reviewed 513 behavioral, systems, and cognitive neuroscience articles in five top-ranking journals and found that 78 used the correct procedure and 79 used the incorrect procedure. […] When the logic of a situation calls for demonstration of similarity rather than differences among responses to various treatments, then equivalence tests are often more relevant than tests with traditional no-effect null hypotheses […] Two distributions F and G, such that G[x] = F[x − δ], are said to be equivalent providing |δ| < Δ, where Δ is the smallest difference of clinical significance. To test for equivalence, we obtain a confidence interval for δ, rejecting equivalence only if this interval contains values in excess of |Δ|. The width of a confidence interval decreases as the sample size increases; thus, a very large sample may be required to demonstrate equivalence just as a very large sample may be required to demonstrate a clinically significant effect.”
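
A minimal sketch of the confidence-interval approach to equivalence described above, with made-up data and a made-up margin Δ; whether a (1 − α) or a (1 − 2α) interval is the right one depends on the exact formulation (the two-one-sided-tests version uses the latter):

    import numpy as np
    from scipy import stats

    def equivalent(x, y, delta_max, alpha=0.05):
        # Conclude equivalence only if the whole confidence interval for the
        # mean difference lies inside (-delta_max, +delta_max).
        diff = np.mean(x) - np.mean(y)
        se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
        df = len(x) + len(y) - 2          # rough; a Welch-type df would be more careful
        half = stats.t.ppf(1 - alpha / 2, df) * se
        return (-delta_max < diff - half) and (diff + half < delta_max)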

“The most common test for comparing the means of two populations is based upon Student’s t. For Student’s t-test to provide significance levels that are exact rather than approximate, all the observations must be independent and, under the null hypothesis, all the observations must come from identical normal distributions. Even if the distribution is not normal, the significance level of the t-test is almost exact for sample sizes greater than 12; for most of the distributions one encounters in practice,5 the significance level of the t-test is usually within a percent or so of the correct value for sample sizes between 6 and 12. For testing against nonnormal alternatives, more powerful tests than the t-test exist. For example, a permutation test replacing the original observations with their normal scores is more powerful than the t-test […]. Permutation tests are derived by looking at the distribution of values the test statistic would take for each of the possible assignments of treatments to subjects. For example, if in an experiment two treatments were assigned at random to six subjects so that three subjects got one treatment and three the other, there would have been a total of 20 possible assignments of treatments to subjects.6 To determine a p-value, we compute for the data in hand each of the 20 possible values the test statistic might have taken. We then compare the actual value of the test statistic with these 20 values. If our test statistic corresponds to the most extreme value, we say that p = 1/20 = 0.05 (or 1/10 = 0.10 if this is a two-tailed permutation test). Against specific normal alternatives, this two-sample permutation test provides a most powerful unbiased test of the distribution-free hypothesis that the centers of the two distributions are the same […]. Violation of assumptions can affect not only the significance level of a test but the power of the test […] For example, although the significance level of the t-test is robust to departures from normality, the power of the t-test is not.”
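
The 3-versus-3 example can be written out in full. A sketch with made-up responses (not the book's data); the first three subjects are taken to have received treatment A:

    from itertools import combinations

    values = [23.1, 25.4, 27.0, 21.2, 20.8, 22.5]        # made-up observations
    observed = sum(values[:3]) - sum(values[3:])          # A-total minus B-total

    # Enumerate all C(6, 3) = 20 ways the treatment labels could have been assigned.
    perm_stats = []
    for idx in combinations(range(6), 3):
        a_total = sum(values[i] for i in idx)
        perm_stats.append(a_total - (sum(values) - a_total))

    # One-sided p-value: the fraction of relabellings at least as extreme as observed.
    p_one_sided = sum(s >= observed for s in perm_stats) / len(perm_stats)
    print(len(perm_stats), p_one_sided)                   # 20 assignments; here p = 1/20 = 0.05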

“Group randomized trials (GRTs) in public health research typically use a small number of randomized groups with a relatively large number of participants per group. Typically, some naturally occurring groups are targeted: work sites, schools, clinics, neighborhoods, even entire towns or states. A group can be assigned to either the intervention or control arm but not both; thus, the group is nested within the treatment. This contrasts with the approach used in multicenter clinical trials, in which individuals within groups (treatment centers) may be assigned to any treatment. GRTs are characterized by a positive correlation of outcomes within a group and by the small number of groups. Feng et al. [2001] report a positive intraclass correlation (ICC) between the individuals’ target-behavior outcomes within the same group. […] The variance inflation factor (VIF) as a result of such commonalities is 1 + (n − 1)σ. […] Although σ in GRTs is usually quite small, the VIFs could still be quite large because VIF is a function of the product of the correlation and group size n. […] To be appropriate, an analysis method of GRTs need[s to] acknowledge both the ICC and the relatively small number of groups.”
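
The variance inflation formula is easy to play with; in the usual notation the design effect is 1 + (n − 1) times the intraclass correlation, where n is the group size. A throwaway illustration (numbers made up):

    def vif(group_size, icc):
        # Design effect for a group-randomized trial.
        return 1 + (group_size - 1) * icc

    print(vif(20, 0.02))    # 1.38: modest inflation
    print(vif(500, 0.02))   # 10.98: a 'small' ICC with large groups still inflates variance ~11-fold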

“Recent simulations reveal that the classic test based on Pearson correlation is almost distribution free [Good, 2009]. Still, too often we treat a test of the correlation between two variables X and Y as if it were a test of their independence. X and Y can have a zero correlation coefficient, yet be totally dependent (for example, Y = X2). Even when the expected value of Y is independent of the expected value of X, the variance of Y might be directly proportional to the variance of X.”
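
The Y = X2 (X squared) example takes a few lines to reproduce; a sketch, with an arbitrary sample size:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)        # symmetric around zero
    y = x ** 2                          # completely determined by x
    print(np.corrcoef(x, y)[0, 1])      # essentially zero: no correlation, total dependence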

“[O]ne of the most common statistical errors is to assume that because an effect is not statistically significant it does not exist. One of the most common errors in using the analysis of variance is to assume that, because a factor such as sex does not yield a significant p-value, we may eliminate it from the model. […] The process of eliminating nonsignificant factors one by one from an analysis of variance means that we are performing a series of tests rather than a single test; thus, the actual significance level is larger than the declared significance level.”

“The greatest error associated with the use of statistical procedures is to make the assumption that one single statistical methodology can suffice for all applications. From time to time, a new statistical procedure will be introduced or an old one revived along with the assertion that at last the definitive solution has been found. […] Every methodology [however] has a proper domain of application and another set of applications for which it fails. Every methodology has its drawbacks and its advantages, its assumptions and its sources of error.”

“[T]o use the bootstrap or any other statistical methodology effectively, one has to be aware of its limitations. The bootstrap is of value in any situation in which the sample can serve as a surrogate for the population. If the sample is not representative of the population because the sample is small or biased, not selected at random, or its constituents are not independent of one another, then the bootstrap will fail. […] When using Bayesian methods[:] Do not use an arbitrary prior. Never report a p-value. Incorporate potential losses in the decision. Report the Bayes’ factor. […] In performing a meta-analysis, we need to distinguish between observational studies and randomized trials. Confounding and selection bias can easily distort the findings from observational studies. […] Publication and selection bias also plague the meta-analysis of completely randomized trials. […] One can not incorporate in a meta-analysis what one is not aware of. […] Similarly, the decision as to which studies to incorporate can dramatically affect the results. Meta-analyses of the same issue may reach opposite conclusions […] Where there are substantial differences between the different studies incorporated in a meta-analysis (their subjects or their environments), or substantial quantitative differences in the results from the different trials, a single overall summary estimate of treatment benefit has little practical applicability […]. Any analysis that ignores this heterogeneity is clinically misleading and scientifically naive […]. Heterogeneity should be scrutinized, with an attempt to explain it […] Bayesian methods can be effective in meta-analyses […]. In such situations, the parameters of various trials are considered to be random samples from a distribution of trial parameters. The parameters of this higher-level distribution are called hyperparameters, and they also have distributions. The model is called hierarchical. The extent to which the various trials reinforce each other is determined by the data. If the trials are very similar, the variation of the hyperparameters will be small, and the analysis will be very close to a classical meta-analysis. If the trials do not reinforce each other, the conclusions of the hierarchical Bayesian analysis will show a very high variance in the results. A hierarchical Bayesian analysis avoids the necessity of a prior decision as to whether the trials can be combined; the extent of the combination is determined purely by the data. This does not come for free; in contrast to the meta-analyses discussed above, all the original data (or at least the sufficient statistics) must be available for inclusion in the hierarchical model. The Bayesian method is also vulnerable to […] selection bias”.

“For small samples of three to five observations, summary statistics are virtually meaningless. Reproduce the actual observations; this is easier to do and more informative. Though the arithmetic mean or average is in common use for summarizing measurements, it can be very misleading. […] When the arithmetic mean is meaningful, it is usually equal to or close to the median. Consider reporting the median in the first place. The geometric mean is more appropriate than the arithmetic in three sets of circumstances: 1. When losses or gains can best be expressed as a percentage rather than a fixed value. 2. When rapid growth is involved, as is the case with bacterial and viral populations. 3. When the data span several orders of magnitude, as with the concentration of pollutants. […] Most populations are actually mixtures of populations. If multiple modes are observed in samples greater than 25 in size, the number of modes should be reported. […] The terms dispersion, precision, and accuracy are often confused. Dispersion refers to the variation within a sample or a population. Standard measures of dispersion include the variance, the mean absolute deviation, the interquartile range, and the range. Precision refers to how close several estimates based upon successive samples will come to one another, whereas accuracy refers to how close an estimate based on a sample will come to the population parameter it is estimating.”
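
A small illustration of the arithmetic-versus-geometric-mean point for data spanning several orders of magnitude (made-up concentrations):

    import math

    conc = [0.3, 2.0, 5.0, 40.0, 900.0]                  # made-up pollutant concentrations
    arithmetic = sum(conc) / len(conc)
    geometric = math.exp(sum(math.log(c) for c in conc) / len(conc))
    print(arithmetic, geometric)    # ~189 vs ~10: the arithmetic mean is dominated by the one large value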

“One of the most egregious errors in statistics, one encouraged, if not insisted upon by the editors of journals in the biological and social sciences, is the use of the notation “Mean ± Standard Error” to report the results of a set of observations. The standard error is a useful measure of population dispersion if the observations are continuous measurements that come from a normal or Gaussian distribution. […] But if the observations come from a nonsymmetric distribution such as an exponential or a Poisson, or a truncated distribution such as the uniform, or a mixture of populations, we cannot draw any such inference. Recall that the standard error equals the standard deviation divided by the square root of the sample size […] As the standard error depends on the squares of individual observations, it is particularly sensitive to outliers. A few extreme or outlying observations will have a dramatic impact on its value. If you can not be sure your observations come from a normal distribution, then consider reporting your results either in the form of a histogram […] or a Box and Whiskers plot […] If the underlying distribution is not symmetric, the use of the ± SE notation can be deceptive as it suggests a nonexistent symmetry. […] When the estimator is other than the mean, we cannot count on the Central Limit Theorem to ensure a symmetric sampling distribution. We recommend that you use the bootstrap whenever you report an estimate of a ratio or dispersion. […] If you possess some prior knowledge of the shape of the population distribution, you should take advantage of that knowledge by using a parametric bootstrap […]. The parametric bootstrap is particularly recommended for use in determining the precision of percentiles in the tails (P20, P10, P90, and so forth).”
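
A bare-bones nonparametric bootstrap for an estimator other than the mean, along the lines recommended above. The data and the choice of estimator (the coefficient of variation, a ratio) are mine, not the book's:

    import numpy as np

    rng = np.random.default_rng(1)
    sample = rng.exponential(scale=2.0, size=50)          # skewed, made-up data

    def cv(x):
        # Coefficient of variation: a ratio, so 'mean +/- SE' reporting is risky.
        return np.std(x, ddof=1) / np.mean(x)

    boot = np.array([cv(rng.choice(sample, size=len(sample), replace=True))
                     for _ in range(5000)])
    print(cv(sample), np.percentile(boot, [2.5, 97.5]))   # point estimate plus a percentile interval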

“A common error is to misinterpret the confidence interval as a statement about the unknown parameter. It is not true that the probability that a parameter is included in a 95% confidence interval is 95%. What is true is that if we derive a large number of 95% confidence intervals, we can expect the true value of the parameter to be included in the computed intervals 95% of the time. (That is, the true values will be included if the assumptions on which the tests and confidence intervals are based are satisfied 100% of the time.) Like the p-value, the upper and lower confidence limits of a particular confidence interval are random variables, for they depend upon the sample that is drawn. […] In interpreting a confidence interval based on a test of significance, it is essential to realize that the center of the interval is no more likely than any other value, and the confidence to be placed in the interval is no greater than the confidence we have in the experimental design and statistical test it is based upon.”
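
The frequency interpretation is easy to check by simulation. A sketch assuming normally distributed data with a known true mean (all numbers made up):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    true_mean, n, alpha = 10.0, 25, 0.05
    covered = 0
    for _ in range(10_000):
        x = rng.normal(loc=true_mean, scale=3.0, size=n)
        half = stats.t.ppf(1 - alpha / 2, n - 1) * x.std(ddof=1) / np.sqrt(n)
        covered += (x.mean() - half <= true_mean <= x.mean() + half)
    print(covered / 10_000)   # close to 0.95: the coverage belongs to the procedure, not to any single interval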

“How accurate our estimates are and how consistent they will be from sample to sample will depend upon the nature of the error terms. If none of the many factors that contribute to the value of ε make more than a small contribution to the total, then ε will have a Gaussian distribution. If the {εi} are independent and normally distributed (Gaussian), then the ordinary least-squares estimates of the coefficients produced by most statistical software will be unbiased and have minimum variance. These desirable properties, indeed the ability to obtain coefficient values that are of use in practical applications, will not be present if the wrong model has been adopted. They will not be present if successive observations are dependent. The values of the coefficients produced by the software will not be of use if the associated losses depend on some function of the observations other than the sum of the squares of the differences between what is observed and what is predicted. In many practical problems, one is more concerned with minimizing the sum of the absolute values of the differences or with minimizing the maximum prediction error. Finally, if the error terms come from a distribution that is far from Gaussian, a distribution that is truncated, flattened or asymmetric, the p-values and precision estimates produced by the software may be far from correct.”

“I have attended far too many biology conferences at which speakers have used a significant linear regression of one variable on another as “proof” of a “linear” relationship or first-order behavior. […] The unfortunate fact, which should not be forgotten, is that if EY = a f[X], where f is a monotonically increasing function of X, then any attempt to fit the equation Y = b g[X], where g is also a monotonically increasing function of X, will result in a value of b that is significantly different from zero. The “trick,” […] is in selecting an appropriate (cause-and-effect-based) functional form g to begin with. Regression methods and expensive software will not find the correct form for you.”
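
This is easy to demonstrate by simulation: below, Y is generated from a square-root relationship plus noise, yet an ordinary straight-line fit still returns a slope that is 'significantly different from zero'. (My own toy example, not the book's.)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.uniform(1, 100, size=200)
    y = 5 * np.sqrt(x) + rng.normal(scale=2.0, size=200)   # monotone in x, but not linear

    slope, intercept, r, p, se = stats.linregress(x, y)
    print(slope, p)   # a tiny p-value, which says nothing about the functional form being correct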

November 4, 2017 Posted by | Books, Statistics | Leave a comment

Common Errors in Statistics…

“Pressed by management or the need for funding, too many research workers have no choice but to go forward with data analysis despite having insufficient statistical training. Alas, though a semester or two of undergraduate statistics may develop familiarity with the names of some statistical methods, it is not enough to be aware of all the circumstances under which these methods may be applicable.

The purpose of the present text is to provide a mathematically rigorous but readily understandable foundation for statistical procedures. Here are such basic concepts in statistics as null and alternative hypotheses, p-value, significance level, and power. Assisted by reprints from the statistical literature, we reexamine sample selection, linear regression, the analysis of variance, maximum likelihood, Bayes’ Theorem, meta-analysis and the bootstrap. New to this edition are sections on fraud and on the potential sources of error to be found in epidemiological and case-control studies.

Examples of good and bad statistical methodology are drawn from agronomy, astronomy, bacteriology, chemistry, criminology, data mining, epidemiology, hydrology, immunology, law, medical devices, medicine, neurology, observational studies, oncology, pricing, quality control, seismology, sociology, time series, and toxicology. […] Lest the statisticians among you believe this book is too introductory, we point out the existence of hundreds of citations in statistical literature calling for the comprehensive treatment we have provided. Regardless of past training or current specialization, this book will serve as a useful reference; you will find applications for the information contained herein whether you are a practicing statistician or a well-trained scientist who just happens to apply statistics in the pursuit of other science.”

I’ve been reading this book, and I really like it so far. A lot of the material is review, but there are of course also some new ideas here and there (for example, I’d never heard about Stein’s paradox before), and given how much you need to remember and keep in mind in order not to make silly mistakes when analyzing data or interpreting the results of statistical analyses, occasional reviews of these things are probably a very good idea.

I have added some more observations from the first 100 pages or so below:

“Test only relevant null hypotheses. The null hypothesis has taken on an almost mythic role in contemporary statistics. Obsession with the null (more accurately spelled and pronounced nil) has been allowed to shape the direction of our research. […] Virtually any quantifiable hypothesis can be converted into null form. There is no excuse and no need to be content with a meaningless nil. […] we need to have an alternative hypothesis or alternatives firmly in mind when we set up a test. Too often in published research, such alternative hypotheses remain unspecified or, worse, are specified only after the data are in hand. We must specify our alternatives before we commence an analysis, preferably at the same time we design our study. Are our alternatives one-sided or two-sided? If we are comparing several populations at the same time, are their means ordered or unordered? The form of the alternative will determine the statistical procedures we use and the significance levels we obtain. […] The critical values and significance levels are quite different for one-tailed and two-tailed tests and, all too often, the wrong test has been employed in published work. McKinney et al. [1989] reviewed some 70-plus articles that appeared in six medical journals. In over half of these articles, Fisher’s exact test was applied improperly. Either a one-tailed test had been used when a two-tailed test was called for or the authors of the paper simply had not bothered to state which test they had used. […] the F-ratio and the chi-square are what are termed omnibus tests, designed to be sensitive to all possible alternatives. As such, they are not particularly sensitive to ordered alternatives such as “more fertilizer equals more growth” or “more aspirin equals faster relief of headache.” Tests for such ordered responses at k distinct treatment levels should properly use the Pitman correlation.”

“Before we initiate data collection, we must have a firm idea of what we will measure and how we will measure it. A good response variable

  • Is easy to record […]
  • Can be measured objectively on a generally accepted scale.
  • Is measured in appropriate units.
  • Takes values over a sufficiently large range that discriminates well.
  • Is well defined. […]
  • Has constant variance over the range used in the experiment (Bishop and Talbot, 2001).”

“A second fundamental principle is also applicable to both experiments and surveys: Collect exact values whenever possible. Worry about grouping them in intervals or discrete categories later.”

“Sample size must be determined for each experiment; there is no universally correct value. We need to understand and make use of the relationships among effect size, sample size, significance level, power, and the precision of our measuring instruments. Increase the precision (and hold all other parameters fixed) and we can decrease the required number of observations. Decreases in any or all of the intrinsic and extrinsic sources of variation will also result in a decrease in the required number. […] The smallest effect size of practical interest may be determined through consultation with one or more domain experts. The smaller this value, the greater the number of observations that will be required. […] Strictly speaking, the significance level and power should be chosen so as to minimize the overall cost of any project, balancing the cost of sampling with the costs expected from Type I and Type II errors. […] When determining sample size for data drawn from the binomial or any other discrete distribution, one should always display the power curve. […] As a result of inspecting the power curve by eye, you may come up with a less-expensive solution than your software. […] If the data do not come from a well-tabulated distribution, then one might use a bootstrap to estimate the power and significance level. […] Many researchers today rely on menu-driven software to do their power and sample-size calculations. Most such software comes with default settings […] — settings that are readily altered, if, that is, investigators bother to take the time.”
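
‘Displaying the power curve’ can be done with exact binomial calculations for a simple one-sample test of p = 0.5 against larger values. The sample size and grid below are arbitrary, just to show the shape of the computation:

    import numpy as np
    from scipy import stats

    n, p0, alpha = 40, 0.5, 0.05
    crit = int(stats.binom.ppf(1 - alpha, n, p0)) + 1      # reject when successes >= crit
    print("actual level:", 1 - stats.binom.cdf(crit - 1, n, p0))

    for p_true in np.arange(0.50, 0.85, 0.05):
        power = 1 - stats.binom.cdf(crit - 1, n, p_true)
        print(round(p_true, 2), round(power, 3))           # inspect the curve by eye before settling on n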

“The relative ease with which a program like Stata […] can produce a sample size may blind us to the fact that the number of subjects with which we begin a study may bear little or no relation to the number with which we conclude it. […] Potential subjects can and do refuse to participate. […] Worse, they may agree to participate initially, then drop out at the last minute […]. They may move without a forwarding address before a scheduled follow-up, or may simply not bother to show up for an appointment. […] The key to a successful research program is to plan for such drop-outs in advance and to start the trials with some multiple of the number required to achieve a given power and significance level. […] it is the sample you end with, not the sample you begin with, that determines the power of your tests. […] An analysis of those who did not respond to a survey or a treatment can sometimes be as or more informative than the survey itself. […] Be sure to incorporate in your sample design and in your budget provisions for sampling nonresponders.”

“[A] randomly selected sample may not be representative of the population as a whole. For example, if a minority comprises less than 10% of a population, then a jury of 12 persons selected at random from that population will fail to contain a single member of that minority at least 28% of the time.”
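
The 28% figure is a one-line calculation (assuming independent random selection and a minority share of exactly 10%):

    # Probability that a 12-person jury drawn at random contains no member
    # of a group making up 10% of the population.
    print((1 - 0.10) ** 12)   # about 0.282, i.e. at least 28% of the time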

“The proper starting point for the selection of the best method of estimation is with the objectives of our study: What is the purpose of our estimate? If our estimate is θ* and the actual value of the unknown parameter is θ, what losses will we be subject to? It is difficult to understand the popularity of the method of maximum likelihood and other estimation procedures that do not take these losses into consideration. The majority of losses will be monotonically nondecreasing in nature, that is, the further apart the estimate θ* and the true value θ, the larger our losses are likely to be. Typical forms of the loss function are the absolute deviation |θ* – θ|, the square deviation (θ* − θ)2, and the jump, that is, no loss if |θ* − θ| < i, and a big loss otherwise. Or the loss function may resemble the square deviation but take the form of a step function increasing in discrete increments. Desirable estimators are impartial, consistent, efficient, robust, and minimum loss. […] Interval estimates are to be preferred to point estimates; they are less open to challenge for they convey information about the estimate’s precision.”

“Estimators should be consistent, that is, the larger the sample, the greater the probability the resultant estimate will be close to the true population value. […] [A] consistent estimator […] is to be preferred to another if the first consistent estimator can provide the same degree of accuracy with fewer observations. To simplify comparisons, most statisticians focus on the asymptotic relative efficiency (ARE), defined as the limit with increasing sample size of the ratio of the number of observations required for each of two consistent statistical procedures to achieve the same degree of accuracy. […] Estimators that are perfectly satisfactory for use with symmetric, normally distributed populations may not be as desirable when the data come from nonsymmetric or heavy-tailed populations, or when there is a substantial risk of contamination with extreme values. When estimating measures of central location, one way to create a more robust estimator is to trim the sample of its minimum and maximum values […]. As information is thrown away, trimmed estimators are [however] less efficient. […] Many semiparametric estimators are not only robust but provide for high ARE with respect to their parametric counterparts. […] The accuracy of an estimate […] and the associated losses will vary from sample to sample. A minimum loss estimator is one that minimizes the losses when the losses are averaged over the set of all possible samples. Thus, its form depends upon all of the following: the loss function, the population from which the sample is drawn, and the population characteristic that is being estimated. An estimate that is optimal in one situation may only exacerbate losses in another. […] It is easy to envision situations in which we are less concerned with the average loss than with the maximum possible loss we may incur by using a particular estimation procedure. An estimate that minimizes the maximum possible loss is termed a mini–max estimator.”
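
A quick illustration of trimming as a robustness device, using made-up data with a single gross outlier:

    import numpy as np
    from scipy import stats

    data = np.array([9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4, 10.1, 99.0])   # one contaminated value
    print(np.mean(data))                # 18.95: dragged far from 10 by the outlier
    print(stats.trim_mean(data, 0.1))   # 10.1: trims 10% from each tail before averaging
    print(np.median(data))              # 10.1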

“In survival studies and reliability analyses, we follow each subject and/or experiment unit until either some event occurs or the experiment is terminated; the latter observation is referred to as censored. The principal sources of error are the following:

  • Lack of independence within a sample
  • Lack of independence of censoring
  • Too many censored values
  • Wrong test employed”

“Lack of independence within a sample is often caused by the existence of an implicit factor in the data. For example, if we are measuring survival times for cancer patients, diet may be correlated with survival times. If we do not collect data on the implicit factor(s) (diet in this case), and the implicit factor has an effect on survival times, then we no longer have a sample from a single population. Rather, we have a sample that is a mixture drawn from several populations, one for each level of the implicit factor, each with a different survival distribution. Implicit factors can also affect censoring times, by affecting the probability that a subject will be withdrawn from the study or lost to follow-up. […] Stratification can be used to control for an implicit factor. […] This is similar to using blocking in analysis of variance. […] If the pattern of censoring is not independent of the survival times, then survival estimates may be too high (if subjects who are more ill tend to be withdrawn from the study), or too low (if subjects who will survive longer tend to drop out of the study and are lost to follow-up). If a loss or withdrawal of one subject could increase the probability of loss or withdrawal of other subjects, this would also lead to lack of independence between censoring and the subjects. […] A study may end up with many censored values as a result of having large numbers of subjects withdrawn or lost to follow-up, or from having the study end while many subjects are still alive. Large numbers of censored values decrease the equivalent number of subjects exposed (at risk) at later times, reducing the effective sample sizes. […] Survival tests perform better when the censoring is not too heavy, and, in particular, when the pattern of censoring is similar across the different groups.”

“Kaplan–Meier survival analysis (KMSA) is the appropriate starting point [in the type 2 censoring setting]. KMSA can estimate survival functions even in the presence of censored cases and requires minimal assumptions. If covariates other than time are thought to be important in determining duration to outcome, results reported by KMSA will represent misleading averages, obscuring important differences in groups formed by the covariates (e.g., men vs. women). Since this is often the case, methods that incorporate covariates, such as event-history models and Cox regression, may be preferred. For small samples, the permutation distributions of the Gehan–Breslow, Mantel–Cox, and Tarone–Ware survival test statistics and not the chi-square distribution should be used to compute p-values. If the hazard or survival functions are not parallel, then none of the three tests […] will be particularly good at detecting differences between the survival functions.”
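
A hand-rolled Kaplan–Meier sketch with made-up times and censoring indicators (in practice one would of course use a proper survival-analysis package):

    import numpy as np

    times  = np.array([2, 3, 3, 5, 8, 8, 9, 12, 14, 15])   # made-up follow-up times
    events = np.array([1, 1, 0, 1, 1, 0, 1,  0,  1,  0])   # 1 = event observed, 0 = censored

    surv = 1.0
    for t in np.unique(times[events == 1]):                # step only at event times
        at_risk = np.sum(times >= t)                       # still under observation just before t
        deaths = np.sum((times == t) & (events == 1))
        surv *= 1 - deaths / at_risk                       # product-limit step
        print(t, round(surv, 3))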

November 1, 2017 Posted by | Books, Statistics | Leave a comment

Molecules

This book is almost exclusively devoted to covering biochemistry topics. When the coverage is decent I find biochemistry reasonably interesting – for example I really liked Beer, Björk & Beardall’s photosynthesis book – and the coverage here was okay, but not more than that. I think that Ball was trying to cover a bit too much ground, or perhaps that there was really too much ground to cover for it to even make sense to try to write a book on this particular topic in a series like this. I learned a lot though.

As usual I’ve added some quotes from the coverage below, as well as some additional links to topics/concepts/people/etc. covered in the book.

“Most atoms on their own are highly reactive – they have a predisposition to join up with other atoms. Molecules are collectives of atoms, firmly welded together into assemblies that may contain anything up to many millions of them. […] By molecules, we generally mean assemblies of a discrete, countable number of atoms. […] Some pure elements adopt molecular forms; others do not. As a rough rule of thumb, metals are non-molecular […] whereas non-metals are molecular. […] molecules are the smallest units of meaning in chemistry. It is through molecules, not atoms, that one can tell stories in the sub-microscopic world. They are the words; atoms are just the letters. […] most words are distinct aggregates of several letters arranged in a particular order. We often find that longer words convey subtler and more finely nuanced meanings. And in molecules, as in words, the order in which the component parts are put together matters: ‘save’ and ‘vase’ do not mean the same thing.”

“There are something like 60,000 different varieties of protein molecule in human cells, each conducting a highly specialized task. It would generally be impossible to guess what this task is merely by looking at a protein. They are undistinguished in appearance, mostly globular in shape […] and composed primarily of carbon, hydrogen, nitrogen, oxygen, and a little sulphur. […] There are twenty varieties of amino acids in natural proteins. In the chain, one amino acid is linked to the next via a covalent bond called a peptide bond. Both molecules shed a few extraneous atoms to make this linkage, and the remainder – another link in the chain – is called a residue. The chain itself is termed a polypeptide. Any string of amino acid residues is a polypeptide. […] In a protein the order of amino acids along the chain – the sequence – is not arbitrary. It is selected […] to ensure that the chain will collapse and curl up in water into the precisely determined globular form of the protein, with all parts of the chain in the right place. This shape can be destroyed by warming the protein, a process called denaturation. But many proteins will fold up again spontaneously into the same globular structure when cooled. In other words, the chain has a kind of memory of its folded shape. The details of this folding process are still not fully understood – it is, in fact, one of the central unsolved puzzles of molecular biology. […] proteins are made not in the [cell] nucleus but in a different compartment called the endoplasmic reticulum […]. The gene is transcribed first into a molecule related to DNA, called RNA (ribonucleic acid). The RNA molecules travel from the nucleus to the endoplasmic reticulum, where they are translated to proteins. The proteins are then shipped off to where they are needed.”

“[M]icrofibrils aggregate together in various ways. For example, they can gather in a staggered arrangement to form thick strands called banded fibrils. […] Banded fibrils constitute the connective tissues between cells – they are the cables that hold our flesh together. Bone consists of collagen banded fibrils sprinkled with tiny crystals of the mineral hydroxyapatite, which is basically calcium phosphate. Because of the high protein content of bone, it is flexible and resilient as well as hard. […] In contrast to the disorderly tangle of connective tissue, the eye’s cornea contains collagen fibrils packed side by side in an orderly manner. These fibrils are too small to scatter light, and so the material is virtually transparent. The basic design principle – one that recurs often in nature – is that, by tinkering with the chemical composition and, most importantly, the hierarchical arrangement of the same basic molecules, it is possible to extract several different kinds of material properties. […] cross-links determine the strength of the material: hair and fingernail are more highly cross-linked than skin. Curly or frizzy hair can be straightened by breaking some of [the] sulphur cross-links to make the hairs more pliable. […] Many of the body’s structural fabrics are proteins. Unlike enzymes, structural proteins do not have to conduct any delicate chemistry, but must simply be (for instance) tough, or flexible, or waterproof. In principle many other materials besides proteins would suffice; and indeed, plants use cellulose (a sugar-based polymer) to make their tissues.”

“In many ways, it is metabolism and not replication that provides the best working definition of life. Evolutionary biologists would say that we exist in order to reproduce – but we are not, even the most amorous of us, trying to reproduce all the time. Yet, if we stop metabolizing, even for a minute or two, we are done for. […] Whether waking or asleep, our bodies stay close to a healthy temperature of 37 °C. There is only one way of doing this: our cells are constantly pumping out heat, a by-product of metabolism. Heat is not really the point here – it is simply unavoidable, because all conversion of energy from one form to another squanders some of it this way. Our metabolic processes are primarily about making molecules. Cells cannot survive without constantly reinventing themselves: making new amino acids for proteins, new lipids for membranes, new nucleic acids so that they can divide.”

“In the body, combustion takes place in a tightly controlled, graded sequence of steps, and some chemical energy is drawn off and stored at each stage. […] A power station burns coal, oil, or gas […]. Burning is just a means to an end. The heat is used to turn water into steam; the pressure of the steam drives turbines; the turbines spin and send wire coils whirling in the arms of great magnets, which induces an electrical current in the wire. Energy is passed on, from chemical to heat to mechanical to electrical. And every plant has a barrage of regulatory and safety mechanisms. There are manual checks on pressure gauges and on the structural integrity of moving parts. Automatic sensors make the measurements. Failsafe devices avert catastrophic failure. Energy generation in the cell is every bit as complicated. […] The cell seems to have thought of everything, and has protein devices for fine-tuning it all.”

“ATP is the key to the maintenance of cellular integrity and organization, and so the cell puts a great deal of effort into making as much of it as possible from each molecule of glucose that it burns. About 40 per cent of the energy released by the combustion of food is conserved in ATP molecules. ATP is rich in energy because it is like a coiled spring. It contains three phosphate groups, linked like so many train carriages. Each of these phosphate groups has a negative charge; this means that they repel one another. But because they are joined by chemical bonds, they cannot escape one another […]. Straining to get away, the phosphates pull an energetically powerful punch. […] The links between phosphates can be snipped in a reaction that involves water […] called hydrolysis (‘splitting with water’). Each time a bond is hydrolysed, energy is released. Setting free the outermost phosphate converts ATP to adenosine diphosphate (ADP); cleave the second phosphate and it becomes adenosine monophosphate (AMP). Both severances release comparable amounts of energy.”

“Burning sugar is a two-stage process, beginning with its transformation to a molecule called pyruvate in a process known as glycolysis […]. This involves a sequence of ten enzyme-catalysed steps. The first five of these split glucose in half […], powered by the consumption of ATP molecules: two of them are ‘decharged’ to ADP for every glucose molecule split. But the conversion of the fragments to pyruvate […] permits ATP to be recouped from ADP. Four ATP molecules are made this way, so that there is an overall gain of two ATP molecules per glucose molecule consumed. Thus glycolysis charges the cell’s batteries. Pyruvate then normally enters the second stage of the combustion process: the citric acid cycle, which requires oxygen. But if oxygen is scarce – that is, under anaerobic conditions – a contingency plan is enacted whereby pyruvate is instead converted to the molecule lactate. […] The first thing a mitochondrion does is convert pyruvate enzymatically to a molecule called acetyl coenzyme A (CoA). The breakdown of fatty acids and glycerides from fats also eventually generates acetyl CoA. The [citric acid] cycle is a sequence of eight enzyme-catalysed reactions that transform acetyl CoA first to citric acid and then to various other molecules, ending with […] oxaloacetate. This end is a new beginning, for oxaloacetate reacts with acetyl CoA to make citric acid. In some of the steps of the cycle, carbon dioxide is generated as a by-product. It dissolves in the bloodstream and is carried off to the lungs to be exhaled. Thus in effect the carbon in the original glucose molecules is syphoned off into the end product carbon dioxide, completing the combustion process. […] Also syphoned off from the cycle are electrons – crudely speaking, the citric acid cycle sends an electrical current to a different part of the mitochondrion. These electrons are used to convert oxygen molecules and positively charged hydrogen ions to water – an energy-releasing process. The energy is captured and used to make ATP in abundance.”

“While mammalian cells have fuel-burning factories in the form of mitochondria, the solar-power centres in the cells of plant leaves are compartments called chloroplasts […] chloroplast takes carbon dioxide and water, and from them constructs […] sugar. […] In the first part of photosynthesis, light is used to convert NADP to an electron carrier (NADPH) and to transform ADP to ATP. This is effectively a charging-up process that primes the chloroplast for glucose synthesis. In the second part, ATP and NADPH are used to turn carbon dioxide into sugar, in a cyclic sequence of steps called the Calvin–Benson cycle […] There are several similarities between the processes of aerobic metabolism and photosynthesis. Both consist of two distinct sub-processes with separate evolutionary origins: a linear sequence of reactions coupled to a cyclic sequence that regenerates the molecules they both need. The bridge between glycolysis and the citric acid cycle is the electron-ferrying NAD molecule; the two sub-processes of photosynthesis are bridged by the cycling of an almost identical molecule, NAD phosphate (NADP).”

“Despite the variety of messages that hormones convey, the mechanism by which the signal is passed from a receptor protein at the cell surface to the cell’s interior is the same in almost all cases. It involves a sequence of molecular interactions in which molecules transform one another down a relay chain. In cell biology this is called signal transduction. At the same time as relaying the message, these interactions amplify the signal so that the docking of a single hormone molecule to a receptor creates a big response inside the cell. […] The receptor proteins span the entire width of the membrane; the hormone-binding site protrudes on the outer surface, while the base of the receptor emerges from the inner surface […]. When the receptor binds its target hormone, a shape change is transmitted to the lower face of the protein, which enables it to act as an enzyme. […] The participants of all these processes [G protein, guanosine diphosphate and -triphosphate, adenylate cyclase… – figured it didn’t matter if I left out a few details – US…] are stuck to the cell wall. But cAMP floats freely in the cell’s cytoplasm, and is able to carry the signal into the cell interior. It is called a ‘second messenger’, since it is the agent that relays the signal of the ‘first messenger’ (the hormone) into the community of the cell. Cyclic AMP becomes attached to protein molecules called protein kinases, whereupon they in turn become activated as enzymes. Most protein kinases switch other enzymes on and off by attaching phosphate groups to them – a reaction called phosphorylation. […] The process might sound rather complicated, but it is really nothing more than a molecular relay. The signal is passed from the hormone to its receptor, then to the G protein, on to an enzyme and thence to the second messenger, and further on to a protein kinase, and so forth. The G-protein mechanism of signal transduction was discovered in the 1970s by Alfred Gilman and Martin Rodbell, for which they received the 1994 Nobel Prize for medicine. It represents one of the most widespread means of getting a message across a cell membrane. […] it is not just hormonal signalling that makes use of the G-protein mechanism. Our senses of vision and smell, which also involve the transmission of signals, employ the same switching process.”

“Although axon signals are electrical, they differ from those in the metal wires of electronic circuitry. The axon is basically a tubular cell membrane decorated along its length with channels that let sodium and potassium ions in and out. Some of these ion channels are permanently open; others are ‘gated’, opening or closing in response to electrical signals. And some are not really channels at all but pumps, which actively transport sodium ions out of the cell and potassium ions in. These sodium-potassium pumps can move ions […] powered by ATP. […] Drugs that relieve pain typically engage with inhibitory receptors. Morphine, the main active ingredient of opium, binds to so-called opioid receptors in the spinal cord, which inhibit the transmission of pain signals to the brain. There are also opioid receptors in the brain itself, which is why morphine and related opiate drugs have a mental as well as a somatic effect. These receptors in the brain are the binding sites of peptide molecules called endorphins, which the brain produces in response to pain. Some of these are themselves extremely powerful painkillers. […] Not all pain-relieving drugs (analgesics) work by blocking the pain signal. Some prevent the signal from ever being sent. Pain signals are initiated by peptides called prostaglandins, which are manufactured and released by distressed cells. Aspirin (acetylsalicylic acid) latches onto and inhibits one of the enzymes responsible for prostaglandin synthesis, cutting off the cry of pain at its source. Unfortunately, prostaglandins are also responsible for making the mucus that protects the stomach lining […], so one of the side effects of aspirin is the risk of ulcer formation.”

“Shape changes […] are common when a receptor binds its target. If binding alone is the objective, a big shape change is not terribly desirable, since the internal rearrangements of the receptor make heavy weather of the binding event and may make it harder to achieve. This is why many supramolecular hosts are designed so that they are ‘pre-organized’ to receive their guests, minimizing the shape change caused by binding.”

“The way that a protein chain folds up is determined by its amino-acid sequence […] so the ‘information’ for making a protein is uniquely specified by this sequence. DNA encodes this information using […] groups of three bases [to] represent each amino acid. This is the genetic code.* How a particular protein sequence determines the way its chain folds is not yet fully understood. […] Nevertheless, the principle of information flow in the cell is clear. DNA is a manual of information about proteins. We can think of each chromosome as a separate chapter, each gene as a word in that chapter (they are very long words!), and each sequential group of three bases in the gene as a character in the word. Proteins are translations of the words into another language, whose characters are amino acids. In general, only when the genetic language is translated can we understand what it means.”

“It is thought that only about 2–3 per cent of the entire human genome codes for proteins. […] Some people object to genetic engineering on the grounds that it is ethically wrong to tamper with the fundamental material of life – DNA – whether it is in bacteria, humans, tomatoes, or sheep. One can understand such objections, and it would be arrogant to dismiss them as unscientific. Nevertheless, they do sit uneasily with what we now know about the molecular basis of life. The idea that our genetic make-up is sacrosanct looks hard to sustain once we appreciate how contingent, not to say arbitrary, that make-up is. Our genomes are mostly parasite-riddled junk, full of the detritus of over three billion years of evolution.”

Links:

Roald Hoffmann.
Molecular solid.
Covalent bond.
Visible spectrum.
X-ray crystallography.
Electron microscope.
Valence (chemistry).
John Dalton.
Isomer.
Lysozyme.
Organic chemistry.
Synthetic dye industry/Alizarin.
Paul Ehrlich (staining).
Retrosynthetic analysis. [I would have added a link to ‘rational synthesis as well here if there’d been a good article on that topic, but I wasn’t able to find one. Anyway: “Organic chemists call [the] kind of procedure […] in which a starting molecule is converted systematically, bit by bit, to the desired product […] a rational synthesis.”]
Paclitaxel synthesis.
Protein.
Enzyme.
Tryptophan synthase.
Ubiquitin.
Amino acid.
Protein folding.
Peptide bond.
Hydrogen bond.
Nucleotide.
Chromosome.
Structural gene. Regulatory gene.
Operon.
Gregor Mendel.
Mitochondrial DNA.
RNA world.
Ribozyme.
Artificial gene synthesis.
Keratin.
Silk.
Vulcanization.
Aramid.
Microtubule.
Tubulin.
Carbon nanotube.
Amylase/pepsin/glycogen/insulin.
Cytochrome c oxidase.
ATP synthase.
Haemoglobin.
Thylakoid membrane.
Chlorophyll.
Liposome.
TNT.
Motor protein. Dynein. Kinesin.
Sarcomere.
Sliding filament theory of muscle action.
Photoisomerization.
Supramolecular chemistry.
Hormone. Endocrine system.
Neurotransmitter.
Ionophore.
DNA.
Mutation.
Intron. Exon.
Transposon.
Molecular electronics.

October 30, 2017 Posted by | Biology, Books, Botany, Chemistry, Genetics, Neurology, Pharmacology | Leave a comment

Child psychology

I was not impressed with this book, but as mentioned in the short review it was ‘not completely devoid of observations of interest’.

Before I start my proper coverage of the book, here are some related ‘observations’ from a different book I recently read, Bellwether:

““First we’re all going to play a game. Bethany, it’s Brittany’s birthday.” She attempted a game involving balloons with pink Barbies on them and then gave up and let Brittany open her presents. “Open Sandy’s first,” Gina said, handing her the book.
“No, Caitlin, these are Brittany’s presents.”
Brittany ripped the paper off Toads and Diamonds and looked at it blankly.
“That was my favorite fairy tale when I was little,” I said. “It’s about a girl who meets a good fairy, only she doesn’t know it because the fairy’s in disguise—” but Brittany had already tossed it aside and was ripping open a Barbie doll in a glittery dress.
“Totally Hair Barbie!” she shrieked.
“Mine,” Peyton said, and made a grab that left Brittany holding nothing but Barbie’s arm.
“She broke Totally Hair Barbie!” Brittany wailed.
Peyton’s mother stood up and said calmly, “Peyton, I think you need a time-out.”
I thought Peyton needed a good swat, or at least to have Totally Hair Barbie taken away from her and given back to Brittany, but instead her mother led her to the door of Gina’s bedroom. “You can come out when you’re in control of your feelings,” she said to Peyton, who looked like she was in control to me.
“I can’t believe you’re still using time-outs,” Chelsea’s mother said. “Everybody’s using holding now.”
“Holding?” I asked.
“You hold the child immobile on your lap until the negative behavior stops. It produces a feeling of interceptive safety.”
“Really,” I said, looking toward the bedroom door. I would have hated trying to hold Peyton against her will.
“Holding’s been totally abandoned,” Lindsay’s mother said. “We use EE.”
“EE?” I said.
“Esteem Enhancement,” Lindsay’s mother said. “EE addresses the positive peripheral behavior no matter how negative the primary behavior is.”
“Positive peripheral behavior?” Gina said dubiously.
“When Peyton took the Barbie away from Brittany just now,” Lindsay’s mother said, obviously delighted to explain, “you would have said, ‘My, Peyton, what an assertive grip you have.’”

[A little while later, during the same party:]

“My, Peyton,” Lindsay’s mother said, “what a creative thing to do with your frozen yogurt.””

Okay, on to the coverage of the book. I haven’t covered it in much detail, but I have included some observations of interest below.

“[O]ptimal development of grammar (knowledge about language structure) and phonology (knowledge about the sound elements in words) depends on the brain experiencing sufficient linguistic input. So quantity of language matters. The quality of the language used with young children is also important. The easiest way to extend the quality of language is with interactions around books. […] Natural conversations, focused on real events in the here and now, are those which are critical for optimal development. Despite this evidence, just talking to young children is still not valued strongly in many environments. Some studies find that over 60 per cent of utterances to young children are ‘empty language’ — phrases such as ‘stop that’, ‘don’t go there’, and ‘leave that alone’. […] studies of children who experience high levels of such ‘restricted language’ reveal a negative impact on later cognitive, social, and academic development.”

“[Neural] plasticity is largely achieved by the brain growing connections between brain cells that are already there. Any environmental input will cause new connections to form. At the same time, connections that are not used much will be pruned. […] the consistency of what is experienced will be important in determining which connections are pruned and which are retained. […] Brains whose biology makes them less efficient in particular and measurable aspects of processing seem to be at risk in specific areas of development. For example, when auditory processing is less efficient, this can carry a risk of later language impairment.”

“Joint attention has […] been suggested to be the basis of ‘natural pedagogy’ — a social learning system for imparting cultural knowledge. Once attention is shared by adult and infant on an object, an interaction around that object can begin. That interaction usually passes knowledge from carer to child. This is an example of responsive contingency in action — the infant shows an interest in something, the carer responds, and there is an interaction which enables learning. Taking the child’s focus of attention as the starting point for the interaction is very important for effective learning. Of course, skilled carers can also engineer situations in which babies or children will become interested in certain objects. This is the basis of effective play-centred learning. Novel toys or objects are always interesting.”

“Some research suggests that the pitch and amplitude (loudness) of a baby’s cry has been developed by evolution to prompt immediate action by adults. Babies’ cries appear to be designed to be maximally stressful to hear.”

“[T]he important factors in becoming a ‘preferred attachment figure’ are proximity and consistency.”

“[A]dults modify their actions in important ways when they interact with infants. These modifications appear to facilitate learning. ‘Infant-directed action’ is characterized by greater enthusiasm, closer proximity to the infant, greater repetitiveness, and longer gaze to the face than interactions with another adult. Infant-directed action also uses simplified actions with more turn-taking. […] carers tend to use a special tone of voice to talk to babies. This is more sing-song and attention-grabbing than normal conversational speech, and is called ‘infant-directed speech’ [IDS] or ‘Parentese’. All adults and children naturally adopt this special tone when talking to a baby, and babies prefer to listen to Parentese. […] IDS […] heightens pitch, exaggerates the length of words, and uses extra stress, exaggerating the rhythmic or prosodic aspects of speech. […] the heightened prosody increases the salience of acoustic cues to where words begin and end. […] So as well as capturing attention, IDS is emphasizing key linguistic cues that help language acquisition. […] The infant brain seems to cope with the ‘learning problem’ of which sounds matter by initially being sensitive to all the sound elements used by the different world languages. Via acoustic learning during the first year of life, the brain then specializes in the sounds that matter for the particular languages that it is being exposed to.”

“While crawling makes it difficult to carry objects with you on your travels, learning to walk enables babies to carry things. Indeed, walking babies spend most of their time selecting objects and taking them to show their carer, spending on average 30–40 minutes per waking hour interacting with objects. […] Self-generated movement is seen as critical for child development. […] most falling is adaptive, as it helps infants to gain expertise. Indeed, studies show that newly walking infants fall on average 17 times per hour. From the perspective of child psychology, the importance of ‘motor milestones’ like crawling and walking is that they enable greater agency (self-initiated and self-chosen behaviour) on the part of the baby.”

“Statistical learning enables the brain to learn the statistical structure of any event or object. […] Statistical structure is learned in all sensory modalities simultaneously. For example, as the child learns about birds, the child will learn that light body weight, having feathers, having wings, having a beak, singing, and flying, all go together. Each bird that the child sees may be different, but each bird will share the features of flying, having feathers, having wings, and so on. […] The connections that form between the different brain cells that are activated by hearing, seeing, and feeling birds will be repeatedly strengthened for these shared features, thereby creating a multi-modal neural network for that particular concept. The development of this network will be dependent on everyday experiences, and the networks will be richer if the experiences are more varied. This principle of learning supports the use of multi-modal instruction and active experience in nursery and primary school. […] knowledge about concepts is distributed across the entire brain. It is not stored separately in a kind of conceptual ‘dictionary’ or distinct knowledge system. Multi-modal experiences strengthen learning across the whole brain. Accordingly, multisensory learning is the most effective kind of learning for young children.”
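[The ‘repeated co-occurrence strengthens the same connections’ idea in the passage above can be illustrated with a toy model. The short Python sketch below is my own illustration, not anything from the book, and the feature names are made up; it simply counts how often pairs of features are experienced together, so that features shared across many exemplars (feathers, wings, beak) end up with the strongest links – a crude analogue of the multi-modal ‘bird’ network described above. – US]

# Toy sketch (illustrative only): co-occurrence learning over feature sets.
# Each call to experience() represents perceiving one exemplar; links between
# features perceived together are strengthened, so widely shared features
# accumulate the strongest associations.
from collections import defaultdict
from itertools import combinations

association = defaultdict(float)  # (feature_a, feature_b) -> link strength

def experience(features, increment=1.0):
    """Strengthen the pairwise links between all features seen together."""
    for a, b in combinations(sorted(features), 2):
        association[(a, b)] += increment

# Three different birds; each shares the core features but varies otherwise.
experience({"feathers", "wings", "beak", "flying", "small", "brown"})
experience({"feathers", "wings", "beak", "flying", "large", "white"})
experience({"feathers", "wings", "beak", "singing", "small", "red"})

# The shared features end up most strongly linked -- the emerging 'bird' concept.
strongest = sorted(association.items(), key=lambda kv: -kv[1])[:5]
for pair, strength in strongest:
    print(pair, strength)

(In this toy picture, the pruning mentioned in the quote would correspond to letting links that are rarely reinforced decay toward zero.)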

“Babies learn words most quickly when an adult both points to and names a new item.”

“…direct teaching of scientific reasoning skills helps children to reason logically independently of their pre-existing beliefs. This is more difficult than it sounds, as pre-existing beliefs exert strong effects. […] in many social situations we are advantaged if we reason on the basis of our pre-existing beliefs. This is one reason that stereotypes form”. [Do remember on a related note that stereotype accuracy is one of the largest and most replicable effects in all of social psychology – US].

“Some gestures have almost universal meaning, like waving goodbye. Babies begin using gestures like this quite early on. Between 10 and 18 months of age, gestures become frequent and are used extensively for communication. […] After around 18 months, the use of gesture starts declining, as vocalization becomes more and more dominant in communication. […] By [that time], most children are entering the two-word stage, when they become able to combine words. […] At this age, children often use a word that they know to refer to many different entities whose names are not yet known. They might use the word ‘bee’ for insects that are not bees, or the word ‘dog’ to refer to horses and cows. Experiments have shown that this is not a semantic confusion. Toddlers do not think that horses and cows are a type of dog. Rather, they have limited language capacities, and so they stretch their limited vocabularies to communicate as flexibly as possible. […] there is a lot of similarity across cultures at the two-word stage regarding which words are combined. Young children combine words to draw attention to objects (‘See doggie!’), to indicate ownership (‘My shoe’), to point out properties of objects (‘Big doggie’), to indicate plurality (‘Two cookie’), and to indicate recurrence (‘Other cookie’). […] It is only as children learn grammar that some divergence is found across languages. This is probably because different languages have different grammatical formats for combining words. […] grammatical learning emerges naturally from extensive language experience (of the utterances of others) and from language use (the novel utterances of the child, which are re-formulated by conversational partners if they are grammatically incorrect).”

“The social and communicative functions of language, and children’s understanding of them, are captured by pragmatics. […] pragmatic aspects of conversation include taking turns, and making sure that the other person has sufficient knowledge of the events being discussed to follow what you are saying. […] To learn about pragmatics, children need to go beyond the literal meaning of the words and make inferences about communicative intent. A conversation is successful when a child has recognized the type of social situation and applied the appropriate formula. […] Children with autism, who have difficulties with social cognition and in reading the mental states of others, find learning the pragmatics of conversation particularly difficult. […] Children with autism often show profound delays in social understanding and do not ‘get’ many social norms. These children may behave quite inappropriately in social settings […] Children with autism may also show very delayed understanding of emotions and of intentions. However, this does not make them anti-social, rather it makes them relatively ineffective at being pro-social.”

“When children have siblings, there are usually developmental advantages for social cognition and psychological understanding. […] Discussing the causes of disputes appears to be particularly important for developing social understanding. Young children need opportunities to ask questions, argue with explanations, and reflect on why other people behave in the way that they do. […] Families that do not talk about the intentions and emotions of others and that do not explicitly discuss social norms will create children with reduced social understanding.”

“[C]hildren, like adults, are more likely to act in pro-social ways to ingroup members. […] Social learning of cultural ‘ingroups’ appears to develop early in children as part of general socio-moral development. […] being loyal to one’s ‘ingroup’ is likely to make the child more popular with the other members of that group. Being in a group thus requires the development of knowledge about how to be loyal, about conforming to pressure and about showing ingroup bias. For example, children may need to make fine judgements about who is more popular within the group, so that they can favour friends who are more likely to be popular with the rest of the group. […] even children as young as 6 years will show more positive responding to the transgression of social rules by ingroup members compared to outgroup members, particularly if they have relatively well-developed understanding of emotions and intentions.”

“Good language skills improve memory, because children with better language skills are able to construct narratively coherent and extended, temporally organized representations of experienced events.”

“Once children begin reading, […] letter-sound knowledge and ‘phonemic awareness’ (the ability to divide words into the single sound elements represented by letters) become the most important predictors of reading development. […] phonemic awareness largely develops as a consequence of being taught to read and write. Research shows that illiterate adults do not have phonemic awareness. […] brain imaging shows that learning to read ‘re-maps’ phonology in the brain. We begin to hear words as sequences of ‘phonemes’ only after we learn to read.”

October 29, 2017 | Books, Language, Neurology, Psychology

Quotes

i. “…when a gift is deserved, it is not a gift but a payment.” (Gene Wolfe, The shadow of the torturer)

ii. “All the greatest blessings are a source of anxiety, and at no time is fortune less wisely trusted than when it is best […] everything that comes to us from chance is unstable, and the higher it rises, the more liable it is to fall.” (Seneca the Younger, On the shortness of life)

iii. “Debunking bad science should be a constant obligation of the science community, even if it takes time away from serious research or seems to be a losing battle.” (Martin Gardner)

iv. “Happy is he that grows wise by other men’s harms.” (James Howell)

v. “The deadliest foe to virtue would be complete self-knowledge.” (F. H. Bradley)

vi. “A good book is never exhausted. It goes on whispering to you from the wall.” (Anatole Broyard)

vii. “The great writers of aphorisms read as if they had all known each other very well.” (Elias Canetti)

viii. “The story of your youth must not turn into a catalog of what became important in your later life. It must also contain the dissipation, the failure, and the waste.” (-ll-)

ix. “You keep taking note of whatever confirms your ideas — better to write down what refutes and weakens them!” (-ll-)

x. “Windbags can be right. Aphorists can be wrong. It is a tough world.” (James Fenton)

xi. “Science should be distinguished from technique and its scientific instrumentation, technology. Science is practised by scientists, and techniques by ‘engineers’ — a term that in our terminology includes physicians, lawyers, and teachers. If for the scientist knowledge and cognition are primary, it is action and construction that characterises the work of the engineer, though in fact his activity may be based on science. In history, technique often preceded science.” (Hans Freudenthal)

xii. “There are some books which cannot be adequately reviewed for twenty or thirty years after they come out.” (John Morley)

xiii. “Success depends on three things: who says it, what he says, how he says it; and of these three things, what he says is the least important.” (-ll-)

xiv. “Every uneducated person is a caricature of himself.” (Karl Wilhelm Friedrich Schlegel)

xv. “It is surely one of the strangest of our propensities to mark out those we love best for the worst usage; yet we do, all of us. We can take any freedom with a friend; we stand on no ceremony with a friend.” (Samuel Laman Blanchard)

xvi. “Everybody’s word is worth Nobody’s taking.” (-ll-)

xvii. “Credulity lives next door to Gossip.” (-ll-)

xviii. “As success converts treason into legitimacy, so belief converts fiction into fact” (-ll-)

xix. “In academia much bogus knowledge is tolerated in the name of academic freedom – which is like allowing for the sale of contaminated food in the name of free enterprise. I submit that such tolerance is suicidal: that the serious students must be protected against the “anything goes” crowd.” (Mario Bunge)

xx. “At all times pseudoprofound aphorisms have been more popular than rigorous arguments.” (-ll-)

October 28, 2017 | Books, Quotes/aphorisms