Econstudentlog

Nuclear Power (II)

This is my second and last post about the book. Some more links and quotes below.

“Many of the currently operating reactors were built in the late 1960s and 1970s. With a global hiatus on nuclear reactor construction following the Three Mile Island incident and the Chernobyl disaster, there is a dearth of nuclear power replacement capacity as the present fleet faces decommissioning. Nuclear power stations, like coal-, gas-, and oil-fired stations, produce heat to generate electricity and all require water for cooling. The US Geological Survey estimates that this use of water for cooling power stations accounts for over 3% of all water consumption. Most nuclear power plants are built close to the sea so that the ocean can be used as a heat dump. […] The need for such large quantities of water inhibits the use of nuclear power in arid regions of the world. […] The higher the operating temperature, the greater the water usage. […] [L]arge coal, gas and nuclear plants […] can consume millions of litres per hour”.

“A nuclear reactor is utilizing the strength of the force between nucleons while hydrocarbon burning is relying on the chemical bonding between molecules. Since the nuclear bonding is of the order of a million times stronger than the chemical bonding, the mass of hydrocarbon fuel necessary to produce a given amount of energy is about a million times greater than the equivalent mass of nuclear fuel. Thus, while a coal station might burn millions of tonnes of coal per year, a nuclear station with the same power output might consume a few tonnes.”
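
To put a rough number on the ‘million times’ claim, here is a small Python sketch (mine, not the book’s) using representative values of roughly 24 MJ/kg for coal and about 200 MeV released per U-235 fission:

```python
# Rough back-of-the-envelope check of the 'million times' claim, using
# representative values: coal at ~24 MJ/kg and ~200 MeV released per U-235 fission.
MEV_TO_J = 1.602e-13          # joules per MeV
AVOGADRO = 6.022e23           # atoms per mole
U235_MOLAR_MASS_KG = 0.235    # kg per mole

energy_per_fission_J = 200 * MEV_TO_J
fissions_per_kg = AVOGADRO / U235_MOLAR_MASS_KG
u235_energy_per_kg = energy_per_fission_J * fissions_per_kg   # ~8e13 J/kg
coal_energy_per_kg = 24e6                                     # ~2.4e7 J/kg

ratio = u235_energy_per_kg / coal_energy_per_kg
print(f"U-235 fission:   {u235_energy_per_kg:.2e} J/kg")
print(f"Coal combustion: {coal_energy_per_kg:.2e} J/kg")
print(f"Ratio: ~{ratio:.1e}")   # a few million, consistent with the quote
```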

“There are a number of reasons why one might wish to reprocess the spent nuclear fuel. These include: to produce plutonium either for nuclear weapons or, increasingly, as a fuel-component for fast reactors; the recycling of all actinides for fast-breeder reactors, closing the nuclear fuel cycle, greatly increasing the energy extracted from natural uranium; the recycling of plutonium in order to produce mixed oxide fuels for thermal reactors; recovering enriched uranium from spent fuel to be recycled through thermal reactors; to extract expensive isotopes which are of value to medicine, agriculture, and industry. An integral part of this process is the management of the radioactive waste. Currently 40% of all nuclear fuel is obtained by reprocessing. […] The La Hague site is the largest reprocessing site in the world, with over half the global capacity at 1,700 tonnes of spent fuel per year. […] The world’s largest user of nuclear power, the USA, currently does not reprocess its fuel and hence produces [large] quantities of radioactive waste. […] The principal reprocessors of radioactive waste are France and the UK. Both countries receive material from other countries and after reprocessing return the raffinate to the country of origin for final disposition.”

“Nearly 45,000 tonnes of uranium are mined annually. More than half comes from the three largest producers, Canada, Kazakhstan, and Australia.”

“The designs of nuclear installations are required to be passed by national nuclear licensing agencies. These include strict safety and security features. The international standard for the integrity of a nuclear power plant is that it would withstand the crash of a Boeing 747 Jumbo Jet without the release of hazardous radiation beyond the site boundary. […] At Fukushima, the design was to current safety standards, taking into account the possibility of a severe earthquake; what had not been allowed for was the simultaneous tsunami strike.”

“The costing of nuclear power is notoriously controversial. Opponents point to the past large investments made in nuclear research and would like to factor this into the cost. There are always arguments about whether or not decommissioning costs and waste-management costs have been properly accounted for. […] which electricity source is most economical will vary from country to country […]. As with all industrial processes, there can be economies of scale. In the USA, and particularly in the UK, these economies of scale were never fully realized. In the UK, while several Magnox and AGR reactors were built, no two were of exactly the same design, resulting in no economies in construction costs, component manufacture, or staff training programmes. The issue is compounded by the high cost of licensing new designs. […] in France, the Regulatory Commission agreed a standard design for all plants and used a safety engineering process similar to that used for licensing aircraft. Public debate was thereafter restricted to local site issues. Economies of scale were achieved.”

“[C]onstruction costs […] are the largest single factor in the cost of nuclear electricity generation. […] Because the raw fuel is such a small fraction of the cost of nuclear power generation, the cost of electricity is not very sensitive to the cost of uranium, unlike the fossil fuels, for which fuel can represent up to 70% of the cost. Operating costs for nuclear plants have fallen dramatically as the French practice of standardization of design has spread. […] Generation III+ reactors are claimed to be half the size and capable of being built in much shorter times than the traditional PWRs. The 2008 contracted capital cost of building new plants containing two AP1000 reactors in the USA is around $10–$14billion, […] There is considerable experience of decommissioning of nuclear plants. In the USA, the cost of decommissioning a power plant is approximately $350 million. […] In France and Sweden, decommissioning costs are estimated to be 10–15% of construction costs and are included in the price charged for electricity. […] The UK has by far the highest estimates for decommissioning which are set at £1 billion per reactor. This exceptionally high figure is in part due to the much larger reactor core associated with graphite moderated piles. […] It is clear that in many countries nuclear-generated electricity is commercially competitive with fossil fuels despite the need to include the cost of capital and all waste disposal and decommissioning (factors that are not normally included for other fuels). […] At the present time, without the market of taxes and grants, electricity generated from renewable sources is generally more expensive than that from nuclear power or fossil fuels. This leaves the question: if nuclear power is so competitive, why is there not a global rush to build new nuclear power stations? The answer lies in the time taken to recoup investments. Investors in a new gas-fired power station can expect to recover their investment within 15 years. Because of the high capital start-up costs, nuclear power stations yield a slower rate of return, even though over the lifetime of the plant the return may be greater.”
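
The point about payback times can be illustrated with a toy cash-flow comparison. The sketch below is mine and every number in it is invented purely for illustration; it only shows why a high-capital, low-running-cost plant breaks even later but can return more over a long lifetime:

```python
# Toy cash-flow comparison of the payback argument. All numbers are invented
# (think of them as billions of currency units per plant): gas has a low capital
# cost and modest net revenue, nuclear a high capital cost, higher net revenue,
# and a longer operating life. No discounting, purely illustrative.
def cumulative_cash_flow(capital, annual_net_revenue, lifetime_years):
    """Cumulative net cash flow at the end of each operating year."""
    return [annual_net_revenue * year - capital for year in range(1, lifetime_years + 1)]

def payback_year(cash_flows):
    """First year in which cumulative cash flow turns non-negative (None if never)."""
    for year, balance in enumerate(cash_flows, start=1):
        if balance >= 0:
            return year
    return None

gas = cumulative_cash_flow(capital=1.0, annual_net_revenue=0.08, lifetime_years=30)
nuclear = cumulative_cash_flow(capital=6.0, annual_net_revenue=0.20, lifetime_years=60)

print("Gas: payback in year", payback_year(gas), "- lifetime net return", round(gas[-1], 2))
print("Nuclear: payback in year", payback_year(nuclear), "- lifetime net return", round(nuclear[-1], 2))
```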

“Throughout the 20th century, the population and GDP growth combined to drive the [global] demand for energy to increase at a rate of 4% per annum […]. The most conservative estimate is that the demand for energy will see global energy requirements double between 2000 and 2050. […] The demand for electricity is growing at twice the rate of the demand for energy. […] More than two-thirds of all electricity is generated by burning fossil fuels. […] The most rapidly growing renewable source of electricity generation is wind power […] wind is an intermittent source of electricity. […] The intermittency of wind power leads to [a] problem. The grid management has to supply a steady flow of electricity. Intermittency requires a heavy overhead on grid management, and there are serious concerns about the ability of national grids to cope with more than a 20% contribution from wind power. […] As for the other renewables, solar and geothermal power, significant electricity generation will be restricted to latitudes 40°S to 40°N and regions of suitable geological structures, respectively. Solar power and geothermal power are expected to increase but will remain a small fraction of the total electricity supply. […] In most industrialized nations, the current electricity supply is via a regional, national, or international grid. The electricity is generated in large (~1GW) power stations. This is a highly efficient means of electricity generation and distribution. If the renewable sources of electricity generation are to become significant, then a major restructuring of the distribution infrastructure will be necessary. While local ‘microgeneration’ can have significant benefits for small communities, it is not practical for the large-scale needs of big industrial cities in which most of the world’s population live.”

“Electricity cannot be stored in large quantities. If the installed generating capacity is designed to meet peak demand, there will be periods when the full capacity is not required. In most industrial countries, the average demand is only about one-third of peak consumption.”

Links:

Nuclear reprocessing. La Hague site. Radioactive waste. Yucca Mountain nuclear waste repository.
Bismuth phosphate process.
Nuclear decommissioning.
Uranium mining. Open-pit mining.
Wigner effect (Wigner heating). Windscale fire. Three Mile Island accident. Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Fail-safe (engineering).
Treaty on the Non-Proliferation of Nuclear Weapons.
Economics of nuclear power plants.
Fusion power. Tokamak. ITER. High Power laser Energy Research facility (HiPER).
Properties of plasma.
Klystron.
World energy consumption by fuel source. Renewable energy.


December 16, 2017 | Books, Chemistry, Economics, Engineering, Physics

Occupational Epidemiology (III)

This will be my last post about the book.

Some observations from the final chapters:

“Often there is confusion about the difference between systematic reviews and metaanalyses. A meta-analysis is a quantitative synthesis of two or more studies […] A systematic review is a synthesis of evidence on the effects of an intervention or an exposure which may also include a meta-analysis, but this is not a prerequisite. It may be that the results of the studies which have been included in a systematic review are reported in such a way that it is impossible to synthesize them quantitatively. They can then be reported in a narrative manner.10 However, a meta-analysis always requires a systematic review of the literature. […] There is a long history of debate about the value of meta-analysis for occupational cohort studies or other occupational aetiological studies. In 1994, Shapiro argued that ‘meta-analysis of published non-experimental data should be abandoned’. He reasoned that ‘relative risks of low magnitude (say, less than 2) are virtually beyond the resolving power of the epidemiological microscope because we can seldom demonstrably eliminate all sources of bias’.13 Because the pooling of studies in a meta-analysis increases statistical power, the pooled estimate may easily become significant and thus incorrectly taken as an indication of causality, even though the biases in the included studies may not have been taken into account. Others have argued that the method of meta-analysis is important but should be applied appropriately, taking into account the biases in individual studies.14 […] We believe that the synthesis of aetiological studies should be based on the same general principles as for intervention studies, and the existing methods adapted to the particular challenges of cohort and case-control studies. […] Since 2004, there is a special entity, the Cochrane Occupational Safety and Health Review Group, that is responsible for the preparing and updating of reviews of occupational safety and health interventions […]. There were over 100 systematic reviews on these topics in the Cochrane Library in 2012.”
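
As an illustration of why pooling increases statistical power (and why Shapiro worried about it), here is a minimal fixed-effect, inverse-variance pooling sketch in Python; the study results are invented:

```python
import math

# Fixed-effect (inverse-variance) pooling of log relative risks from a few
# hypothetical studies, illustrating how meta-analysis narrows the confidence
# interval even when the individual studies are not significant on their own.
studies = [  # (relative risk, 95% CI lower, 95% CI upper) -- invented numbers
    (1.4, 0.9, 2.2),
    (1.2, 0.8, 1.8),
    (1.5, 0.9, 2.5),
]

weights, weighted_logs = [], []
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1.0 / se**2                                  # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_rr)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_rr = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se), math.exp(pooled_log + 1.96 * pooled_se))
print(f"Pooled RR = {pooled_rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```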

“The believability of a systematic review’s results depends largely on the quality of the included studies. Therefore, assessing and reporting on the quality of the included studies is important. For intervention studies, randomized trials are regarded as of higher quality than observational studies, and the conduct of the study (e.g. in terms of response rate or completeness of follow-up) also influences quality. A conclusion derived from a few high-quality studies will be more reliable than when the conclusion is based on even a large number of low-quality studies. Some form of quality assessment is nowadays commonplace in intervention reviews but is still often missing in reviews of aetiological studies. […] It is tempting to use quality scores, such as the Jadad scale for RCTs34 and the Downs and Black scale for non-RCT intervention studies35 but these, in their original format, are insensitive to variation in the importance of risk areas for a given research question. The score system may give the same value to two studies (say, 10 out of 12) when one, for example, lacked blinding and the other did not randomize, thus implying that their quality is equal. This would not be a problem if randomization and blinding were equally important for all questions in all reviews, but this is not the case. For RCTs an important development in this regard has been the Cochrane risk of bias tool.36 This is a checklist of six important domains that have been shown to be important areas of bias in RCTs: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting.”

“[R]isks of bias tools developed for intervention studies cannot be used for reviews of aetiological studies without relevant modification. This is because, unlike interventions, exposures are usually more complicated to assess when we want to attribute the outcome to them alone. These scales do not cover all items that may need assessment in an aetiological study, such as confounding and information bias relating to exposures. […] Surprisingly little methodological work has been done to develop validated tools for aetiological epidemiology and most tools in use are not validated,38 […] Two separate checklists, for observational studies of incidence and prevalence and for risk factor assessment, have been developed and validated recently.40 […] Publication and other reporting bias is probably a much bigger issue for aetiological studies than for intervention studies. This is because, for clinical trials, the introduction of protocol registration, coupled with the regulatory system for new medications, has helped in assessing and preventing publication and reporting bias. No such checks exist for observational studies.”

“Most ill health that arises from occupational exposures can also arise from nonoccupational exposures, and the same type of exposure can occur in occupational and non-occupational settings. With the exception of malignant mesothelioma (which is essentially only caused by exposure to asbestos), there is no way to determine which exposure caused a particular disorder, nor where the causative exposure occurred. This means that usually it is not possible to determine the burden just by counting the number of cases. Instead, approaches to estimating this burden have been developed. There are also several ways to define burden and how best to measure it.”

“The population attributable fraction (PAF) is the proportion of cases that would not have occurred in the absence of an occupational exposure. It can be estimated by combining two measures — a risk estimate (usually relative risk (RR) or odds ratio) of the disorder of interest that is associated with exposure to the substance of concern; and an estimate of the proportion of the population exposed to the substance at work (p(E)). This approach has been used in several studies, particularly for estimating cancer burden […] There are several possible equations that can be used to calculate the PAF, depending on the available data […] PAFs cannot in general be combined by summing directly because: (1) summing PAFs for overlapping exposures (i.e. agents to which the same ‘ever exposed’ workers may have been exposed) may give an overall PAF exceeding 100%, and (2) summing disjoint (not concurrently occurring) exposures also introduces upward bias. Strategies to avoid this include partitioning exposed numbers between overlapping exposures […] or estimating only for the ‘dominant’ carcinogen with the highest risk. Where multiple exposures remain, one approach is to assume that the exposures are independent and their joint effects are multiplicative. The PAFs can then be combined to give an overall PAF for that cancer using a product sum. […] Potential sources of bias for PAFs include inappropriate choice of risk estimates, imprecision in the risk estimates and estimates of proportions exposed, inaccurate risk exposure period and latency assumptions, and a lack of separate risk estimates in some cases for women and/or cancer incidence. In addition, a key decision is the choice of which diseases and exposures are to be included.”
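
A minimal sketch of the PAF arithmetic described above, using Levin’s formula for single exposures and the multiplicative product-sum combination for exposures assumed independent; the relative risks and exposure prevalences are invented:

```python
# Sketch of the PAF arithmetic: Levin's formula for a single exposure, and the
# multiplicative product-sum combination across exposures assumed to act
# independently. The RRs and exposure prevalences below are invented.
def paf(rr, p_exposed):
    """Population attributable fraction for one exposure (Levin's formula)."""
    return p_exposed * (rr - 1) / (1 + p_exposed * (rr - 1))

def combined_paf(pafs):
    """Combine PAFs for independent exposures via the product-sum formula."""
    product = 1.0
    for f in pafs:
        product *= (1 - f)
    return 1 - product

exposures = {"agent A": (1.5, 0.10), "agent B": (2.0, 0.05), "agent C": (1.2, 0.20)}
individual = {name: paf(rr, p) for name, (rr, p) in exposures.items()}
print(individual)
print("Combined PAF:", round(combined_paf(individual.values()), 3))
# Note: simply summing the individual PAFs would overstate the combined burden.
```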

“The British Cancer Burden study is perhaps the most detailed study of occupationally related cancers in that it includes all those relevant carcinogens classified at the end of 2008 […] In the British study the attributable fractions ranged from less than 0.01% to 95% overall, the most important cancer sites for occupational attribution being, for men, mesothelioma (97%), sinonasal (46%), lung (21.1%), bladder (7.1%), and non-melanoma skin cancer (7.1%) and, for women, mesothelioma (83%), sinonasal (20.1%), lung (5.3%), breast (4.6%), and nasopharynx (2.5%). Occupation also contributed 2% or more overall to cancers of the larynx, oesophagus, and stomach, and soft tissue sarcoma with, in addition for men, melanoma of the eye (due to welding), and non-Hodgkin lymphoma. […] The overall results from the occupational risk factors component of the Global Burden of Disease 2010 study illustrate several important aspects of burden studies.14 Of the estimated 850 000 occupationally related deaths worldwide, the top three causes were: (1) injuries (just over a half of all deaths); (2) particulate matter, gases, and fumes leading to COPD; and (3) carcinogens. When DALYs were used as the burden measure, injuries still accounted for the highest proportion (just over one-third), but ergonomic factors leading to low back pain resulted in almost as many DALYs, and both were almost an order of magnitude higher than the DALYs from carcinogens. The difference in relative contributions of the various risk factors between deaths and DALYs arises because of the varying ages of those affected, and the differing chronicity of the resulting conditions. Both measures are valid, but they represent a different aspect of the burden arising from the hazardous exposures […]. Both the British and Global Burden of Disease studies draw attention to the important issues of: (1) multiple occupational carcinogens causing specific types of cancer, for example, the British study evaluated 21 lung carcinogens; and (2) specific carcinogens causing several different cancers, for example, IARC now defines asbestos as a group 1 or 2A carcinogen for seven cancer sites. These issues require careful consideration for burden estimation and for prioritizing risk reduction strategies. […] The long latency of many cancers means that estimates of current burden are based on exposures occurring in the past, often much higher than those existing today. […] long latency [also] means that risk reduction measures taken now will take a considerable time to be reflected in reduced disease incidence.”

“Exposures and effects are linked by dynamic processes occurring across time. These processes can often be usefully decomposed into two distinct biological relationships, each with several components: 1. The exposure-dose relationship […] 2. The dose-effect relationship […] These two component relationships are sometimes represented by two different mathematical models: a toxicokinetic model […], and a disease process model […]. Depending on the information available, these models may be relatively simple or highly complex. […] Often the various steps in the disease process do not occur at the same rate, some of these processes are ‘fast’, such as cell killing, while others are ‘slow’, such as damage repair. Frequently a few slow steps in a process become limiting to the overall rate, which sets the temporal pattern for the entire exposure-response relationship. […] It is not necessary to know the full mechanism of effects to guide selection of an exposure-response model or exposure metric. Because of the strong influence of the rate-limiting steps, often it is only necessary to have observations on the approximate time course of effects. This is true whether the effects appear to be reversible or irreversible, and whether damage progresses proportionately with each unit of exposure (actually dose) or instead occurs suddenly, and seemingly without regard to the amount of exposure, such as an asthma attack.”
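
As a toy example of the ‘exposure-dose’ part of this, here is a minimal one-compartment toxicokinetic sketch (mine, not from the book) in which a slow first-order elimination step sets the time scale over which dose accumulates:

```python
import numpy as np

# Minimal one-compartment toxicokinetic sketch: body burden B(t) under a
# constant exposure (uptake rate k_in) with first-order elimination (k_out).
# The slow elimination step sets the time scale on which dose accumulates:
# B approaches the steady state k_in / k_out with half-time ln(2) / k_out.
def body_burden(t_hours, k_in=1.0, k_out=0.05, b0=0.0):
    steady_state = k_in / k_out
    return steady_state + (b0 - steady_state) * np.exp(-k_out * t_hours)

t = np.array([0, 8, 24, 72, 168])  # hours
print(dict(zip(t.tolist(), np.round(body_burden(t), 2))))
```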

“In this chapter, we argue that formal disease process models have the potential to improve the sensitivity of epidemiology for detecting new and emerging occupational and environmental risks where there is limited mechanistic information. […] In our approach, these models are often used to create exposure or dose metrics, which are in turn used in epidemiological models to estimate exposure-disease associations. […] Our goal is a methodology to formulate strong tests of our exposure-disease hypotheses in which a hypothesis is developed in as much biological detail as it can be, expressed in a suitable dynamic (temporal) model, and tested by its fit with a rich data set, so that its flaws and misperceptions of reality are fully displayed. Rejecting such a fully developed biological hypothesis is more informative than either rejecting or failing to reject a generic or vaguely defined hypothesis. For example, the hypothesis ‘truck drivers have more risk of lung cancer than non-drivers’13 is of limited usefulness for prevention […]. Hypothesizing that a particular chemical agent in truck exhaust is associated with lung cancer — whether the hypothesis is refuted or supported by data — is more likely to lead to successful prevention activities. […] we believe that the choice of models against which to compare the data should, so far as possible, be guided by explicit hypotheses about the underlying biological processes. In other words, you can get as much as possible from epidemiology by starting from well-thought-out hypotheses that are formalized as mathematical models into which the data will be placed. The disease process models can serve this purpose.2″

“The basic idea of empirical Bayes (EB) and semiBayes (SB) adjustments for multiple associations is that the observed variation of the estimated relative risks around their geometric mean is larger than the variation of the true (but unknown) relative risks. In SB adjustments, an a priori value for the extra variation is chosen which assigns a reasonable range of variation to the true relative risks and this value is then used to adjust the observed relative risks.7 The adjustment consists in shrinking outlying relative risks towards the overall mean (of the relative risks for all the different exposures being considered). The larger the individual variance of the relative risks, the stronger the shrinkage, so that the shrinkage is stronger for less reliable estimates based on small numbers. Typical applications in which SB adjustments are a useful alternative to traditional methods of adjustment for multiple comparisons are in large occupational surveillance studies, where many relative risks are estimated with few or no a priori beliefs about which associations might be causal.7″

“The advantage of [the SB adjustment] approach over classical Bonferroni corrections is that on the average it produces more valid estimates of the odds ratio for each occupation/exposure. If we do a study which involves assessing hundreds of occupations, the problem is not only that we get many ‘false positive’ results by chance. A second problem is that even the ‘true positives’ tend to have odds ratios that are too high. For example, if we have a group of occupations with true odds ratios around 1.5, then the ones that stand out in the analysis are those with the highest odds ratios (e.g. 2.5) which will be elevated partly because of real effects and partly by chance. The Bonferroni correction addresses the first problem (too many chance findings) but not the second, that the strongest odds ratios are probably too high. In contrast, SB adjustment addresses the second problem by correcting for the anticipated regression to the mean that would have occurred if the study had been repeated, and thereby on the average produces more valid odds ratio estimates for each occupation/exposure. […] most epidemiologists write their Methods and Results sections as frequentists and their Introduction and Discussion sections as Bayesians. In their Methods and Results sections, they ‘test’ their findings as if their data are the only data that exist. In the Introduction and Discussion, they discuss their findings with regard to their consistency with previous studies, as well as other issues such as biological plausibility. This creates tensions when a small study has findings which are not statistically significant but which are consistent with prior knowledge, or when a study finds statistically significant findings which are inconsistent with prior knowledge. […] In some (but not all) instances, things can be made clearer if we include Bayesian methods formally in the Methods and Results sections of our papers”.
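
A stripped-down sketch of the shrinkage step described above (a simplified semi-Bayes adjustment using an unweighted overall mean; all numbers invented):

```python
import math

# Semi-Bayes shrinkage sketch: each estimated log odds ratio is pulled towards
# the overall mean, more strongly when its own variance is large relative to
# the prior variance tau2 chosen for the true effects. Numbers are invented.
tau2 = 0.25 ** 2   # a priori variance assigned to the true log odds ratios

estimates = [   # (occupation, estimated OR, variance of log OR)
    ("occupation A", 2.5, 0.30),   # imprecise estimate -> shrunk a lot
    ("occupation B", 1.6, 0.05),   # precise estimate -> shrunk a little
    ("occupation C", 0.7, 0.20),
]

log_ors = [math.log(or_) for _, or_, _ in estimates]
overall_mean = sum(log_ors) / len(log_ors)

for (name, or_, var), log_or in zip(estimates, log_ors):
    weight = tau2 / (tau2 + var)           # fraction of the estimate retained
    shrunk = overall_mean + weight * (log_or - overall_mean)
    print(f"{name}: OR {or_:.2f} -> shrunk OR {math.exp(shrunk):.2f}")
```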

“In epidemiology, risk is most often quantified in terms of relative risk — i.e. the ratio of the probability of an adverse outcome in someone with a specified exposure to that in someone who is unexposed, or exposed at a different specified level. […] Relative risks can be estimated from a wider range of study designs than individual attributable risks. They have the advantage that they are often stable across different groups of people (e.g. of different ages, smokers, and non-smokers) which makes them easier to estimate and quantify. Moreover, high relative risks are generally unlikely to be explained by unrecognized bias or confounding. […] However, individual attributable risks are a more relevant measure by which to quantify the impact of decisions in risk management on individuals. […] Individual attributable risk is the difference in the probability of an adverse outcome between someone with a specified exposure and someone who is unexposed, or exposed at a different specified level. It is the critical measure when considering the impact of decisions in risk management on individuals. […] Population attributable risk is the difference in the frequency of an adverse outcome between a population with a given distribution of exposures to a hazardous agent, and that in a population with no exposure, or some other specified distribution of exposures. It depends on the prevalence of exposure at different levels within the population, and on the individual attributable risk for each level of exposure. It is a measure of the impact of the agent at a population level, and is relevant to decisions in risk management for populations. […] Population attributable risks are highest when a high proportion of a population is exposed at levels which carry high individual attributable risks. On the other hand, an exposure which carries a high individual attributable risk may produce only a small population attributable risk if the prevalence of such exposure is low.”
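
A small worked example of the three measures, with invented incidences and exposure prevalence:

```python
# Worked example of the three risk measures discussed above, using invented
# incidences: risk of the outcome is 2 per 1,000 in unexposed and 6 per 1,000
# in exposed workers, and 10% of the population is exposed.
risk_unexposed = 0.002
risk_exposed = 0.006
prevalence_exposed = 0.10

relative_risk = risk_exposed / risk_unexposed                       # 3.0
individual_attributable_risk = risk_exposed - risk_unexposed        # 0.004
population_risk = (prevalence_exposed * risk_exposed
                   + (1 - prevalence_exposed) * risk_unexposed)
population_attributable_risk = population_risk - risk_unexposed     # 0.0004

print(relative_risk, individual_attributable_risk, population_attributable_risk)
# A high individual attributable risk can still yield a small population
# attributable risk when the prevalence of exposure is low.
```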

“Hazard characterization entails quantification of risks in relation to routes, levels, and durations of exposure. […] The findings from individual studies are often used to determine a no observed adverse effect level (NOAEL), lowest observed effect level (LOEL), or benchmark dose lower 95% confidence limit (BMDL) for relevant effects […] [NOAEL] is the highest dose or exposure concentration at which there is no discernible adverse effect. […] [LOEL] is the lowest dose or exposure concentration at which a discernible effect is observed. If comparison with unexposed controls indicates adverse effects at all of the dose levels in an experiment, a NOAEL cannot be derived, but the lowest dose constitutes a LOEL, which might be used as a comparator for estimated exposures or to derive a toxicological reference value […] A BMDL is defined in relation to a specified adverse outcome that is observed in a study. Usually, this is the outcome which occurs at the lowest levels of exposure and which is considered critical to the assessment of risk. Statistical modelling is applied to the experimental data to estimate the dose or exposure concentration which produces a specified small level of effect […]. The BMDL is the lower 95% confidence limit for this estimate. As such, it depends both on the toxicity of the test chemical […], and also on the sample sizes used in the study (other things being equal, larger sample sizes will produce more precise estimates, and therefore higher BMDLs). In addition to accounting for sample size, BMDLs have the merit that they exploit all of the data points in a study, and do not depend so critically on the spacing of doses that is adopted in the experimental design (by definition a NOAEL or LOEL can only be at one of the limited number of dose levels used in the experiment). On the other hand, BMDLs can only be calculated where an adverse effect is observed. Even if there are no clear adverse effects at any dose level, a NOAEL can be derived (it will be the highest dose administered).”
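
Here is a rough parametric-bootstrap sketch of a benchmark dose calculation on invented quantal data; it is not the formal procedure used by regulatory BMD software, just an illustration of the extra-risk definition and of why the BMDL reflects sample size:

```python
import numpy as np
from scipy.optimize import minimize

# Parametric-bootstrap sketch of a benchmark dose calculation for invented
# quantal dose-response data: fit a logistic model by maximum likelihood,
# find the dose giving 10% extra risk over background (the BMD), and take
# the 5th percentile of bootstrap BMDs as a rough BMDL.
doses = np.array([0.0, 10.0, 30.0, 100.0, 300.0])
n = np.array([50, 50, 50, 50, 50])
affected = np.array([1, 2, 5, 15, 35])
BMR = 0.10  # benchmark response: 10% extra risk

def prob(params, d):
    a, b = params
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

def neg_log_lik(params, d, n, k):
    p = np.clip(prob(params, d), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

def fit_params(d, n, k):
    result = minimize(neg_log_lik, x0=[-3.0, 0.01], args=(d, n, k), method="Nelder-Mead")
    return result.x

def benchmark_dose(params):
    a, b = params
    p0 = prob(params, 0.0)
    p_target = p0 + BMR * (1 - p0)        # extra-risk definition of the BMD
    return (np.log(p_target / (1 - p_target)) - a) / b

params_hat = fit_params(doses, n, affected)
p_hat = prob(params_hat, doses)
rng = np.random.default_rng(1)
boot = [benchmark_dose(fit_params(doses, n, rng.binomial(n, p_hat))) for _ in range(500)]
print(f"BMD ~ {benchmark_dose(params_hat):.1f}, BMDL (5th percentile) ~ {np.percentile(boot, 5):.1f}")
```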

December 8, 2017 | Books, Cancer/oncology, Epidemiology, Medicine, Statistics

Nuclear power (I)

I originally gave the book 2 stars, but after finishing this post I changed that rating to 3 stars (not all that surprising: when I wrote my goodreads review shortly after reading the book, I was already conflicted about whether it deserved the third star). One thing that kept me from giving the book a higher rating was that I thought the author did not spend enough time on ‘the basic concepts’, a problem I also highlighted in my goodreads review. Fortunately I had recently covered some of those concepts in other books in the series, so it wasn’t too hard for me to follow what was going on, but as sometimes happens with authors of books in this series, I think the author was simply trying to cover too much. Even so, this is a nice introductory text on the topic.

I have added some links and quotes related to the first half or so of the book below. I prepared the link list before I started gathering quotes, so there may be more overlap than usual between the topics covered in the quotes and those covered in the links (I normally reserve the links for topics and concepts covered in the book which I don’t find it necessary to cover in detail in the text – the links are meant to indicate which other topics the book covers, aside from those included in the quotes).

“According to Einstein’s mass–energy equation, the mass of any composite stable object has to be less than the sum of the masses of the parts; the difference is the binding energy of the object. […] The general features of the binding energies are simply understood as follows. We have seen that the measured radii of nuclei [increase] with the cube root of the mass number A. This is consistent with a structure of close packed nucleons. If each nucleon could only interact with its closest neighbours, the total binding energy would then itself be proportional to the number of nucleons. However, this would be an overestimate because nucleons at the surface of the nucleus would not have a complete set of nearest neighbours with which to interact […]. The binding energy would be reduced by the number of surface nucleons and this would be proportional to the surface area, itself proportional to A2/3. So far we have considered only the attractive short-range nuclear binding. However, the protons carry an electric charge and hence experience an electrical repulsion between each other. The electrical force between two protons is much weaker than the nuclear force at short distances but dominates at larger distances. Furthermore, the total electrical contribution increases with the number of pairs of protons.”

“The main characteristics of the empirical binding energy of nuclei […] can now be explained. For the very light nuclei, all the nucleons are in the surface, the electrical repulsion is negligible, and the binding energy increases as the volume and number of nucleons increases. Next, the surface effects start to slow the rate of growth of the binding energy yielding a region of most stable nuclei near charge number Z = 28 (iron). Finally, the electrical repulsion steadily increases until we reach the most massive stable nucleus (lead-208). Between iron and lead, not only does the binding energy decrease so also do the proton to neutron ratios since the neutrons do not experience the electrical repulsion. […] as the nuclei get heavier the Coulomb repulsion term requires an increasing number of neutrons for stability […] For an explanation of [the] peaks, we must turn to the quantum nature of the problem. […] Filled shells corresponded to particularly stable electronic structures […] In the nuclear case, a shell structure also exists separately for both the neutrons and the protons. […] Closed-shell nuclei are referred to as ‘magic number’ nuclei. […] there is a particular stability for nuclei with equal numbers of protons and neutrons.”
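
The terms described above are the ingredients of the semi-empirical (liquid drop) mass formula; here is a small sketch with standard textbook coefficients (the formula is poor for very light nuclei, and the pairing term is omitted):

```python
# Semi-empirical (liquid drop) binding energy sketch, with the volume, surface,
# Coulomb, and asymmetry terms described above. Coefficients (in MeV) are
# approximate textbook values; the pairing term is omitted for brevity, and
# the formula is inaccurate for very light nuclei.
A_V, A_S, A_C, A_A = 15.8, 18.3, 0.714, 23.2

def binding_energy(Z, A):
    volume = A_V * A                                  # ~ number of nucleons
    surface = -A_S * A ** (2 / 3)                     # surface nucleons bind less
    coulomb = -A_C * Z * (Z - 1) / A ** (1 / 3)       # proton-proton repulsion
    asymmetry = -A_A * (A - 2 * Z) ** 2 / A           # neutron/proton imbalance
    return volume + surface + coulomb + asymmetry

for name, Z, A in [("He-4", 2, 4), ("Fe-56", 26, 56), ("U-238", 92, 238)]:
    print(f"{name}: B/A ~ {binding_energy(Z, A) / A:.2f} MeV per nucleon")
```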

“As we move off the line of stable nuclei, by adding or subtracting neutrons, the isotopes become increasingly less stable indicated by increasing levels of beta radioactivity. Nuclei with a surfeit of neutrons emit an electron, hence converting one of the neutrons into a proton, while isotopes with a neutron deficiency can emit a positron with the conversion of a proton into a neutron. For the heavier nuclei, the neutron to proton ratio can be reduced by emitting an alpha particle. All nuclei heavier than lead are unstable and hence radioactive alpha emitters. […] The fact that almost all the radioactive isotopes heavier than lead follow [a] kind of decay chain and end up as stable isotopes of lead explains this element’s anomalously high natural abundance.”

“When two particles collide, they transfer energy and momentum between themselves. […] If the target is much lighter than the projectile, the projectile sweeps it aside with little loss of energy and momentum. If the target is much heavier than the projectile, the projectile simply bounces off the target with little loss of energy. The maximum transfer of energy occurs when the target and the projectile have the same mass. In trying to slow down the neutrons, we need to pass them through a moderator containing scattering centres of a similar mass. The obvious candidate is hydrogen, in which the single proton of the nucleus is the particle closest in mass to the neutron. At first glance, it would appear that water, with its low cost and high hydrogen content, would be the ideal moderator. There is a problem, however. Slow neutrons can combine with protons to form an isotope of hydrogen, deuterium. This removes neutrons from the chain reaction. To overcome this, the uranium fuel has to be enriched by increasing the proportion of uranium-235; this is expensive and technically difficult. An alternative is to use heavy water, that is, water in which the hydrogen is replaced by deuterium. It is not quite as effective as a moderator but it does not absorb neutrons. Heavy water is more expensive and its production more technically demanding than natural water. Finally, graphite (carbon) has a mass of 12 and hence is less efficient requiring a larger reactor core, but it is inexpensive and easily available.”
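
A small sketch of the moderator comparison, using the standard elastic-scattering results for the fraction of energy retained in a head-on collision and the average number of collisions needed to thermalize a fission neutron:

```python
import math

# Moderator comparison sketch: energy retained by a neutron after a head-on
# elastic collision with a nucleus of mass number A, and the average number of
# collisions needed to slow a fission neutron (~2 MeV) to thermal energy
# (~0.025 eV), using the mean logarithmic energy decrement xi.
def min_energy_fraction(A):
    """Fraction of the neutron's energy retained in a head-on elastic collision."""
    return ((A - 1) / (A + 1)) ** 2

def xi(A):
    """Mean logarithmic energy decrement per collision."""
    if A == 1:
        return 1.0
    alpha = min_energy_fraction(A)
    return 1 + alpha * math.log(alpha) / (1 - alpha)

for name, A in [("hydrogen", 1), ("deuterium", 2), ("carbon", 12)]:
    collisions = math.log(2e6 / 0.025) / xi(A)
    print(f"{name}: worst-case energy retained {min_energy_fraction(A):.2f}, "
          f"~{collisions:.0f} collisions to thermalize")
```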

“[During the Manhattan Project,] Oak Ridge, Tennessee, was chosen as the facility to develop techniques for uranium enrichment (increasing the relative abundance of uranium-235) […] a giant gaseous diffusion facility was developed. Gaseous uranium hexafluoride was forced through a semi permeable membrane. The lighter isotopes passed through faster and at each pass through the membrane the uranium hexafluoride became more and more enriched. The technology is very energy consuming […]. At its peak, Oak Ridge consumed more electricity than New York and Washington DC combined. Almost one-third of all enriched uranium is still produced by this now obsolete technology. The bulk of enriched uranium today is produced in high-speed centrifuges which require much less energy.”
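
For gaseous diffusion the theoretical separation factor per stage is tiny, which is why so many stages (and so much energy) are needed. Here is a sketch of the idealized cascade arithmetic; real plants need considerably more stages than this ideal figure:

```python
import math

# Idealized gaseous diffusion sketch: the theoretical separation factor per
# stage is the square root of the ratio of the molecular masses of 238-UF6 and
# 235-UF6. Actual stages achieve less than this ideal, so real cascades need
# many more stages than the number computed here.
M_U235F6, M_U238F6 = 349.0, 352.0
alpha = math.sqrt(M_U238F6 / M_U235F6)       # ~1.0043 per stage (ideal)

def stages_needed(feed_fraction, product_fraction):
    """Ideal number of stages to raise the U-235/U-238 abundance ratio."""
    r_feed = feed_fraction / (1 - feed_fraction)
    r_product = product_fraction / (1 - product_fraction)
    return math.log(r_product / r_feed) / math.log(alpha)

print(f"Ideal separation factor per stage: {alpha:.4f}")
print(f"Stages to go from 0.7% to 4% U-235: ~{stages_needed(0.007, 0.04):.0f}")
```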

“In order to sustain a nuclear chain reaction, it is essential to have a critical mass of fissile material. This mass depends upon the fissile fuel being used and the topology of the structure containing it. […] The chain reaction is maintained by the neutrons and many of these leave the surface without contributing to the reaction chain. Surrounding the fissile material with a blanket of neutron reflecting material, such as beryllium metal, will keep the neutrons in play and reduce the critical mass. Partially enriched uranium will have an increased critical mass and natural uranium (0.7% uranium-235) will not go critical at any mass without a moderator to increase the number of slow neutrons which are the dominant fission triggers. The critical mass can also be decreased by compressing the fissile material.”

“It is now more than 50 years since operations of the first civil nuclear reactor began. In the intervening years, several hundred reactors have been operating, in total amounting to nearly 50 million hours of experience. This cumulative experience has led to significant advances in reactor design. Different reactor types are defined by their choice of fuel, moderator, control rods, and coolant systems. The major advances leading to greater efficiency, increased economy, and improved safety are referred to as ‘generations’. […] [F]irst generation reactors […] had the dual purpose to make electricity for public consumption and plutonium for the Cold War stockpiles of nuclear weapons. Many of the features of the design were incorporated to meet the need for plutonium production. These impacted on the electricity-generating cost and efficiency. The most important of these was the use of unenriched uranium due to the lack of large-scale enrichment plants in the UK, and the high uranium-238 content was helpful in the plutonium production but made the electricity generation less efficient.”

“PWRs, BWRs, and VVERs are known as LWRs (Light Water Reactors). LWRs dominate the world’s nuclear power programme, with the USA operating 69 PWRs and 35 BWRs; Japan operates 63 LWRs, the bulk of which are BWRs; and France has 59 PWRs. Between them, these three countries generate 56% of the world’s nuclear power. […] In the 1990s, a series of advanced versions of the Generation II and III reactors began to receive certification. These included the ACR (Advanced CANDU Reactor), the EPR (European Pressurized Reactor), and Westinghouse AP1000 and APR1400 reactors (all developments of the PWR) and ESBWR (a development of the BWR). […] The ACR uses slightly enriched uranium and a light water coolant, allowing the core to be halved in size for the same power output. […] It would appear that two of the Generation III+ reactors, the EPR […] and AP1000, are set to dominate the world market for the next 20 years. […] the EPR is considerably safer than current reactor designs. […] A major advance is that the generation 3+ reactors produce only about 10% of waste compared with earlier versions of LWRs. […] China has officially adopted the AP1000 design as a standard for future nuclear plants and has indicated a wish to see 100 nuclear plants under construction or in operation by 2020.”

“All thermal electricity-generating systems are examples of heat engines. A heat engine takes energy from a high-temperature environment to a low-temperature environment and in the process converts some of the energy into mechanical work. […] In general, the efficiency of the thermal cycle increases as the temperature difference between the low-temperature environment and the high-temperature environment increases. In PWRs, and nearly all thermal electricity-generating plants, the efficiency of the thermal cycle is 30–35%. At the much higher operating temperatures of Generation IV reactors, typically 850–1,000°C, it is hoped to increase this to 45–50%.
During the operation of a thermal nuclear reactor, there can be a build-up of fission products known as reactor poisons. These are materials with a large capacity to absorb neutrons and this can slow down the chain reaction; in extremes, it can lead to a complete close-down. Two important poisons are xenon-135 and samarium-149. […] During steady state operation, […] xenon builds up to an equilibrium level in 40–50 hours when a balance is reached between […] production […] and the burn-up of xenon by neutron capture. If the power of the reactor is increased, the amount of xenon increases to a higher equilibrium and the process is reversed if the power is reduced. If the reactor is shut down the burn-up of xenon ceases, but the build-up of xenon continues from the decay of iodine. Restarting the reactor is impeded by the higher level of xenon poisoning. Hence it is desirable to keep reactors running at full capacity as long as possible and to have the capacity to reload fuel while the reactor is on line. […] Nuclear plants operate at highest efficiency when operated continually close to maximum generating capacity. They are thus ideal for provision of base load. If their output is significantly reduced, then the build-up of reactor poisons can impact on their efficiency.”
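
The efficiency figures quoted above can be compared with the Carnot limit set by the operating temperatures; a minimal sketch (the coolant and sink temperatures are only indicative):

```python
# Carnot upper bound on thermal efficiency, illustrating why the higher
# operating temperatures of Generation IV designs are expected to raise the
# ~30-35% efficiency of current PWRs. Real plants fall well short of the
# Carnot limit; the temperatures below are only indicative.
def carnot_efficiency(t_hot_celsius, t_cold_celsius=30.0):
    t_hot = t_hot_celsius + 273.15
    t_cold = t_cold_celsius + 273.15
    return 1.0 - t_cold / t_hot

for name, t_out in [("PWR (~320 C coolant)", 320.0), ("Generation IV (~900 C)", 900.0)]:
    print(f"{name}: Carnot limit ~{carnot_efficiency(t_out):.0%}")
```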

Links:

Radioactivity. Alpha decay. Beta decay. Gamma decay. Free neutron decay.
Periodic table.
Rutherford scattering.
Isotope.
Neutrino. Positron. Antineutrino.
Binding energy.
Mass–energy equivalence.
Electron shell.
Decay chain.
Heisenberg uncertainty principle.
Otto Hahn. Lise Meitner. Fritz Strassman. Enrico Fermi. Leo Szilárd. Otto Frisch. Rudolf Peierls.
Uranium 238. Uranium 235. Plutonium.
Nuclear fission.
Chicago Pile 1.
Manhattan Project.
Uranium hexafluoride.
Heavy water.
Nuclear reactor coolant. Control rod.
Critical mass. Nuclear chain reaction.
Magnox reactor. UNGG reactor. CANDU reactor.
ZEEP.
Nuclear reactor classifications (a lot of the distinctions included in this article are also described in some detail in the book, which covers these topics extensively).
USS Nautilus.
Nuclear fuel cycle.
Thorium-based nuclear power.
Heat engine. Thermodynamic cycle. Thermal efficiency.
Reactor poisoning. Xenon 135. Samarium 149.
Base load.

December 7, 2017 | Books, Chemistry, Engineering, Physics

Occupational Epidemiology (II)

Some more observations from the book below.

“RD [Retinal detachment] is the separation of the neurosensory retina from the underlying retinal pigment epithelium.1 RD is often preceded by posterior vitreous detachment — the separation of the posterior vitreous from the retina as a result of vitreous degeneration and shrinkage2 — which gives rise to the sudden appearance of floaters and flashes. Late symptoms of RD may include visual field defects (shadows, curtains) or even blindness. The success rate of RD surgery has been reported to be over 90%;3 however, a loss of visual acuity is frequently reported by patients, particularly if the macula is involved.4 Since the natural history of RD can be influenced by early diagnosis, patients experiencing symptoms of posterior vitreous detachment are advised to undergo an ophthalmic examination.5 […] Studies of the incidence of RD give estimates ranging from 6.3 to 17.9 cases per 100 000 person-years.6 […] Age is a well-known risk factor for RD. In most studies the peak incidence was recorded among subjects in their seventh decade of life. A secondary peak at a younger age (20–30 years) has been identified […] attributed to RD among highly myopic patients.6 Indeed, depending on the severity, myopia is associated with a four- to ten-fold increase in risk of RD.7 [Diabetics with retinopathy are also at increased risk of RD, US] […] While secondary prevention of RD is current practice, no effective primary prevention strategy is available at present. The idea is widespread among practitioners that RD is not preventable, probably the consequence of our historically poor understanding of the aetiology of RD. For instance, on the website of the Mayo Clinic — one of the top-ranked hospitals for ophthalmology in the US — it is possible to read that ‘There’s no way to prevent retinal detachment’.9”

“Intraocular pressure […] is influenced by physical activity. Dynamic exercise causes an acute reduction in intraocular pressure, whereas physical fitness is associated with a lower baseline value.29 Conversely, a sudden rise in intraocular pressure has been reported during the Valsalva manoeuvre.30-32 […] Occupational physical activity may […] cause both short- and long-term variations in intraocular pressure. On the one hand, physically demanding jobs may contribute to decreased baseline levels by increasing physical fitness but, on the other hand, lifting tasks may cause an important acute increase in pressure. Moreover, the eye of a manual worker who performs repeated lifting tasks involving the Valsalva manoeuvre may undergo several dramatic changes in intraocular pressure within a single working shift. […] A case-control study was carried out to test the hypothesis that repeated lifting tasks involving the Valsalva manoeuvre could be a risk factor for RD. […] heavy lifting was a strong risk factor for RD (OR 4.4, 95% CI 1.6–13). Intriguingly, body mass index (BMI) also showed a clear association with RD (top quartile: OR 6.8, 95% CI 1.6–29). […] Based on their findings, the authors concluded that heavy occupational lifting (involving the Valsalva manoeuvre) may be a relevant risk factor for RD in myopics.”
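
For readers unfamiliar with how such odds ratios and confidence intervals are obtained from a case-control table, here is a minimal sketch; the counts are invented and are not the data behind the estimates quoted above:

```python
import math

# Odds ratio and Wald 95% CI from a 2x2 case-control table. The counts are
# purely hypothetical and are not the data from the study quoted above.
cases_exposed, cases_unexposed = 30, 20
controls_exposed, controls_unexposed = 25, 75

odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)
se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed
                      + 1 / controls_exposed + 1 / controls_unexposed)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI ({ci_low:.1f}, {ci_high:.1f})")
```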

“The proportion of the world’s population over 60 is forecast to double from 11.6% in 2012 to 21.8% in 2050.1 […] the International Labour Organization notes that, worldwide, just 40% of the working age population has legal pension coverage, and only 26% of the working population is effectively covered by old-age pension schemes. […] in less developed regions, labour force participation in those over 65 is much higher than in more developed regions.8 […] Longer working lives increase cumulative exposures, as well as increasing the time since exposure — important when there is a long latency period between exposure and resultant disease. Further, some exposures may have a greater effect when they occur to older workers, e.g. carcinogens that are promoters rather than initiators. […] Older workers tend to have more chronic health conditions. […] Older workers have fewer injuries, but take longer to recover. […] For some ‘knowledge workers’, like physicians, even a relatively minor cognitive decline […] might compromise their competence. […]  Most past studies have treated age as merely a confounding variable and rarely, if ever, have considered it an effect modifier. […]  Jex and colleagues24 argue that conceptually we should treat age as the variable of interest so that other variables are viewed as moderating the impact of age. […] The single best improvement to epidemiological research on ageing workers is to conduct longitudinal studies, including follow-up of workers into retirement. Cross-sectional designs almost certainly incur the healthy survivor effect, since unhealthy workers may retire early.25 […] Analyses should distinguish ageing per se, genetic factors, work exposures, and lifestyle in order to understand their relative and combined effects on health.”

“Musculoskeletal disorders have long been recognized as an important source of morbidity and disability in many occupational populations.1,2 Most musculoskeletal disorders, for most people, are characterized by recurrent episodes of pain that vary in severity and in their consequences for work. Most episodes subside uneventfully within days or weeks, often without any intervention, though about half of people continue to experience some pain and functional limitations after 12 months.3,4 In working populations, musculoskeletal disorders may lead to a spell of sickness absence. Sickness absence is increasingly used as a health parameter of interest when studying the consequences of functional limitations due to disease in occupational groups. Since duration of sickness absence contributes substantially to the indirect costs of illness, interventions increasingly address return to work (RTW).5 […] The Clinical Standards Advisory Group in the United Kingdom reported RTW within 2 weeks for 75% of all low back pain (LBP) absence episodes and suggested that approximately 50% of all work days lost due to back pain in the working population are from the 85% of people who are off work for less than 7 days.6″

“Any RTW curve over time can be described with a mathematical Weibull function.15 This Weibull function is characterized by a scale parameter λ and a shape parameter k. The scale parameter λ is a function of different covariates that include the intervention effect, preferably expressed as hazard ratio (HR) between the intervention group and the reference group in a Cox’s proportional hazards regression model. The shape parameter k reflects the relative increase or decrease in survival time, thus expressing how much the RTW rate will decrease with prolonged sick leave. […] a HR as measure of effect can be introduced as a covariate in the scale parameter λ in the Weibull model and the difference in areas under the curve between the intervention model and the basic model will give the improvement in sickness absence days due to the intervention. By introducing different times of starting the intervention among those workers still on sick leave, the impact of timing of enrolment can be evaluated. Subsequently, the estimated changes in total sickness absence days can be expressed in a benefit/cost ratio (BC ratio), where benefits are the costs saved due to a reduction in sickness absence and costs are the expenditures relating to the intervention.15″
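
A minimal sketch of the model described above: a Weibull return-to-work curve, a proportional-hazards intervention effect, and the resulting benefit/cost ratio. All parameter values and costs are invented:

```python
import numpy as np

# Sketch of the Weibull return-to-work model described above: the fraction of
# workers still on sick leave at day t is S(t) = exp(-(t / scale)**shape); a
# proportional-hazards intervention with hazard ratio HR turns S(t) into
# S(t)**HR, and the area between the two curves approximates the sickness
# absence days saved per worker. All parameter values are invented.
shape, scale = 0.7, 30.0      # hypothetical natural RTW pattern
hazard_ratio = 1.3            # hypothetical intervention effect (faster RTW)

t = np.arange(0, 365)
baseline = np.exp(-(t / scale) ** shape)          # still absent, no intervention
intervention = baseline ** hazard_ratio           # proportional hazards

days_saved = float(np.sum(baseline - intervention))   # area between curves, 1-day steps
intervention_cost, cost_per_absence_day = 400.0, 200.0   # invented costs
benefit_cost_ratio = days_saved * cost_per_absence_day / intervention_cost
print(f"Sickness absence days saved per worker: {days_saved:.1f}")
print(f"Benefit/cost ratio: {benefit_cost_ratio:.1f}")
```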

“A crucial factor in understanding why interventions are effective or not is the timing of the enrolment of workers on sick leave into the intervention. The RTW pattern over time […] has important consequences for appropriate timing of the best window for effective clinical and occupational interventions. The evidence presented by Palmer and colleagues clearly suggests that [in the context of LBP] a stepped care approach is required. In the first step of rapid RTW, most workers will return to work even without specific interventions. Simple, short interventions involving effective coordination and cooperation between primary health care and the workplace will be sufficient to help the majority of workers to achieve an early RTW. In the second step, more expensive, structured interventions are reserved for those who are having difficulties returning, typically between 4 weeks and 3 months. However, to date there is little evidence on the optimal timing of such interventions for workers on sick leave due to LBP.14,15 […] the cost-benefits of a structured RTW intervention among workers on sick leave will be determined by the effectiveness of the intervention, the natural speed of RTW in the target population, the timing of the enrolment of workers into the intervention, and the costs of both the intervention and of a day of sickness absence. […] The cost-effectiveness of a RTW intervention will be determined by the effectiveness of the intervention, the costs of the intervention and of a day of sickness absence, the natural course of RTW in the target population, the timing of the enrolment of workers into the RTW intervention, and the time lag before the intervention takes effect. The latter three factors are seldom taken into consideration in systematic reviews and guidelines for management of RTW, although their impact may easily be as important  as classical measures of effectiveness, such as effect size or HR.”

“In order to obtain information of the highest quality and utility, surveillance schemes have to be designed, set up, and managed with the same methodological rigour as high-calibre prospective cohort studies. Whether surveillance schemes are voluntary or not, considerable effort has to be invested to ensure a satisfactory and sufficient denominator, the best numerator quality, and the most complete ascertainment. Although the force of statute is relied upon in some surveillance schemes, even in these the initial and continuing motivation of the reporters (usually physicians) is paramount. […] There is a surveillance ‘pyramid’ within which the patient’s own perception is at the base, the GP is at a higher level, and the clinical specialist is close to the apex. The source of the surveillance reports affects the numerator because case severity and case mix differ according to the level in the pyramid.19 Although incidence rate estimates may be expected to be lower at the higher levels in the surveillance pyramid this is not necessarily always the case. […] Although surveillance undertaken by physicians who specialize in the organ system concerned or in occupational disease (or in both aspects) may be considered to be the medical ‘gold standard’ it can suffer from a more limited patient catchment because of various referral filters. Surveillance by GPs will capture numerator cases as close to the base of the pyramid as possible, but may suffer from greater diagnostic variation than surveillance by specialists. Limiting recruitment to GPs with a special interest, and some training, in occupational medicine is a compromise between the two levels.20

“When surveillance is part of a statutory or other compulsory scheme then incident case identification is a continuous and ongoing process. However, when surveillance is voluntary, for a research objective, it may be preferable to sample over shorter, randomly selected intervals, so as to reduce the demands associated with the data collection and ‘reporting fatigue’. Evidence so far suggests that sampling over shorter time intervals results in higher incidence estimates than continuous sampling.21 […] Although reporting fatigue is an important consideration in tempering conclusions drawn from […] multilevel models, it is possible to take account of this potential bias in various ways. For example, when evaluating interventions, temporal trends in outcomes resulting from other exposures can be used to control for fatigue.23,24 The phenomenon of reporting fatigue may be characterized by an ‘excess of zeroes’ beyond what is expected of a Poisson distribution and this effect can be quantified.27 […] There are several considerations in determining incidence from surveillance data. It is possible to calculate an incidence rate based on the general population, on the population of working age, or on the total working population,19 since these denominator bases are generally readily available, but such rates are not the most useful in determining risk. Therefore, incidence rates are usually calculated in respect of specific occupations or industries.22 […] Ideally, incidence rates should be expressed in relation to quantitative estimates of exposure but most surveillance schemes would require additional data collection as special exercises to achieve this aim.” [for much more on these topics, see also M’ikanatha & Iskander’s book.]
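
The ‘excess of zeroes’ point can be illustrated by comparing the observed share of zero reports with the share expected under a Poisson distribution with the same mean; the counts below are invented:

```python
import math

# Quick check of the 'excess of zeroes' point: compare the observed share of
# reporters returning zero cases in a period with the share expected under a
# Poisson distribution with the same mean. The counts below are invented.
reports = [0] * 60 + [1] * 20 + [2] * 10 + [3] * 6 + [4] * 4   # cases per reporter

mean_cases = sum(reports) / len(reports)
observed_zero_share = reports.count(0) / len(reports)
expected_zero_share = math.exp(-mean_cases)      # P(X = 0) under Poisson(mean)

print(f"Mean cases per reporter: {mean_cases:.2f}")
print(f"Observed share of zero reports: {observed_zero_share:.2f}")
print(f"Expected under Poisson:         {expected_zero_share:.2f}")
```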

“Estimates of lung cancer risk attributable to occupational exposures vary considerably by geographical area and depend on study design, especially on the exposure assessment method, but may account for around 5–20% of cancers among men, but less (<5%) among women;2 among workers exposed to (suspected) lung carcinogens, the percentage will be higher. […] most exposure to known lung carcinogens originates from occupational settings and will affect millions of workers worldwide.  Although it has been established that these agents are carcinogenic, only limited evidence is available about the risks encountered at much lower levels in the general population. […] One of the major challenges in community-based occupational epidemiological studies has been valid assessment of the occupational exposures experienced by the population at large. Contrary to the detailed information usually available for an industrial population (e.g. in a retrospective cohort study in a large chemical company) that often allows for quantitative exposure estimation, community-based studies […] have to rely on less precise and less valid estimates. The choice of method of exposure assessment to be applied in an epidemiological study depends on the study design, but it boils down to choosing between acquiring self-reported exposure, expert-based individual exposure assessment, or linking self-reported job histories with job-exposure matrices (JEMs) developed by experts. […] JEMs have been around for more than three decades.14 Their main distinction from either self-reported or expert-based exposure assessment methods is that exposures are no longer assigned at the individual subject level but at job or task level. As a result, JEMs make no distinction in assigned exposure between individuals performing the same job, or even between individuals performing a similar job in different companies. […] With the great majority of occupational exposures having a rather low prevalence (<10%) in the general population it is […] extremely important that JEMs are developed aiming at a highly specific exposure assessment so that only jobs with a high likelihood (prevalence) and intensity of exposure are considered to be exposed. Aiming at a high sensitivity would be disastrous because a high sensitivity would lead to an enormous number of individuals being assigned an exposure while actually being unexposed […] Combinations of the methods just described exist as well”.
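
A minimal sketch of how a JEM assigns exposure at job level rather than individual level, and of the preference for specificity over sensitivity; job titles, probabilities, and intensities are invented:

```python
# Minimal job-exposure matrix (JEM) sketch: exposure is assigned at job level,
# not individual level, so two people with the same job title get the same
# estimate. Job titles, exposure probabilities, and intensities are invented.
jem = {
    # job title: (probability of exposure, intensity if exposed)
    "welder":        (0.8, 2.5),
    "office clerk":  (0.0, 0.0),
    "truck driver":  (0.3, 1.0),
}

def assign_exposure(job_history, specificity_threshold=0.5):
    """Cumulative exposure from a list of (job title, years) pairs.
    Jobs below the probability threshold are treated as unexposed, reflecting
    the preference for specificity over sensitivity discussed above."""
    total = 0.0
    for job, years in job_history:
        probability, intensity = jem.get(job, (0.0, 0.0))
        if probability >= specificity_threshold:
            total += intensity * years
    return total

print(assign_exposure([("welder", 10), ("truck driver", 5), ("office clerk", 20)]))
```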

“Community-based studies, by definition, address a wider range of types of exposure and a much wider range of encountered exposure levels (e.g. relatively high exposures in primary production but often lower in downstream use, or among indirectly exposed individuals). A limitation of single community-based studies is often the relatively low number of exposed individuals. Pooling across studies might therefore be beneficial. […] Pooling projects need careful planning and coordination, because the original studies were conducted for different purposes, at different time periods, using different questionnaires. This heterogeneity is sometimes perceived as a disadvantage but also implies variations that can be studied and thereby provide important insights. Every pooling project has its own dynamics but there are several general challenges that most pooling projects confront. Creating common variables for all studies can stretch from simple re-naming of variables […] or recoding of units […] to the re-categorization of national educational systems […] into years of formal education. Another challenge is to harmonize the different classification systems of, for example, diseases (e.g. International Classification of Disease (ICD)-9 versus ICD-10), occupations […], and industries […]. This requires experts in these respective fields as well as considerable time and money. Harmonization of data may mean losing some information; for example, ISCO-68 contains more detail than ISCO-88, which makes it possible to recode ISCO-68 to ISCO-88 with only a little loss of detail, but it is not possible to recode ISCO-88 to ISCO-68 without losing one or two digits in the job code. […] Making the most of the data may imply that not all studies will qualify for all analyses. For example, if a study did not collect data regarding lung cancer cell type, it can contribute to the overall analyses but not to the cell type-specific analyses. It is important to remember that the quality of the original data is critical; poor data do not become better by pooling.”
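To illustrate the recoding issue with a toy example (the codes below are invented and only loosely imitate the ISCO-68/ISCO-88 formats), mapping a detailed classification onto a coarser one is a simple many-to-one lookup, whereas mapping the coarser codes back onto the detailed scheme is not well defined without additional information:

```python
# Illustrative only: an invented crosswalk, not a real ISCO-68 to ISCO-88 mapping.
detailed_to_coarse = {
    "7-12.10": "712",
    "7-12.20": "712",
    "7-12.90": "712",
}

def harmonize(detailed_code: str) -> str:
    """Recode a detailed job code to the coarser classification."""
    return detailed_to_coarse[detailed_code]

print(harmonize("7-12.20"))   # -> "712": little information is lost in this direction
# Recovering a unique detailed code from "712" is impossible without extra
# information, which is the loss of digits the quoted passage describes.
```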

December 6, 2017 Posted by | Books, Cancer/oncology, Demographics, Epidemiology, Health Economics, Medicine, Ophthalmology, Statistics | Leave a comment

The history of astronomy

It’s been a while since I read this book, and for a while I strongly considered not blogging it at all. In the end I figured I ought to cover it here in at least a little detail, though in far less detail than I usually cover the books in this series.

Below some random observations from the book which I found sufficiently interesting to add here.

“The Almagest is a magisterial work that provided geometrical models and related tables by which the movements of the Sun, Moon, and the five lesser planets could be calculated for the indefinite future. […] Its catalogue contains over 1,000 fixed stars arranged in 48 constellations, giving the longitude, latitude, and apparent brightness of each. […] the Almagest would dominate astronomy like a colossus for 14 centuries […] In the universities of the later Middle Ages, students would be taught Aristotle in philosophy and a simplified Ptolemy in astronomy. From Aristotle they would learn the basic truth that the heavens rotate uniformly about the central Earth. From the simplified Ptolemy they would learn of epicycles and eccentrics that violated this basic truth by generating orbits whose centre was not the Earth; and those expert enough to penetrate deeper into the Ptolemaic models would encounter equant theories that violated the (yet more basic) truth that heavenly motion is uniform. […] with the models of the Almagest – whose parameters would be refined over the centuries to come – the astronomer, and the astrologer, could compute the future positions of the planets with economy and reasonable accuracy. There were anomalies – the Moon, for example, would vary its apparent size dramatically in the Ptolemaic model but does not do so in reality, and Venus and Mercury were kept close to the Sun in the sky by a crude ad hoc device – but as a geometrical compendium of how to grind out planetary tables, the Almagest worked, and that was what mattered.”

“The revival of astronomy – and astrology – among the Latins was stimulated around the end of the first millennium when the astrolabe entered the West from Islamic Spain. Astrology in those days had a [‘]rational[’] basis rooted in the Aristotelian analogy between the microcosm – the individual living body – and the macrocosm, the cosmos as a whole. Medical students were taught how to track the planets, so that they would know when the time was favourable for treating the corresponding organs in their patients.” [Aaargh! – US]

“The invention of printing in the 15th century had many consequences, none more significant than the stimulus it gave to the mathematical sciences. All scribes, being human, made occasional errors in preparing a copy of a manuscript. These errors would often be transmitted to copies of the copy. But if the works were literary and the later copyists attended to the meaning of the text, they might recognize and correct many of the errors introduced by their predecessors. Such control could rarely be exercised by copyists required to reproduce texts with significant numbers of mathematical symbols. As a result, a formidable challenge faced the medieval student of a mathematical or astronomical treatise, for it was available to him only in a manuscript copy that had inevitably become corrupt in transmission. After the introduction of printing, all this changed.”

“Copernicus, like his predecessors, had been content to work with observations handed down from the past, making new ones only when unavoidable and using instruments that left much to be desired. Tycho [Brahe], whose work marks the watershed between observational astronomy ancient and modern, saw accuracy of observation as the foundation of all good theorizing. He dreamed of having an observatory where he could pursue the research and development of precision instrumentation, and where a skilled team of assistants would test the instruments even as they were compiling a treasury of observations. Exploiting his contacts at the highest level, Tycho persuaded King Frederick II of Denmark to grant him the fiefdom of the island of Hven, and there, between 1576 and 1580, he constructed Uraniborg (‘Heavenly Castle’), the first scientific research institution of the modern era. […] Tycho was the first of the modern observers, and in his catalogue of 777 stars the positions of the brightest are accurate to a minute or so of arc; but he himself was probably most proud of his cosmology, which Galileo was not alone in seeing as a retrograde compromise. Tycho appreciated the advantages of heliocentric planetary models, but he was also conscious of the objections […]. In particular, his inability to detect annual parallax even with his superb instrumentation implied that the Copernican excuse, that the stars were too far away for annual parallax to be detected, was now implausible in the extreme. The stars, he calculated, would have to be at least 700 times further away than Saturn for him to have failed for this reason, and such a vast, purposeless empty space between the planets and the stars made no sense. He therefore looked for a cosmology that would have the geometrical advantages of the heliocentric models but would retain the Earth as the body physically at rest at the centre of the cosmos. The solution seems obvious in hindsight: make the Sun (and Moon) orbit the central Earth, and make the five planets into satellites of the Sun.”

“Until the invention of the telescope, each generation of astronomers had looked at much the same sky as their predecessors. If they knew more, it was chiefly because they had more books to read, more records to mine. […] Galileo could say of his predecessors, ‘If they had seen what we see, they would have judged as we judge’; and ever since his time, the astronomers of each generation have had an automatic advantage over their predecessors, because they possess apparatus that allows them access to objects unseen, unknown, and therefore unstudied in the past. […] astronomers [for a long time] found themselves in a situation where, as telescopes improved, the two coordinates of a star’s position on the heavenly sphere were being measured with ever increasing accuracy, whereas little was known of the star’s third coordinate, distance, except that its scale was enormous. Even the assumption that the nearest stars were the brightest was […rightly, US] being called into question, as the number of known proper motions increased and it emerged that not all the fastest-moving stars were bright.”

“We know little of how Newton’s thinking developed between 1679 and the visit from Halley in 1684, except for a confused exchange of letters between Newton and the Astronomer Royal, John Flamsteed […] the visit from the suitably deferential and tactful Halley encouraged Newton to promise him written proof that elliptical orbits would result from an inverse-square force of attraction residing in the Sun. The drafts grew and grew, and eventually resulted in The Mathematical Principles of Natural Philosophy (1687), better known in its abbreviated Latin title of the Principia. […] All three of Kepler’s laws (the second in ‘area’ form), which had been derived by their author from observations, with the help of a highly dubious dynamics, were now shown to be consequences of rectilinear motion under an inverse-square force. […] As the drafts of Principia multiplied, so too did the number of phenomena that at last found their explanation. The tides resulted from the difference between the effects on the land and on the seas of the attraction of Sun and Moon. The spinning Earth bulged at the equator and was flattened at the poles, and so was not strictly spherical; as a result, the attraction of Sun and Moon caused the Earth’s axis to wobble and so generated the precession of the equinoxes first noticed by Hipparchus. […] Newton was able to use the observed motions of the moons of Earth, Jupiter, and Saturn to calculate the masses of the parent planets, and he found that Jupiter and Saturn were huge compared to Earth – and, in all probability, to Mercury, Venus, and Mars.”

December 5, 2017 Posted by | Astronomy, Books, History, Mathematics, Physics | Leave a comment

Occupational Epidemiology (I)

Below some observations from the first chapters of the book, which I called ‘very decent’ on goodreads.

“Coal workers were amongst the first occupational groups to be systematically studied in well-designed epidemiological research programmes. As a result, the causes and spectrum of non-malignant respiratory disease among coal workers have been rigorously explored and characterized.1,2 While respirable silica (quartz) in mining has long been accepted as a cause of lung disease, the important contributing role of coal mine dust was questioned until the middle of the twentieth century.3 Occupational exposure to coal mine dust has now been shown unequivocally to cause excess mortality and morbidity from non-malignant respiratory disease, including coal workers’ pneumoconiosis (CWP) and chronic obstructive pulmonary disease (COPD). The presence of respirable quartz, often a component of coal mine dust, contributes to disease incidence and severity, increasing the risk of morbidity and mortality in exposed workers.”

“Coal is classified into three major coal ranks: lignite, bituminous, and anthracite from lowest to highest carbon content and heating value. […] In the US, the Bureau of Mines and the Public Health Service actively studied anthracite and bituminous coal mines and miners throughout the mid-1900s.3 These studies showed significant disease among workers with minimal silica exposure, suggesting that coal dust itself was toxic; however, these results were suppressed and not widely distributed. It was not until the 1960s that a popular movement of striking coal miners and their advocates demanded legislation to prevent, study, and compensate miners for respiratory diseases caused by coal dust exposure. […] CWP [Coal Workers’ Pneumoconiosis] is an interstitial lung disease resulting from the accumulation of coal mine dust in miners’ lungs and the tissue reaction to its presence. […] It is classified […] as simple or complicated; the latter is also known as progressive massive fibrosis (PMF) […] PMF is a progressive, debilitating disease which is predictive of disability and mortality […] A causal exposure-response relationship has been established between cumulative coal mine dust exposure and risk of developing both CWP and PMF,27-31 and with mortality from pneumoconiosis and PMF.23-26, 30 Incidence, the stage of CWP, and progression to PMF, as well as mortality, are positively associated with increasing proportion of respirable silica in the coal mine dust32 and higher coal rank. […] Not only do coal workers experience occupational mortality from CWP and PMF,12, 23-26 they also have excess mortality from COPD compared to the general population. Cross-sectional and longitudinal studies […] have demonstrated an exposure-response relationship between cumulative coal mine dust exposure and chronic bronchitis,36-40 respiratory symptoms,41 and pulmonary function even in the presence of normal radiographic findings.42 The relationship between the rate of decline of lung function and coal mine dust exposure is not linear, the greatest reduction occurring in the first few years of exposure.43”

“Like most occupational cohort studies, those of coal workers are affected by the healthy worker effect. A strength of the PFR and NCS studies is the ability to use internal analysis (i.e. comparing workers by exposure level) which controls for selection bias at hire, one component of the effect.59 However, internal analyses may not fully control for ongoing selection bias if symptoms of adverse health effects are related to exposure (referred to as the healthy worker survivor effect) […] Work status is a key component of the healthy worker survivor effect, as are length of time since entering the industry and employment duration.61 Both the PFR and NCS studies have consistently found higher rates of symptoms and disease among former miners compared with current miners, consistent with a healthy worker survivor effect.62,63″

“Coal mining is rapidly expanding in the developing world. From 2007 to 2010 coal production declined in the US by 6% and Europe by 10% but increased in Eurasia by 9%, in Africa by 3%, and in Asia and Oceania by 19%.71 China saw a dramatic increase of 39% from 2007 to 2011. There have been few epidemiological studies published that characterize the disease burden among coal workers during this expansion but, in one study conducted among miners in Liaoning Province, China, rates of CWP were high.72 There are an estimated six million underground miners in China at present;73 hence even low disease rates will cause a high burden of illness and excess premature mortality.”

“Colonization with S. aureus may occur on mucous membranes of the respiratory or intestinal tract, or on other body surfaces, and is usually asymptomatic. Nasal colonization with S. aureus in the human population occurs among around 30% of individuals. Methicillin-resistant S. aureus (MRSA) are strains that have developed resistance to beta-lactam antibiotics […] and, as a result, may cause difficult-to-treat infections in humans. Nasal colonization with MRSA in the general population is low; the highest rate reported in a population-based survey was 1.5%.2,3 Infections with MRSA are associated with treatment failure and increased severity of disease.4,5 […] In 2004 a case of, at that time non-typeable, MRSA was reported in a 6-month-old girl admitted to a hospital in the Netherlands. […] Later on, this strain and some related strains appeared strongly associated with livestock production, and were labelled livestock-associated MRSA (LA-MRSA) and are nowadays referred to as MRSA ST398. […] It is common knowledge that the use of antimicrobial agents in humans, animals, and plants promotes the selection and spread of antimicrobial-resistant bacteria and resistance genes through genetic mutations and gene transfer.15 Antimicrobial agents are widely used in veterinary medicine and modern food animal production depends on the use of large amounts of antimicrobials for disease control. Use of antimicrobials probably played an important role in the emergence of MRSA ST398.”

“MRSA was rarely isolated from animals before 2000. […] Since 2005 onwards, LA-MRSA has been increasingly frequently reported in different food production animals, including cattle, pigs, and poultry […] The MRSA case illustrates the rapid emergence, and transmission from animals to humans, of a new strain of resistant micro-organisms from an animal reservoir, creating risks for different occupational groups. […] High animal-to-human transmission of ST398 has been reported in pig farming, leading to an elevated prevalence of nasal MRSA carriage ranging from a few per cent in Ireland up to 86% in German pig farmers […]. One study showed a clear association between the prevalence of MRSA carriage among participants from farms with MRSA colonized pigs (50%) versus 3% on farms without colonized pigs […] MRSA prevalence is low among animals from alternative breeding systems with low use of antimicrobials, also leading to low carriage rates in farmers.71 […] Veterinarians are […] frequently in direct contact with livestock, and are clearly at elevated risk of LA-MRSA carriage when compared to the general population. […] Of all LA-MRSA carrying individuals, a fraction appear to be persistent carriers. […] Few studies have examined transmission from humans to humans. Generally, studies among family members of livestock farmers show a considerably lower prevalence than among the farmers with more intense animal contact. […] Individuals who are ST398 carriers in the general population usually have direct animal contact.43,44 On the other hand, the emergence of ST398 isolates without known risk factors for acquisition and without a link to livestock has been reported.45 In addition, a human-specific ST398 clone has recently been identified and thus the spread of LA-MRSA from occupational populations to the general population cannot be ruled out.46 Transmission dynamics, especially between humans not directly exposed to animals, remain unclear and might be changing.”

“Enterobacteriaceae that produce ESBLs are an emerging concern in public health. ESBLs inactivate beta-lactam antimicrobials by hydrolysis and therefore cause resistance to various beta-lactam antimicrobials, including penicillins and cephalosporins.54 […] The genes encoding for ESBLs are often located on plasmids which can be transferred between different bacterial species. Also, coexistence with other types of antimicrobial resistance occurs. In humans, infections with ESBL-producing Enterobacteriaceae are associated with increased burden of disease and costs.58 A variety of ESBLs have been identified in bacteria derived from food-producing animals worldwide. The occurrence of different ESBL types depends on the animal species and the geographical area. […] High use of antimicrobials and inappropriate use of cephalosporins in livestock production are considered to be associated with the emergence and high prevalence of ESBL-producers in the animals.59-60 Food-producing animals can serve as a reservoir for ESBL producing Enterobacteriaceae and ESBL genes. […] recent findings suggest that transmission from animals to humans may occur through (in)direct contact with livestock during work. This may thus pose an occupational health risk for farmers and potentially for other humans with regular contact with this working population. […] Compared to MRSA, the dynamics of ESBLs seem more complex. […] The variety of potential ESBL transmission routes makes it complex to determine the role of direct contact with livestock as an occupational risk for ESBL carriage. However, the increasing occurrence of ESBLs in livestock worldwide and the emerging insight into transmission through direct contact suggests that farmers have a higher risk of becoming a carrier of ESBLs. Until now, there have not been sufficient data available to quantify the relevant importance of this route of transmission.”

“Welders die more often from pneumonia than do their social class peers. This much has been revealed by successive analyses of occupational mortality for England and Wales. The pattern can now be traced back more than seven decades. During 1930–32, 285 deaths were observed with 171 expected;3 in 1949–53, 70 deaths versus 31 expected;4 in 1959–63, 101 deaths as compared with 54.9 expected;5 and in 1970–72, 66 deaths with 42.0 expected.6 […] The finding that risks decline after retirement is an argument against confounding by lifestyle variables such as smoking, as is the specificity of effect to lobar rather than bronchopneumonia. […] Analyses of death certificates […] support a case for a hazard that is reversible when exposure stops. […] In line with the mortality data, hospitalized pneumonia [has also] prove[n] to be more common among welders and other workers with exposure to metal fume than in workers from non-exposed jobs. Moreover, risks were confined to exposures in the previous 12 months […] Recently, inhalation experiments have confirmed that welding fume can promote bacterial growth in animals. […] A coherent body of evidence thus indicates that metal fume is a hazard for pneumonia. […] Presently, knowledge is lacking on the exposure-response relationship and what constitutes a ‘safe’ or ‘unsafe’ level or pattern of exposure to metal fume. […]  The pattern of epidemiological evidence […] is generally compatible with a hazard from iron in metal fume. Iron could promote infective risk in at least one of two ways: by acting as a growth nutrient for microorganisms, or as a cause of free radical injury. […] the Joint Committee on Vaccination and Immunisation, on behalf of the Department of Health in England, decided in November 2011 to recommend that ‘welders who have not received the pneumococcal polysaccharide vaccine (PPV23) previously should be offered a single dose of 0.5ml of PPV23 vaccine’ and that ‘employers should ensure that provision is in place for workers to receive PPV23’.”
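The welder mortality figures quoted above are usually summarized as standardized mortality ratios, i.e. observed deaths divided by expected deaths; redoing that division for the four periods mentioned gives excesses of roughly 1.6 to 2.3:

```python
# Observed and expected pneumonia deaths among welders, taken from the quote above.
periods = {
    "1930-32": (285, 171),
    "1949-53": (70, 31),
    "1959-63": (101, 54.9),
    "1970-72": (66, 42.0),
}

for period, (observed, expected) in periods.items():
    smr = observed / expected
    print(f"{period}: SMR = {smr:.2f} ({observed} observed vs {expected} expected)")
```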

December 2, 2017 Posted by | Books, Epidemiology, Infectious disease, Medicine | Leave a comment

Quotes

i. “The party that negotiates in haste is often at a disadvantage.” (Howard Raiffa)

ii. “Advice: don’t embarrass your bargaining partner by forcing him or her to make all the concessions.” (-ll-)

iii. “Disputants often fare poorly when they each act greedily and deceptively.” (-ll-)

iv. “Each man does seek his own interest, but, unfortunately, not according to the dictates of reason.” (Kenneth Waltz)

v. “Whatever is said after I’m gone is irrelevant.” (Jimmy Savile)

vi. “Trust is an important lubricant of a social system. It is extremely efficient; it saves a lot of trouble to have a fair degree of reliance on other people’s word. Unfortunately this is not a commodity which can be bought very easily. If you have to buy it, you already have some doubts about what you have bought.” (Kenneth Arrow)

vii. “… an author never does more damage to his readers than when he hides a difficulty.” (Évariste Galois)

viii. “A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail” (Vladimir Voevodsky)

ix. “Suppose you want to teach the “cat” concept to a very young child. Do you explain that a cat is a relatively small, primarily carnivorous mammal with retractible claws, a distinctive sonic output, etc.? I’ll bet not. You probably show the kid a lot of different cats, saying “kitty” each time, until it gets the idea. To put it more generally, generalizations are best made by abstraction from experience. They should come one at a time; too many at once overload the circuits.” (Ralph P. Boas Jr.)

x. “Every author has several motivations for writing, and authors of technical books always have, as one motivation, the personal need to understand; that is, they write because they want to learn, or to understand a phenomenon, or to think through a set of ideas.” (Albert Wymore)

xi. “Great mathematics is achieved by solving difficult problems not by fabricating elaborate theories in search of a problem.” (Harold Davenport)

xii. “Is science really gaining in its assault on the totality of the unsolved? As science learns one answer, it is characteristically true that it also learns several new questions. It is as though science were working in a great forest of ignorance, making an ever larger circular clearing within which, not to insist on the pun, things are clear… But as that circle becomes larger and larger, the circumference of contact with ignorance also gets longer and longer. Science learns more and more. But there is an ultimate sense in which it does not gain; for the volume of the appreciated but not understood keeps getting larger. We keep, in science, getting a more and more sophisticated view of our essential ignorance.” (Warren Weaver)

xiii. “When things get too complicated, it sometimes makes sense to stop and wonder: Have I asked the right question?” (Enrico Bombieri)

xiv. “The mean and variance are unambiguously determined by the distribution, but a distribution is, of course, not determined by its mean and variance: A number of different distributions have the same mean and the same variance.” (Richard von Mises)

xv. “Algorithms existed for at least five thousand years, but people did not know that they were algorithmizing. Then came Turing (and Post and Church and Markov and others) and formalized the notion.” (Doron Zeilberger)

xvi. “When a problem seems intractable, it is often a good idea to try to study “toy” versions of it in the hope that as the toys become increasingly larger and more sophisticated, they would metamorphose, in the limit, to the real thing.” (-ll-)

xvii. “The kind of mathematics foisted on children in schools is not meaningful, fun, or even very useful. This does not mean that an individual child cannot turn it into a valuable and enjoyable personal game. For some the game is scoring grades; for others it is outwitting the teacher and the system. For many, school math is enjoyable in its repetitiveness, precisely because it is so mindless and dissociated that it provides a shelter from having to think about what is going on in the classroom. But all this proves is the ingenuity of children. It is not a justification for school math to say that despite its intrinsic dullness, inventive children can find excitement and meaning in it.” (Seymour Papert)

xviii. “The optimist believes that this is the best of all possible worlds, and the pessimist fears that this might be the case.” (Ivar Ekeland)

xix. “An equilibrium is not always an optimum; it might not even be good. This may be the most important discovery of game theory.” (-ll-)

xxi. “It’s not all that rare for people to suffer from a self-hating monologue. Any good theories about what’s going on there?”

“If there’s things you don’t like about your life, you can blame yourself, or you can blame others. If you blame others and you’re of low status, you’ll be told to cut that out and start blaming yourself. If you blame yourself and you can’t solve the problems, self-hate is the result.” (Nancy Lebovitz & ‘The Nybbler’)

December 1, 2017 Posted by | Mathematics, Quotes/aphorisms, Science, Statistics | 4 Comments

Concussion and Sequelae of Minor Head Trauma

Some related links:

PECARN Pediatric Head Injury/Trauma Algorithm.
Canadian CT Head Injury/Trauma Rule.
ACEP – Traumatic Brain Injury (Mild – Adult).
AANS – concussion.
Guidelines for the Management of Severe Traumatic Brain Injury – 4th edition.
Return-to-play guidelines.
Second-impact syndrome.
Repetitive Head Injury Syndrome (medscape).
Traumatic Brain Injury & Concussion (CDC).

December 1, 2017 Posted by | Lectures, Medicine, Neurology | Leave a comment

A few diabetes papers of interest

i. Mechanisms and Management of Diabetic Painful Distal Symmetrical Polyneuropathy.

“Although a number of the diabetic neuropathies may result in painful symptomatology, this review focuses on the most common: chronic sensorimotor distal symmetrical polyneuropathy (DSPN). It is estimated that 15–20% of diabetic patients may have painful DSPN, but not all of these will require therapy. […] Although the exact pathophysiological processes that result in diabetic neuropathic pain remain enigmatic, both peripheral and central mechanisms have been implicated, and extend from altered channel function in peripheral nerve through enhanced spinal processing and changes in many higher centers. A number of pharmacological agents have proven efficacy in painful DSPN, but all are prone to side effects, and none impact the underlying pathophysiological abnormalities because they are only symptomatic therapy. The two first-line therapies approved by regulatory authorities for painful neuropathy are duloxetine and pregabalin. […] All patients with DSPN are at increased risk of foot ulceration and require foot care, education, and if possible, regular podiatry assessment.”

“The neuropathies are the most common long-term microvascular complications of diabetes and affect those with both type 1 and type 2 diabetes, with up to 50% of older type 2 diabetic patients having evidence of a distal neuropathy (1). These neuropathies are characterized by a progressive loss of nerve fibers affecting both the autonomic and somatic divisions of the nervous system. The clinical features of the diabetic neuropathies vary immensely, and only a minority are associated with pain. The major portion of this review will be dedicated to the most common painful neuropathy, chronic sensorimotor distal symmetrical polyneuropathy (DSPN). This neuropathy has major detrimental effects on its sufferers, conferring an increased risk of foot ulceration and Charcot neuroarthropathy as well as being associated with increased mortality (1).

In addition to DSPN, other rarer neuropathies may also be associated with painful symptoms including acute painful neuropathy that often follows periods of unstable glycemic control, mononeuropathies (e.g., cranial nerve palsies), radiculopathies, and entrapment neuropathies (e.g., carpal tunnel syndrome). By far the most common presentation of diabetic polyneuropathy (over 90%) is typical DSPN or chronic DSPN. […] DSPN results in insensitivity of the feet that predisposes to foot ulceration (1) and/or neuropathic pain (painful DSPN), which can be disabling. […] The onset of DSPN is usually gradual or insidious and is heralded by sensory symptoms that start in the toes and then progress proximally to involve the feet and legs in a stocking distribution. When the disease is well established in the lower limbs in more severe cases, there is upper limb involvement, with a similar progression proximally starting in the fingers. As the disease advances further, motor manifestations, such as wasting of the small muscles of the hands and limb weakness, become apparent. In some cases, there may be sensory loss that the patient may not be aware of, and the first presentation may be a foot ulcer. Approximately 50% of patients with DSPN experience neuropathic symptoms in the lower limbs including uncomfortable tingling (dysesthesia), pain (burning; shooting or “electric-shock like”; lancinating or “knife-like”; “crawling”, or aching etc., in character), evoked pain (allodynia, hyperesthesia), or unusual sensations (such as a feeling of swelling of the feet or severe coldness of the legs when clearly the lower limbs look and feel fine, odd sensations on walking likened to “walking on pebbles” or “walking on hot sand,” etc.). There may be marked pain on walking that may limit exercise and lead to weight gain. Painful DSPN is characteristically more severe at night and often interferes with normal sleep (3). It also has a major impact on the ability to function normally (both mental and physical functioning, e.g., ability to maintain work, mood, and quality of life [QoL]) (3,4). […] The unremitting nature of the pain can be distressing, resulting in mood disorders including depression and anxiety (4). The natural history of painful DSPN has not been well studied […]. However, it is generally believed that painful symptoms may persist over the years (5), occasionally becoming less prominent as the sensory loss worsens (6).”

“There have been relatively few epidemiological studies that have specifically examined the prevalence of painful DSPN, which range from 10–26% (7–9). In a recent study of a large cohort of diabetic patients receiving community-based health care in northwest England (n = 15,692), painful DSPN assessed using neuropathy symptom and disability scores was found in 21% (7). In one population-based study from Liverpool, U.K., the prevalence of painful DSPN assessed by a structured questionnaire and examination was estimated at 16% (8). Notably, it was found that 12.5% of these patients had never reported their symptoms to their doctor and 39% had never received treatment for their pain (8), indicating that there may be considerable underdiagnosis and undertreatment of painful neuropathic symptoms compared with other aspects of diabetes management such as statin therapy and management of hypertension. Risk factors for DSPN per se have been extensively studied, and it is clear that apart from poor glycemic control, cardiovascular risk factors play a prominent role (10): risk factors for painful DSPN are less well known.”

“A broad spectrum of presentations may occur in patients with DSPN, ranging from one extreme of the patient with very severe painful symptoms but few signs, to the other when patients may present with a foot ulcer having lost all sensation without ever having any painful or uncomfortable symptoms […] it is well recognized that the severity of symptoms may not relate to the severity of the deficit on clinical examination (1). […] Because DSPN is a diagnosis of exclusion, a careful clinical history and a peripheral neurological and vascular examination of the lower limbs are essential to exclude other causes of neuropathic pain and leg/foot pain such as peripheral vascular disease, arthritis, malignancy, alcohol abuse, spinal canal stenosis, etc. […] Patients with asymmetrical symptoms and/or signs (such as loss of an ankle jerk in one leg only), rapid progression of symptoms, or predominance of motor symptoms and signs should be carefully assessed for other causes of the findings.”

“The fact that diabetes induces neuropathy and that in a proportion of patients this is accompanied by pain despite the loss of input and numbness, suggests that marked changes occur in the processes of pain signaling in the peripheral and central nervous system. Neuropathic pain is characterized by ongoing pain together with exaggerated responses to painful and nonpainful stimuli, hyperalgesia, and allodynia. […] the changes seen suggest altered peripheral signaling and central compensatory changes perhaps driven by the loss of input. […] Very clear evidence points to the key role of changes in ion channels as a consequence of nerve damage and their roles in the disordered activity and transduction in damaged and intact fibers (50). Sodium channels depolarize neurons and generate an action potential. Following damage to peripheral nerves, the normal distribution of these channels along a nerve is disrupted by the neuroma and “ectopic” activity results from the accumulation of sodium channels at or around the site of injury. Other changes in the distribution and levels of these channels are seen and impact upon the pattern of neuronal excitability in the nerve. Inherited pain disorders arise from mutated sodium channels […] and polymorphisms in this channel impact on the level of pain in patients, indicating that inherited differences in channel function might explain some of the variability in pain between patients with DSPN (53). […] Where sodium channels act to generate action potentials, potassium channels serve as the molecular brakes of excitable cells, playing an important role in modulating neuronal hyperexcitability. The drug retigabine, a potassium channel opener acting on the channel (KV7, M-current) opener, blunts behavioral hypersensitivity in neuropathic rats (56) and also inhibits C and Aδ-mediated responses in dorsal horn neurons in both naïve and neuropathic rats (57), but has yet to reach the clinic as an analgesic”.

“[Aδ] and C fibers terminate primarily in the superficial laminae of the dorsal horn where the large majority of neurons are nociceptive specific […]. Some of these neurons gain low threshold inputs after neuropathy and these cells project predominantly to limbic brain areas […] spinal cord neurons provide parallel outputs to the affective and sensory areas of the brain. Changes induced in these neurons by repeated noxious inputs underpin central sensitization where the resultant hyperexcitability of neurons leads to greater responses to all subsequent inputs — innocuous and noxious — expanded receptive fields and enhanced outputs to higher levels of the brain […] As a consequence of these changes in the sending of nociceptive information within the peripheral nerve and then the spinal cord, the information sent to the brain becomes amplified so that pain ratings become higher. Alongside this, the persistent input into the limbic brain areas such as the amygdala are likely to be causal in the comorbidities that patients often report due to ongoing painful inputs disrupting normal function and generating fear, depression, and sleep problems […]. Of course, many patients report that their pains are worse at night, which may be due to nocturnal changes in these central pain processing areas. […] overall, the mechanisms of pain in diabetic neuropathy extend from altered channel function in peripheral nerves through enhanced spinal processing and finally to changes in many higher centers”.

“Pharmacological treatment of painful DSPN is not entirely satisfactory because currently available drugs are often ineffective and complicated by adverse events. Tricyclic compounds (TCAs) have been used as first-line agents for many years, but their use is limited by frequent side effects that may be central or anticholinergic, including dry mouth, constipation, sweating, blurred vision, sedation, and orthostatic hypotension (with the risk of falls particularly in elderly patients). […] Higher doses have been associated with an increased risk of sudden cardiac death, and caution should be taken in any patient with a history of cardiovascular disease (65). […] The selective serotonin noradrenalin reuptake inhibitors (SNRI) duloxetine and venlafaxine have been used for the management of painful DSPN (65). […] there have been several clinical trials involving pregabalin in painful DSPN, and these showed clear efficacy in management of painful DSPN (69). […] The side effects include dizziness, somnolence, peripheral edema, headache, and weight gain.”

“A major deficiency in the area of the treatment of neuropathic pain in diabetes is the relative lack of comparative or combination studies. Virtually all previous trials have been of active agents against placebo, whereas there is a need for more studies that compare a given drug with an active comparator and indeed lower-dose combination treatments (64). […] The European Federation of Neurological Societies proposed that first-line treatments might comprise of TCAs, SNRIs, gabapentin, or pregabalin (71). The U.K. National Institute for Health and Care Excellence guidelines on the management of neuropathic pain in nonspecialist settings proposed that duloxetine should be the first-line treatment with amitriptyline as an alternative, and pregabalin as a second-line treatment for painful DSPN (72). […] this recommendation of duloxetine as the first-line therapy was not based on efficacy but rather cost-effectiveness. More recently, the American Academy of Neurology recommended that pregabalin is “established as effective and should be offered for relief of [painful DSPN] (Level A evidence)” (73), whereas venlafaxine, duloxetine, amitriptyline, gabapentin, valproate, opioids, and capsaicin were considered to be “probably effective and should be considered for treatment of painful DSPN (Level B evidence)” (63). […] this recommendation was primarily based on achievement of greater than 80% completion rate of clinical trials, which in turn may be influenced by the length of the trials. […] the International Consensus Panel on Diabetic Neuropathy recommended TCAs, duloxetine, pregabalin, and gabapentin as first-line agents having carefully reviewed all the available literature regarding the pharmacological treatment of painful DSPN (65), the final drug choice tailored to the particular patient based on demographic profile and comorbidities. […] The initial selection of a particular first-line treatment will be influenced by the assessment of contraindications, evaluation of comorbidities […], and cost (65). […] caution is advised to start at lower than recommended doses and titrate gradually.”

ii. Sex Differences in All-Cause and Cardiovascular Mortality, Hospitalization for Individuals With and Without Diabetes, and Patients With Diabetes Diagnosed Early and Late.

“A challenge with type 2 diabetes is the late diagnosis of the disease because many individuals who meet the criteria are often asymptomatic. Approximately 183 million people, or half of those who have diabetes, are unaware they have the disease (1). Furthermore, type 2 diabetes can be present for 9 to 12 years before being diagnosed and, as a result, complications are often present at the time of diagnosis (3). […] Cardiovascular disease (CVD) is the most common comorbidity associated with diabetes, and with 50% of those with diabetes dying of CVD it is the most common cause of death (1). […] Newfoundland and Labrador has the highest age-standardized prevalence of diabetes in Canada (2), and the age-standardized mortality and hospitalization rates for CVD, AMI, and stroke are some of the highest in the country (21,22). A better understanding of mortality and hospitalizations associated with diabetes for males and females is important to support diabetes prevention and management. Therefore, the objectives of this study were to compare the risk of all-cause, CVD, AMI, and stroke mortality and hospitalizations for males and females with and without diabetes and those with early and late diagnoses of diabetes. […] We conducted a population-based retrospective cohort study including 73,783 individuals aged 25 years or older in Newfoundland and Labrador, Canada (15,152 with diabetes; 9,517 with late diagnoses). […] mean age at baseline was 60.1 years (SD, 14.3 years). […] Diabetes was classified as being diagnosed “early” and “late” depending on when diabetes-related comorbidities developed. Individuals early in the disease course would not have any diabetes-related comorbidities at the time of their case dates. On the contrary, a late-diagnosed diabetes patient would have comorbidities related to diabetes at the time of diagnosis.”

“For males, 20.5% (n = 7,751) had diabetes, whereas 20.6% (n = 7,401) of females had diabetes. […] Males and females with diabetes were more likely to die, to be younger at death, to have a shorter survival time, and to be admitted to the hospital than males and females without diabetes (P < 0.01). When admitted to the hospital, individuals with diabetes stayed longer than individuals without diabetes […] Both males and females with late diagnoses were significantly older at the time of diagnosis than those with early diagnoses […]. Males and females with late diagnoses of diabetes were more likely to be deceased at the end of the study period compared with those with early diagnoses […]. Those with early diagnoses were younger at death compared with those with late diagnoses (P < 0.01); however, median survival time for both males and females with early diagnoses was significantly longer than that of those with late diagnoses (P < 0.01). During the study period, males and females with late diabetes diagnoses were more likely to be hospitalized (P < 0.01) and have a longer length of hospital stay compared with those with early diagnoses (P < 0.01).”

“[T]he hospitalization results show that an early diagnosis […] increase the risk of all-cause, CVD, and AMI hospitalizations compared with individuals without diabetes. After adjusting for covariates, males with late diabetes diagnoses had an increased risk of all-cause and CVD mortality and hospitalizations compared with males without diabetes. Similar findings were found for females. A late diabetes diagnosis was positively associated with CVD mortality (HR 6.54 [95% CI 4.80–8.91]) and CVD hospitalizations (5.22 [4.31–6.33]) for females, and the risk was significantly higher compared with their male counterparts (3.44 [2.47–4.79] and 3.33 [2.80–3.95]).”
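For readers unfamiliar with the methodology, the hazard ratios quoted above come from Cox proportional-hazards regression. Below is a minimal sketch of how such a model is fitted in Python with the lifelines package; the tiny data frame and its column names are invented and merely stand in for the registry data described in the paper, which was of course far larger and adjusted for many more covariates.

```python
# Illustrative only: hypothetical data layout, not the Newfoundland and Labrador data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years": [9.0, 4.2, 7.5, 9.0, 2.1, 8.8, 6.3, 5.0, 3.4, 9.0],
    "cvd_death":      [0,   1,   0,   0,   1,   0,   1,   0,   1,   0],
    "late_diagnosis": [0,   1,   0,   1,   1,   0,   0,   1,   1,   0],
    "age":            [55,  68,  47,  72,  63,  51,  70,  58,  75,  60],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="cvd_death")
cph.print_summary()   # the exp(coef) column contains the hazard ratios
```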

iii. Effect of Type 1 Diabetes on Carotid Structure and Function in Adolescents and Young Adults.

I may have discussed some of the results of this study before, but a search of the blog told me that I have not covered the study itself. I thought it couldn’t hurt to add a link and a few highlights here.

“Type 1 diabetes mellitus causes increased carotid intima-media thickness (IMT) in adults. We evaluated IMT in young subjects with type 1 diabetes. […] Participants with type 1 diabetes (N = 402) were matched to controls (N = 206) by age, sex, and race or ethnicity. Anthropometric and laboratory values, blood pressure, and IMT were measured.”

“Youth with type 1 diabetes had thicker bulb IMT, which remained significantly different after adjustment for demographics and cardiovascular risk factors. […] Because the rate of progression of IMT in healthy subjects (mean age, 40 years) in the Bogalusa Heart study was 0.017–0.020 mm/year (4), our difference of 0.016 mm suggests that our type 1 diabetic subjects had a vascular age 1 year advanced from their chronological age. […] adjustment for HbA1c ablated the case-control difference in IMT, suggesting that the thicker carotid IMT in the subjects with diabetes could be attributed to diabetes-related hyperglycemia.”
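The 'vascular age' remark is just the case-control IMT difference divided by the annual progression rate quoted from the Bogalusa Heart Study; a two-line check using the numbers in the quote:

```python
# Numbers taken from the quote above.
imt_difference_mm = 0.016
progression_mm_per_year = (0.017, 0.020)   # quoted progression range in healthy subjects

for rate in progression_mm_per_year:
    print(f"{imt_difference_mm / rate:.2f} years of excess vascular aging at {rate:.3f} mm/year")
# Both values come out a little under one year, hence "vascular age 1 year advanced".
```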

“In the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) study, progression of IMT over the course of 6 years was faster in subjects with type 1 diabetes, yielding a thicker final IMT in cases (5). There was no difference in IMT at baseline. However, DCCT/EDIC did not image the bulb, which is likely the earliest site of thickening according to the Bogalusa Heart Study […] Our analyses reinforce the importance of imaging the carotid bulb, often the site of earliest detectible subclinical atherosclerosis in youth. The DCCT/EDIC study demonstrated that the intensive treatment group had a slower progression of IMT (5) and that mean HbA1c levels explained most of the differences in IMT progression between treatment groups (12). One longitudinal study of youth found children with type 1 diabetes who had progression of IMT over the course of 2 years had higher HbA1c (13). Our data emphasize the role of diabetes-related hyperglycemia in increasing IMT in youth with type 1 diabetes. […] In summary, our study provides novel evidence that carotid thickness is increased in youth with type 1 diabetes compared with healthy controls and that this difference is not accounted for by traditional cardiovascular risk factors. Better control of diabetes-related hyperglycemia may be needed to reduce future cardiovascular disease.”

iv. Factors Associated With Microalbuminuria in 7,549 Children and Adolescents With Type 1 Diabetes in the T1D Exchange Clinic Registry.

“Elevated urinary albumin excretion is an early sign of diabetic kidney disease (DKD). The American Diabetes Association (ADA) recommends screening for microalbuminuria (MA) annually in people with type 1 diabetes after 10 years of age and 5 years of diabetes duration, with a diagnosis of MA requiring two of three tests to be abnormal (1). Early diagnosis of MA is important because effective treatments exist to limit the progression of DKD (1). However, although reduced rates of MA have been reported over the past few decades in some (2–4) but not all (5,6) studies, it has been suggested that the development of proteinuria has not been prevented but, rather, has been delayed by ∼10 years and that further improvements in care are needed (7).

Limited data exist on the frequency of a clinical diagnosis of MA in the pediatric population with type 1 diabetes in the U.S. Our aim was to use the data from the T1D Exchange clinic registry to assess factors associated with MA in 7,549 children and adolescents with type 1 diabetes.”

“The analysis cohort included 7,549 participants, with mean age of 13.8 ± 3.5 years (range 2 to 19), mean age at type 1 diabetes onset of 6.9 ± 3.9 years, and mean diabetes duration of 6.5 ± 3.7 years; 49% were female. The racial/ethnic distribution was 78% non-Hispanic white, 6% non-Hispanic black, 10% Hispanic, and 5% other. The average of all HbA1c levels (for up to the past 13 years) was 8.4 ± 1.3% (69 ± 13.7 mmol/mol) […]. MA was present in 329 of 7,549 (4.4%) participants, with a higher frequency associated with longer diabetes duration, higher mean glycosylated hemoglobin (HbA1c) level, older age, female sex, higher diastolic blood pressure (BP), and lower BMI […] increasing age [was] mainly associated with an increase in the frequency of MA when HbA1c was ≥9.5% (≥80 mmol/mol). […] MA was uncommon (<2%) among participants with HbA1c <7.5% (<58 mmol/mol). Of those with MA, only 36% were receiving ACEI/ARB treatment. […] Our results provide strong support for prior literature in emphasizing the importance of good glycemic and BP control, particularly as diabetes duration increases, in order to reduce the risk of DKD.

v. Secular Changes in the Age-Specific Prevalence of Diabetes Among U.S. Adults: 1988–2010.

“This study included 22,586 adults sampled in three periods of the National Health and Nutrition Examination Survey (1988–1994, 1999–2004, and 2005–2010). Diabetes was defined as having self-reported diagnosed diabetes or having a fasting plasma glucose level ≥126 mg/dL or HbA1c ≥6.5% (48 mmol/mol). […] The number of adults with diabetes increased by 75% from 1988–1994 to 2005–2010. After adjusting for sex, race/ethnicity, and education level, the prevalence of diabetes increased over the two decades across all age-groups. Younger adults (20–34 years of age) had the lowest absolute increase in diabetes prevalence of 1.0%, followed by middle-aged adults (35–64) at 2.7% and older adults (≥65) at 10.0% (all P < 0.001). Comparing 2005–2010 with 1988–1994, the adjusted prevalence ratios (PRs) by age-group were 2.3, 1.3, and 1.5 for younger, middle-aged, and older adults, respectively (all P < 0.05). After additional adjustment for body mass index (BMI), waist-to-height ratio (WHtR), or waist circumference (WC), the adjusted PR remained statistically significant only for adults ≥65 years of age.

CONCLUSIONS During the past two decades, the prevalence of diabetes increased across all age-groups, but adults ≥65 years of age experienced the largest increase in absolute change. Obesity, as measured by BMI, WHtR, or WC, was strongly associated with the increase in diabetes prevalence, especially in adults <65.”

“The crude prevalence of diabetes changed from 8.4% (95% CI 7.7–9.1%) in 1988–1994 to 12.1% (11.3–13.1%) in 2005–2010, with a relative increase of 44.8% (28.3–61.3%) between the two survey periods. There was less change of prevalence of undiagnosed diabetes (P = 0.053). […] The estimated number (in millions) of adults with diabetes grew from 14.9 (95% CI 13.3–16.4) in 1988–1994 to 26.1 (23.8–28.3) in 2005–2010, resulting in an increase of 11.2 prevalent cases (a 75.5% [52.1–98.9%] increase). Younger adults contributed 5.5% (2.5–8.4%), middle-aged adults contributed 52.9% (43.4–62.3%), and older adults contributed 41.7% (31.9–51.4%) of the increased number of cases. In each survey time period, the number of adults with diabetes increased with age until ∼60–69 years; thereafter, it decreased […] the largest increase of cases occurred in middle-aged and older adults.”
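The headline relative changes can be roughly reproduced from the rounded point estimates in the quote; the small discrepancies against the published 44.8% and 75.5% figures simply reflect rounding of the survey estimates.

```python
# Point estimates taken from the quote above (crude prevalence in %, counts in millions).
prev_1988, prev_2010 = 8.4, 12.1
n_1988, n_2010 = 14.9, 26.1

print(f"relative increase in crude prevalence: {(prev_2010 - prev_1988) / prev_1988:.1%}")
print(f"additional prevalent cases: {n_2010 - n_1988:.1f} million "
      f"({(n_2010 - n_1988) / n_1988:.1%} relative increase)")
```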

vi. The Expression of Inflammatory Genes Is Upregulated in Peripheral Blood of Patients With Type 1 Diabetes.

“Although much effort has been devoted toward discoveries with respect to gene expression profiling in human T1D in the last decade (1–5), previous studies had serious limitations. Microarray-based gene expression profiling is a powerful discovery platform, but the results must be validated by an alternative technique such as real-time RT-PCR. Unfortunately, few of the previous microarray studies on T1D have been followed by a validation study. Furthermore, most previous gene expression studies had small sample sizes (<100 subjects in each group) that are not adequate for the human population given the expectation of large expression variations among individual subjects. Finally, the selection of appropriate reference genes for normalization of quantitative real-time PCR has a major impact on data quality. Most of the previous studies have used only a single reference gene for normalization. Ideally, gene transcription studies using real-time PCR should begin with the selection of an appropriate set of reference genes to obtain more reliable results (6–8).

We have previously carried out extensive microarray analysis and identified >100 genes with significantly differential expression between T1D patients and control subjects. Most of these genes have important immunological functions and were found to be upregulated in autoantibody-positive subjects, suggesting their potential use as predictive markers and involvement in T1D development (2). In this study, real-time RT-PCR was performed to validate a subset of the differentially expressed genes in a large sample set of 928 T1D patients and 922 control subjects. In addition to the verification of the gene expression associated with T1D, we also identified genes with significant expression changes in T1D patients with diabetes complications.

“Of the 18 genes analyzed here, eight genes […] had higher expression and three genes […] had lower expression in T1D patients compared with control subjects, indicating that genes involved in inflammation, immune regulation, and antigen processing and presentation are significantly altered in PBMCs from T1D patients. Furthermore, one adhesion molecule […] and three inflammatory genes mainly expressed by myeloid cells […] were significantly higher in T1D patients with complications (odds ratio [OR] 1.3–2.6, adjusted P value = 0.005–10^−8), especially those patients with neuropathy (OR 4.8–7.9, adjusted P value <0.005). […] These findings suggest that inflammatory mediators secreted mainly by myeloid cells are implicated in T1D and its complications.”
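Since the passage stresses normalizing against a set of reference genes rather than a single one, here is a minimal sketch of the standard relative-quantification calculation with invented cycle-threshold (Ct) values; averaging the reference Cts is equivalent to taking the geometric mean of the reference genes' expression levels, which is one common way of doing multi-reference normalization.

```python
# Illustrative only: made-up Ct values, not data from the paper.
import numpy as np

def relative_expression(ct_target, ct_reference_genes):
    # Averaging reference Cts corresponds to the geometric mean of reference
    # expression levels, since expression is proportional to 2**(-Ct).
    delta_ct = ct_target - np.mean(ct_reference_genes)
    return 2.0 ** -delta_ct

patient = relative_expression(ct_target=24.1, ct_reference_genes=[18.2, 19.0, 18.6])
control = relative_expression(ct_target=25.3, ct_reference_genes=[18.4, 18.9, 18.7])

print(f"fold change, patient vs control: {patient / control:.2f}")   # > 1 means upregulated
```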

vii. Overexpression of Hemopexin in the Diabetic Eye – A new pathogenic candidate for diabetic macular edema.

“Diabetic retinopathy remains the leading cause of preventable blindness among working-age individuals in developed countries (1). Whereas proliferative diabetic retinopathy (PDR) is the commonest sight-threatening lesion in type 1 diabetes, diabetic macular edema (DME) is the primary cause of poor visual acuity in type 2 diabetes. Because of the high prevalence of type 2 diabetes, DME is the main cause of visual impairment in diabetic patients (2). When clinically significant DME appears, laser photocoagulation is currently indicated. However, the optimal period of laser treatment is frequently passed and, moreover, is not uniformly successful in halting visual decline. In addition, photocoagulation is not without side effects, with visual field loss and impairment of either adaptation or color vision being the most frequent. Intravitreal corticosteroids have been successfully used in eyes with persistent DME and loss of vision after the failure of conventional treatment. However, reinjections are commonly needed, and there are substantial adverse effects such as infection, glaucoma, and cataract formation. Intravitreal anti–vascular endothelial growth factor (VEGF) agents have also found an improvement of visual acuity and decrease of retinal thickness in DME, even in nonresponders to conventional treatment (3). However, apart from local side effects such as endophthalmitis and retinal detachment, the response to treatment of DME by VEGF blockade is not prolonged and is subject to significant variability. For all these reasons, new pharmacological treatments based on the understanding of the pathophysiological mechanisms of DME are needed.”

“Vascular leakage due to the breakdown of the blood-retinal barrier (BRB) is the main event involved in the pathogenesis of DME (4). However, little is known regarding the molecules primarily involved in this event. By means of a proteomic analysis, we have found that hemopexin was significantly increased in the vitreous fluid of patients with DME in comparison with PDR and nondiabetic control subjects (5). Hemopexin is the best characterized permeability factor in steroid-sensitive nephrotic syndrome (6,7). […] T cell–associated cytokines like tumor necrosis factor-α are able to enhance hemopexin production in mesangial cells in vitro, and this effect is prevented by corticosteroids (8). However, whether hemopexin also acts as a permeability factor in the BRB and its potential response to corticosteroids remains to be elucidated. […] the aims of the current study were 1) to compare hemopexin and hemopexin receptor (LDL receptor–related protein [LRP1]) levels in retina and in vitreous fluid from diabetic and nondiabetic patients, 2) to evaluate the effect of hemopexin on the permeability of outer and inner BRB in cell cultures, and 3) to determine whether anti-hemopexin antibodies and dexamethasone were able to prevent an eventual hemopexin-induced hyperpermeability.”

“In the current study, we […] confirmed our previous results obtained by a proteomic approach showing that hemopexin is higher in the vitreous fluid of diabetic patients with DME in comparison with diabetic patients with PDR and nondiabetic subjects. In addition, we provide the first evidence that hemopexin is overexpressed in the diabetic eye. Furthermore, we have shown that hemopexin leads to the disruption of RPE [retinal pigment epithelium] cells, thus increasing permeability, and that this effect is prevented by dexamethasone. […] Our findings suggest that hemopexin can be considered a new candidate in the pathogenesis of DME and a new therapeutic target.”

viii. Relationship Between Overweight and Obesity With Hospitalization for Heart Failure in 20,985 Patients With Type 1 Diabetes.

“We studied patients with type 1 diabetes included in the Swedish National Diabetes Registry during 1998–2003, and they were followed up until hospitalization for HF, death, or 31 December 2009. Cox regression was used to estimate relative risks. […] Type 1 diabetes is defined in the NDR as receiving treatment with insulin only and onset at age 30 years or younger. These characteristics previously have been validated as accurate in 97% of cases (11). […] In a sample of 20,985 type 1 diabetic patients (mean age, 38.6 years; mean BMI, 25.0 kg/m²), 635 patients […] (3%) were admitted for a primary or secondary diagnosis of HF during a median follow-up of 9 years, with an incidence of 3.38 events per 1,000 patient-years (95% CI, 3.12–3.65). […] Cox regression adjusting for age, sex, diabetes duration, smoking, HbA1c, systolic and diastolic blood pressures, and baseline and intercurrent comorbidities (including myocardial infarction) showed a significant relationship between BMI and hospitalization for HF (P < 0.0001). In reference to patients in the BMI 20–25 kg/m² category, hazard ratios (HRs) were as follows: HR 1.22 (95% CI, 0.83–1.78) for BMI <20 kg/m²; HR 0.94 (95% CI, 0.78–1.12) for BMI 25–30 kg/m²; HR 1.55 (95% CI, 1.20–1.99) for BMI 30–35 kg/m²; and HR 2.90 (95% CI, 1.92–4.37) for BMI ≥35 kg/m².

CONCLUSIONS Obesity, particularly severe obesity, is strongly associated with hospitalization for HF in patients with type 1 diabetes, whereas no similar relation was present for overweight or low body weight.”
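
As a rough consistency check on the reported incidence rate (my own arithmetic, not from the paper, and using the median follow-up as if it were the mean exposure), a small Python sketch:

# Approximate reconstruction of the reported HF incidence in the Swedish cohort.
patients = 20985
median_followup_years = 9      # median follow-up reported in the abstract
events = 635                   # hospitalizations for HF

# Approximating total exposure by patients * median follow-up is a simplification;
# the authors used actual person-time, so the numbers will not match exactly.
person_years = patients * median_followup_years
incidence_per_1000_py = 1000 * events / person_years
print(round(incidence_per_1000_py, 2))   # ~3.36, close to the reported 3.38 per 1,000 patient-years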

“In contrast to type 2 diabetes, obesity is not implicated as a causal factor in type 1 diabetes and maintaining normal weight is accordingly less of a focus in the clinical care of patients with type 1 diabetes. Because most patients with type 2 diabetes are overweight or obese and glucose levels can normalize in some patients after weight reduction, this is usually an important part of integrated diabetes care. Our findings indicate that given the substantial risk of cardiovascular disease in type 1 diabetic patients, it is crucial for clinicians to also address weight issues in type 1 diabetes. Because many patients are normal weight when diabetes is diagnosed, careful monitoring of weight with a view to maintaining normal weight is probably more essential than previously thought. Although overweight was not associated with an increased risk of HF, higher BMI levels probably increase the risk of future obesity. Our finding that 71% of patients with BMI >35 kg/m² were women is potentially important, although this should be tested in other populations given that it could be a random finding. If it is not random, then, given that the proportion was much higher than the corresponding proportion in the entire cohort (45%), it may indicate that severe obesity is a greater problem in women than in men with type 1 diabetes.”

November 30, 2017 Posted by | Cardiology, Diabetes, Genetics, Nephrology, Neurology, Ophthalmology, Pharmacology, Studies | Leave a comment

Words

Most of these words are words which I encountered while reading the Jim Butcher books White Night, Small Favour, Turn Coat, and Changes.

Propitiate. Misericord. Skirling. Idiom. Cadge. Hapless. Roil. Kibble. Viridian. Kine. Shill. Steeple. Décolletage. Kukri. Rondure. Wee. Contrail. Servitor. Pastern. Fetlock.

Coterie. Crochet. Fibrillate. Knead. Divot. Avail. Tamale. Abalone. Cupola. Tuyere. Simulacrum. Bristle. Guff. Shimmy. Prow. Warble. Cannery. Twirl. Winch. Wheelhouse.

Teriyaki. Widdershins. Kibble. Slobber. Surcease. Amble. Invocation. Gasket. Chorale. Rivulet. Choker. Grimoire. Caduceus. Fussbudget. Pate. Scrunchie. Shamble. Ficus. Deposition. Grue.

Aliquot. Nape. Emanation. Atavistic. Menhir. Scrimshaw. Burble. Pauldron. Ornate. Stolid. Wry. Stamen. Ductwork. Speleothem. Philtrum. Hassock. Incipit. Planish. Rheology. Sinter.

 

November 29, 2017 Posted by | Books, Language | Leave a comment

Radioactivity

A few quotes from the book and some related links below. Here’s my very short goodreads review of the book.

Quotes:

“The main naturally occurring radionuclides of primordial origin are uranium-235, uranium-238, thorium-232, their decay products, and potassium-40. The average abundance of uranium, thorium, and potassium in the terrestrial crust is 2.6 parts per million, 10 parts per million, and 1% respectively. Uranium and thorium produce other radionuclides via neutron- and alpha-induced reactions, particularly deeply underground, where uranium and thorium have a high concentration. […] A weak source of natural radioactivity derives from nuclear reactions of primary and secondary cosmic rays with the atmosphere and the lithosphere, respectively. […] Accretion of extraterrestrial material, intensively exposed to cosmic rays in space, represents a minute contribution to the total inventory of radionuclides in the terrestrial environment. […] Natural radioactivity is [thus] mainly produced by uranium, thorium, and potassium. The total heat content of the Earth, which derives from this radioactivity, is 12.6 × 10²⁴ MJ (one megajoule = 1 million joules), with the crust’s heat content standing at 5.4 × 10²¹ MJ. For comparison, this is significantly more than the 6.4 × 10¹³ MJ globally consumed for electricity generation during 2011. This energy is dissipated, either gradually or abruptly, towards the external layers of the planet, but only a small fraction can be utilized. The amount of energy available depends on the Earth’s geological dynamics, which regulates the transfer of heat to the surface of our planet. The total power dissipated by the Earth is 42 TW (one TW = 1 trillion watts): 8 TW from the crust, 32.3 TW from the mantle, 1.7 TW from the core. This amount of power is small compared to the 174,000 TW arriving at the Earth from the Sun.”
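
To get a feel for the orders of magnitude involved, a small Python sketch using only the figures quoted above (the ratios are my own arithmetic, not the book's):

# Ratios implied by the quoted figures.
earth_heat_content_MJ = 12.6e24   # total heat content of the Earth
crust_heat_content_MJ = 5.4e21
electricity_2011_MJ = 6.4e13      # global electricity generation, 2011

geothermal_power_TW = 42.0        # total power dissipated by the Earth
solar_input_TW = 174_000.0        # power arriving from the Sun

print(f"{earth_heat_content_MJ / electricity_2011_MJ:.1e}")  # ~2.0e11 years' worth of 2011 electricity generation
print(f"{crust_heat_content_MJ / electricity_2011_MJ:.1e}")  # ~8.4e7 for the crust alone
print(f"{solar_input_TW / geothermal_power_TW:.0f}")         # ~4143: solar input dwarfs the geothermal output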

“Charged particles such as protons, beta and alpha particles, or heavier ions that bombard human tissue dissipate their energy locally, interacting with the atoms via the electromagnetic force. This interaction ejects electrons from the atoms, creating a track of electron–ion pairs, or ionization track. The energy that ions lose per unit path, as they move through matter, increases with the square of their charge and decreases linearly with their energy […] The energy deposited in the tissues and organs of your body by ionizing radiation is defined as the absorbed dose and is measured in gray. The dose of one gray corresponds to the energy of one joule deposited in one kilogram of tissue. The biological damage wrought by a given amount of energy deposited depends on the kind of ionizing radiation involved. The equivalent dose, measured in sievert, is the product of the dose and a factor w related to the effective damage induced in living matter by the deposit of energy by specific rays or particles. For X-rays, gamma rays, and beta particles, a gray corresponds to a sievert; for neutrons, a dose of one gray corresponds to an equivalent dose of 5 to 20 sievert, and the factor w is equal to 5–20 (depending on the neutron energy). For protons and alpha particles, w is equal to 5 and 20, respectively. There is also another weighting factor taking into account the radiosensitivity of different organs and tissues of the body, to evaluate the so-called effective dose. Sometimes the dose is still quoted in rem, the old unit, with 100 rem corresponding to one sievert.”
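
The gray-to-sievert conversion described above is just a multiplication by the radiation weighting factor w. A minimal Python sketch using the w values quoted in the passage (an illustration only, not a dosimetry tool; the neutron factor is energy-dependent, so it is kept as a range):

# Equivalent dose (Sv) = absorbed dose (Gy) * radiation weighting factor w.
W_FACTORS = {
    "x_or_gamma_rays": 1,
    "beta_particles": 1,
    "protons": 5,
    "alpha_particles": 20,
    "neutrons": (5, 20),   # depends on neutron energy
}

def equivalent_dose_sv(absorbed_dose_gy, radiation):
    w = W_FACTORS[radiation]
    if isinstance(w, tuple):   # neutrons: return the possible range
        return tuple(absorbed_dose_gy * factor for factor in w)
    return absorbed_dose_gy * w

print(equivalent_dose_sv(0.001, "alpha_particles"))  # 1 mGy of alphas   -> 0.02 Sv
print(equivalent_dose_sv(0.001, "neutrons"))         # 1 mGy of neutrons -> (0.005, 0.02) Sv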

“Neutrons emitted during fission reactions have a relatively high velocity. When still in Rome, Fermi had discovered that fast neutrons needed to be slowed down to increase the probability of their reaction with uranium. The fission reaction occurs with uranium-235. Uranium-238, the most common isotope of the element, merely absorbs the slow neutrons. Neutrons slow down when they are scattered by nuclei with a similar mass. The process is analogous to the interaction between two billiard balls in a head-on collision, in which the incoming ball stops and transfers all its kinetic energy to the second one. ‘Moderators’, such as graphite and water, can be used to slow neutrons down. […] When Fermi calculated whether a chain reaction could be sustained in a homogeneous mixture of uranium and graphite, he got a negative answer. That was because most neutrons produced by the fission of uranium-235 were absorbed by uranium-238 before inducing further fissions. The right approach, as suggested by Szilárd, was to use separated blocks of uranium and graphite. Fast neutrons produced by the splitting of uranium-235 in the uranium block would slow down, in the graphite block, and then produce fission again in the next uranium block. […] A minimum mass – the critical mass – is required to sustain the chain reaction; furthermore, the material must have a certain geometry. The fissile nuclides, capable of sustaining a chain reaction of nuclear fission with low-energy neutrons, are uranium-235 […], uranium-233, and plutonium-239. The last two don’t occur in nature but can be produced artificially by irradiating with neutrons thorium-232 and uranium-238, respectively – via a reaction called neutron capture. Uranium-238 (99.27%) is fissionable, but not fissile. In a nuclear weapon, the chain reaction occurs very rapidly, releasing the energy in a burst.”
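
The billiard-ball analogy can be made quantitative: in an elastic head-on collision a neutron of mass m transfers at most a fraction 4mM/(m + M)² of its kinetic energy to a nucleus of mass M. A small Python sketch (standard collision kinematics, not worked out in the book) shows why light nuclei such as hydrogen and carbon make good moderators while heavy nuclei do not:

# Maximum fractional energy transfer from a neutron (mass ~1 u) to a nucleus of
# mass A (in atomic mass units) in a single elastic head-on collision.
def max_energy_transfer_fraction(A, m=1.0):
    return 4 * m * A / (m + A) ** 2

for name, A in [("hydrogen-1", 1), ("deuterium", 2), ("carbon-12", 12), ("uranium-238", 238)]:
    print(f"{name:12s} {max_energy_transfer_fraction(A):.3f}")
# hydrogen-1   1.000  (a proton can stop a neutron in one collision)
# deuterium    0.889
# carbon-12    0.284  (graphite needs many collisions, but it works)
# uranium-238  0.017  (heavy nuclei barely slow neutrons at all)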

“The basic components of nuclear power reactors, fuel, moderator, and control rods, are the same as in the first system built by Fermi, but the design of today’s reactors includes additional components such as a pressure vessel, containing the reactor core and the moderator, a containment vessel, and redundant and diverse safety systems. Recent technological advances in material developments, electronics, and information technology have further improved their reliability and performance. […] The moderator to slow down fast neutrons is sometimes still the graphite used by Fermi, but water, including ‘heavy water’ – in which the water molecule has a deuterium atom instead of a hydrogen atom – is more widely used. Control rods contain a neutron-absorbing material, such as boron or a combination of indium, silver, and cadmium. To remove the heat generated in the reactor core, a coolant – either a liquid or a gas – is circulating through the reactor core, transferring the heat to a heat exchanger or directly to a turbine. Water can be used as both coolant and moderator. In the case of boiling water reactors (BWRs), the steam is produced in the pressure vessel. In the case of pressurized water reactors (PWRs), the steam generator, which is the secondary side of the heat exchanger, uses the heat produced by the nuclear reactor to make steam for the turbines. The containment vessel is a one-metre-thick concrete and steel structure that shields the reactor.”

“Nuclear energy contributed 2,518 TWh of the world’s electricity in 2011, about 14% of the global supply. As of February 2012, there are 435 nuclear power plants operating in 31 countries worldwide, corresponding to a total installed capacity of 368,267 MW (electrical). There are 63 power plants under construction in 13 countries, with a capacity of 61,032 MW (electrical).”

“Since the first nuclear fusion, more than 60 years ago, many have argued that we need at least 30 years to develop a working fusion reactor, and this figure has stayed the same throughout those years.”

“[I]onizing radiation is […] used to improve many properties of food and other agricultural products. For example, gamma rays and electron beams are used to sterilize seeds, flour, and spices. They can also inhibit sprouting and destroy pathogenic bacteria in meat and fish, increasing the shelf life of food. […] More than 60 countries allow the irradiation of more than 50 kinds of foodstuffs, with 500,000 tons of food irradiated every year. About 200 cobalt-60 sources and more than 10 electron accelerators are dedicated to food irradiation worldwide. […] With the help of radiation, breeders can increase genetic diversity to make the selection process faster. The spontaneous mutation rate (number of mutations per gene, for each generation) is in the range 10⁻⁸–10⁻⁵. Radiation can increase this mutation rate to 10⁻⁵–10⁻². […] Long-lived cosmogenic radionuclides provide unique methods to evaluate the ‘age’ of groundwaters, defined as the mean subsurface residence time after the isolation of the water from the atmosphere. […] Scientists can date groundwater more than a million years old, through chlorine-36, produced in the atmosphere by cosmic-ray reactions with argon.”
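
Dating groundwater with a decaying cosmogenic nuclide follows directly from the exponential decay law: if a fraction f of the initial chlorine-36 remains, the age is t = (t½/ln 2)·ln(1/f). A minimal Python sketch (the chlorine-36 half-life of roughly 301,000 years is a literature value I am supplying, not a figure from the book):

import math

def decay_age_years(remaining_fraction, half_life_years):
    """Age implied by the fraction of the original radionuclide still present."""
    return half_life_years / math.log(2) * math.log(1.0 / remaining_fraction)

CL36_HALF_LIFE_YEARS = 301_000   # approximate literature value, not from the book

# Water in which only 10% of the initial chlorine-36 remains:
print(round(decay_age_years(0.10, CL36_HALF_LIFE_YEARS)))   # ~1,000,000 years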

“Radionuclide imaging was developed in the 1950s using special systems to detect the emitted gamma rays. The gamma-ray detectors, called gamma cameras, use flat crystal planes, coupled to photomultiplier tubes, which send the digitized signals to a computer for image reconstruction. Images show the distribution of the radioactive tracer in the organs and tissues of interest. This method is based on the introduction of low-level radioactive chemicals into the body. […] More than 100 diagnostic tests based on radiopharmaceuticals are used to examine bones and organs such as lungs, intestines, thyroids, kidneys, the liver, and gallbladder. They exploit the fact that our organs preferentially absorb different chemical compounds. […] Many radiopharmaceuticals are based on technetium-99m (an excited state of technetium-99 – the ‘m’ stands for ‘metastable’ […]). This radionuclide is used for the imaging and functional examination of the heart, brain, thyroid, liver, and other organs. Technetium-99m is extracted from molybdenum-99, which has a much longer half-life and is therefore more transportable. It is used in 80% of the procedures, amounting to about 40,000 per day, carried out in nuclear medicine. Other radiopharmaceuticals include short-lived gamma-emitters such as cobalt-57, cobalt-58, gallium-67, indium-111, iodine-123, and thallium-201. […] Methods routinely used in medicine, such as X-ray radiography and CAT, are increasingly used in industrial applications, particularly in non-destructive testing of containers, pipes, and walls, to locate defects in welds and other critical parts of the structure.”

“Today, cancer treatment with radiation is generally based on the use of external radiation beams that can target the tumour in the body. Cancer cells are particularly sensitive to damage by ionizing radiation and their growth can be controlled or, in some cases, stopped. High-energy X-rays produced by a linear accelerator […] are used in most cancer therapy centres, replacing the gamma rays produced from cobalt-60. The LINAC produces photons of variable energy bombarding a target with a beam of electrons accelerated by microwaves. The beam of photons can be modified to conform to the shape of the tumour, which is irradiated from different angles. The main problem with X-rays and gamma rays is that the dose they deposit in the human tissue decreases exponentially with depth. A considerable fraction of the dose is delivered to the surrounding tissues before the radiation hits the tumour, increasing the risk of secondary tumours. Hence, deep-seated tumours must be bombarded from many directions to receive the right dose, while minimizing the unwanted dose to the healthy tissues. […] The problem of delivering the needed dose to a deep tumour with high precision can be solved using collimated beams of high-energy ions, such as protons and carbon. […] Contrary to X-rays and gamma rays, all ions of a given energy have a certain range, delivering most of the dose after they have slowed down, just before stopping. The ion energy can be tuned to deliver most of the dose to the tumour, minimizing the impact on healthy tissues. The ion beam, which does not broaden during the penetration, can follow the shape of the tumour with millimetre precision. Ions with higher atomic number, such as carbon, have a stronger biological effect on the tumour cells, so the dose can be reduced. Ion therapy facilities are [however] still very expensive – in the range of hundreds of millions of pounds – and difficult to operate.”

“About 50 million years ago, a global cooling trend took our planet from the tropical conditions at the beginning of the Tertiary to the ice ages of the Quaternary, when the Arctic ice cap developed. The temperature decrease was accompanied by a decrease in atmospheric CO2 from 2,000 to 300 parts per million. The cooling was probably caused by a reduced greenhouse effect and also by changes in ocean circulation due to plate tectonics. The drop in temperature was not constant as there were some brief periods of sudden warming. Ocean deep-water temperatures dropped from 12°C, 50 million years ago, to 6°C, 30 million years ago, according to archives in deep-sea sediments (today, deep-sea waters are about 2°C). […] During the last 2 million years, the mean duration of the glacial periods was about 26,000 years, while that of the warm periods – interglacials – was about 27,000 years. Between 2.6 and 1.1 million years ago, a full cycle of glacial advance and retreat lasted about 41,000 years. During the past 1.2 million years, this cycle has lasted 100,000 years. Stable and radioactive isotopes play a crucial role in the reconstruction of the climatic history of our planet”.

Links:

CUORE (Cryogenic Underground Observatory for Rare Events).
Borexino.
Lawrence Livermore National Laboratory.
Marie Curie. Pierre Curie. Henri Becquerel. Wilhelm Röntgen. Joseph Thomson. Ernest Rutherford. Hans Geiger. Ernest Marsden. Niels Bohr.
Ruhmkorff coil.
Electroscope.
Pitchblende (uraninite).
Mache.
Polonium. Becquerel.
Radium.
Alpha decay. Beta decay. Gamma radiation.
Plum pudding model.
Spinthariscope.
Robert Boyle. John Dalton. Dmitri Mendeleev. Frederick Soddy. James Chadwick. Enrico Fermi. Lise Meitner. Otto Frisch.
Periodic Table.
Exponential decay. Decay chain.
Positron.
Particle accelerator. Cockcroft-Walton generator. Van de Graaff generator.
Barn (unit).
Nuclear fission.
Manhattan Project.
Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Electron volt.
Thermoluminescent dosimeter.
Silicon diode detector.
Enhanced geothermal system.
Chicago Pile Number 1. Experimental Breeder Reactor 1. Obninsk Nuclear Power Plant.
Natural nuclear fission reactor.
Gas-cooled reactor.
Generation I reactors. Generation II reactor. Generation III reactor. Generation IV reactor.
Nuclear fuel cycle.
Accelerator-driven subcritical reactor.
Thorium-based nuclear power.
Small, sealed, transportable, autonomous reactor.
Fusion power. P-p (proton-proton) chain reaction. CNO cycle. Tokamak. ITER (International Thermonuclear Experimental Reactor).
Sterile insect technique.
Phase-contrast X-ray imaging. Computed tomography (CT). SPECT (Single-photon emission computed tomography). PET (positron emission tomography).
Boron neutron capture therapy.
Radiocarbon dating. Bomb pulse.
Radioactive tracer.
Radithor. The Radiendocrinator.
Radioisotope heater unit. Radioisotope thermoelectric generator. Seebeck effect.
Accelerator mass spectrometry.
Atomic bombings of Hiroshima and Nagasaki. Treaty on the Non-Proliferation of Nuclear Weapons. IAEA.
Nuclear terrorism.
Swiss light source. Synchrotron.
Chronology of the universe. Stellar evolution. S-process. R-process. Red giant. Supernova. White dwarf.
Victor Hess. Domenico Pacini. Cosmic ray.
Allende meteorite.
Age of the Earth. History of Earth. Geomagnetic reversal. Uranium-lead dating. Clair Cameron Patterson.
Glacials and interglacials.
Taung child. Lucy. Ardi. Ardipithecus kadabba. Acheulean tools. Java Man. Ötzi.
Argon-argon dating. Fission track dating.

November 28, 2017 Posted by | Archaeology, Astronomy, Biology, Books, Cancer/oncology, Chemistry, Engineering, Geology, History, Medicine, Physics | Leave a comment

Isotopes

A decent book. Below some quotes and links.

“[A]ll mass spectrometers have three essential components — an ion source, a mass filter, and some sort of detector […] Mass spectrometers need to achieve high vacuum to allow the uninterrupted transmission of ions through the instrument. However, even high-vacuum systems contain residual gas molecules which can impede the passage of ions. Even at very high vacuum there will still be residual gas molecules in the vacuum system that present potential obstacles to the ion beam. Ions that collide with residual gas molecules lose energy and will appear at the detector at slightly lower mass than expected. This tailing to lower mass is minimized by improving the vacuum as much as possible, but it cannot be avoided entirely. The ability to resolve a small isotope peak adjacent to a large peak is called ‘abundance sensitivity’. A single magnetic sector TIMS has abundance sensitivity of about 1 ppm per mass unit at uranium masses. So, at mass 234, 1 ion in 1,000,000 will actually be ²³⁵U not ²³⁴U, and this will limit our ability to quantify the rare ²³⁴U isotope. […] AMS [accelerator mass spectrometry] instruments use very high voltages to achieve high abundance sensitivity. […] As I write this chapter, the human population of the world has recently exceeded seven billion. […] one carbon atom in 10¹² is mass 14. So, detecting ¹⁴C is far more difficult than identifying a single person on Earth, and somewhat comparable to identifying an individual leaf in the Amazon rain forest. Such is the power of isotope ratio mass spectrometry.”

“¹⁴C is produced in the Earth’s atmosphere by the interaction between nitrogen and cosmic ray neutrons that releases a free proton turning ¹⁴₇N into ¹⁴₆C in a process that we call an ‘n-p’ reaction […] Because the process is driven by cosmic ray bombardment, we call ¹⁴C a ‘cosmogenic’ isotope. The half-life of ¹⁴C is about 5,000 years, so we know that all the ¹⁴C on Earth is either cosmogenic or has been created by mankind through nuclear reactors and bombs — no ‘primordial’ ¹⁴C remains because any that originally existed has long since decayed. ¹⁴C is not the only cosmogenic isotope; ¹⁶O in the atmosphere interacts with cosmic radiation to produce the isotope ¹⁰Be (beryllium). […] The process by which a high energy cosmic ray particle removes several nucleons is called ‘spallation’. ¹⁰Be production from ¹⁶O is not restricted to the atmosphere but also occurs when cosmic rays impact rock surfaces. […] when cosmic rays hit a rock surface they don’t bounce off but penetrate the top 2 or 3 metres (m) — the actual ‘attenuation’ depth will vary for particles of different energy. Most of the Earth’s crust is made of silicate minerals based on bonds between oxygen and silicon. So, the same spallation process that produces ¹⁰Be in the atmosphere also occurs in rock surfaces. […] If we know the flux of cosmic rays impacting a surface, the rate of production of the cosmogenic isotopes with depth below the rock surface, and the rate of radioactive decay, it should be possible to convert the number of cosmogenic atoms into an exposure age. […] Rocks on Earth which are shielded from much of the cosmic radiation have much lower levels of isotopes like ¹⁰Be than have meteorites which, before they arrive on Earth, are exposed to the full force of cosmic radiation. […] polar scientists have used cores drilled through ice sheets in Antarctica and Greenland to compare ¹⁰Be at different depths and thereby reconstruct ¹⁰Be production through time. The ¹⁴C and ¹⁰Be records are closely correlated indicating the common response to changes in the cosmic ray flux.”

“[O]nce we have credible cosmogenic isotope production rates, […] there are two classes of applications, which we can call ‘exposure’ and ‘burial’ methodologies. Exposure studies simply measure the accumulation of the cosmogenic nuclide. Such studies are simplest when the cosmogenic nuclide is a stable isotope like ³He or ²¹Ne. These will just accumulate continuously as the sample is exposed to cosmic radiation. Slightly more complicated are cosmogenic isotopes that are radioactive […]. These isotopes accumulate through exposure but will also be destroyed by radioactive decay. Eventually, the isotopes achieve the condition known as ‘secular equilibrium’ where production and decay are balanced and no chronological information can be extracted. Secular equilibrium is achieved after three to four half-lives […] Imagine a boulder that has been transported from its place of origin to another place within a glacier — what we call a glacial erratic. While the boulder was deeply covered in ice, it would not have been exposed to cosmic radiation. Its cosmogenic isotopes will only have accumulated since the ice melted. So a cosmogenic isotope exposure age tells us the date at which the glacier retreated, and, by examining multiple erratics from different locations along the course of the glacier, allows us to construct a retreat history for the de-glaciation. […] Burial methodologies using cosmogenic isotopes work in situations where a rock was previously exposed to cosmic rays but is now located in a situation where it is shielded.”
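
For a radioactive cosmogenic nuclide, the concentration in a continuously exposed surface builds up as N(t) = (P/λ)(1 − e^(−λt)), where P is the production rate and λ the decay constant; inverting this for t gives an exposure age. A minimal Python sketch of that inversion (my own illustration with made-up numbers; real studies must also correct for erosion, partial shielding, and the scaling of production rates with altitude and latitude):

import math

def exposure_age_years(atoms_per_gram, production_rate_per_gram_year, half_life_years):
    """Invert N(t) = (P/lam)*(1 - exp(-lam*t)) for t, ignoring erosion and burial."""
    lam = math.log(2) / half_life_years
    n_saturation = production_rate_per_gram_year / lam
    if atoms_per_gram >= n_saturation:
        raise ValueError("at or above secular equilibrium: no age information left")
    return -math.log(1.0 - atoms_per_gram / n_saturation) / lam

# Purely illustrative numbers (the concentration and production rate are hypothetical;
# the 10Be half-life of ~1.39 million years is a literature value, not from the book):
print(round(exposure_age_years(atoms_per_gram=5e4,
                               production_rate_per_gram_year=5.0,
                               half_life_years=1.39e6)))   # ~10,000 years of exposure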

“Cosmogenic isotopes are also being used extensively to recreate the seismic histories of tectonically active areas. Earthquakes occur when geological faults give way and rock masses move. A major earthquake is likely to expose new rock to the Earth’s surface. If the field geologist can identify rocks in a fault zone that (s)he is confident were brought to the surface in an earthquake, then a cosmogenic isotope exposure age would date the fault — providing, of course, that subsequent erosion can be ruled out or quantified. Precarious rocks are rock outcrops that could reasonably be expected to topple if subjected to a significant earthquake. Dating the exposed surface of precarious rocks with cosmogenic isotopes can reveal the amount of time that has elapsed since the last earthquake of a magnitude that would have toppled the rock. Constructing records of seismic history is not merely of academic interest; some of the world’s seismically active areas are also highly populated and developed.”

“One aspect of the natural decay series that acts in favour of the preservation of accurate age information is the fact that most of the intermediate isotopes are short-lived. For example, in both the U series the radon (Rn) isotopes, which might be expected to diffuse readily out of a mineral, have half-lives of only seconds or days, too short to allow significant losses. Some decay series isotopes though do have significantly long half-lives which offer the potential to be geochronometers in their own right. […] These techniques depend on the tendency of natural decay series to evolve towards a state of ‘secular equilibrium’ in which the activity of all species in the decay series is equal. […] at secular equilibrium, isotopes with long half-lives (i.e. small decay constants) will have large numbers of atoms whereas short-lived isotopes (high decay constants) will only constitute a relatively small number of atoms. Since decay constants vary by several orders of magnitude, so will the numbers of atoms of each isotope in the equilibrium decay series. […] Geochronological applications of natural decay series depend upon some process disrupting the natural decay series to introduce either a deficiency or an excess of an isotope in the series. The decay series will then gradually return to secular equilibrium and the geochronometer relies on measuring the extent to which equilibrium has been approached.”
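
The point that atom numbers scale inversely with decay constants can be made concrete: at secular equilibrium every member of the chain has the same activity λN, so N is simply proportional to the half-life. A small Python sketch for the uranium-238 series (the half-lives are approximate literature values supplied by me, not quoted in the book):

# At secular equilibrium, activity lam*N is equal along the chain, so the number of
# atoms of each member is proportional to its half-life.
HALF_LIVES_YEARS = {
    "U-238": 4.47e9,
    "U-234": 2.46e5,
    "Th-230": 7.54e4,
    "Ra-226": 1.60e3,
    "Rn-222": 3.8 / 365.25,   # 3.8 days
}

parent_half_life = HALF_LIVES_YEARS["U-238"]
for nuclide, t_half in HALF_LIVES_YEARS.items():
    print(f"{nuclide:7s} {t_half / parent_half_life:.2e} atoms per atom of U-238")
# Rn-222 comes out at ~2.3e-12 atoms per atom of U-238: the same activity, but a trivial abundance.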

“The ‘ring of fire’ volcanoes around the margin of the Pacific Ocean are a manifestation of subduction in which the oldest parts of the Pacific Ocean crust are being returned to the mantle below. The oldest parts of the Pacific Ocean crust are about 150 million years (Ma) old, with anything older having already disappeared into the mantle via subduction zones. The Atlantic Ocean doesn’t have a ring of fire because it is a relatively young ocean which started to form about 60 Ma ago, and its oldest rocks are not yet ready to form subduction zones. Thus, while continental crust persists for billions of years, oceanic crust is a relatively transient (in terms of geological time) phenomenon at the Earth’s surface.”

“Mantle rocks typically contain minerals such as olivine, pyroxene, spinel, and garnet. Unlike say ice, which melts to form water, mixtures of minerals do not melt in the proportions in which they occur in the rock. Rather, they undergo partial melting in which some minerals […] melt preferentially leaving a solid residue enriched in refractory minerals […]. We know this from experimentally melting mantle-like rocks in the laboratory, but also because the basalts produced by melting of the mantle are closer in composition to Ca-rich (clino-) pyroxene than to the olivine-rich rocks that dominate the solid pieces (or xenoliths) of mantle that are sometimes transferred to the surface by certain types of volcanic eruptions. […] Thirty years ago geologists fiercely debated whether the mantle was homogeneous or heterogeneous; mantle isotope geochemistry hasn’t yet elucidated all the details but it has put to rest the initial conundrum; Earth’s mantle is compositionally heterogeneous.”

Links:

Frederick Soddy.
Rutherford–Bohr model.
Isotopes of hydrogen.
Radioactive decay. Types of decay. Alpha decay. Beta decay. Electron capture decay. Branching fraction. Gamma radiation. Spontaneous fission.
Promethium.
Lanthanides.
Radiocarbon dating.
Hessel de Vries.
Dendrochronology.
Suess effect.
Bomb pulse.
Delta notation (non-wiki link).
Isotopic fractionation.
C3 carbon fixation. C4 carbon fixation.
Nitrogen-15 tracing.
Isotopes of strontium. Strontium isotope analysis.
Ötzi.
Mass spectrometry.
Geiger counter.
Townsend avalanche.
Gas proportional counter.
Scintillation detector.
Liquid scintillation spectrometry. Photomultiplier tube.
Dynode.
Thallium-doped sodium iodide detectors. Semiconductor-based detectors.
Isotope separation (-enrichment).
Doubly labeled water.
Urea breath test.
Radiation oncology.
Brachytherapy.
Targeted radionuclide therapy.
Iodine-131.
MIBG scan.
Single-photon emission computed tomography.
Positron emission tomography.
Inductively coupled plasma (ICP) mass spectrometry.
Secondary ion mass spectrometry.
Faraday cup (-detector).
δ¹⁸O.
Stadials and interstadials. Oxygen isotope ratio cycle.
Insolation.
Gain and phase model.
Milankovitch cycles.
Perihelion and aphelion. Precession.
Equilibrium Clumped-Isotope Effects in Doubly Substituted Isotopologues of Ethane (non-wiki link).
Age of the Earth.
Uranium–lead dating.
Geochronology.
Cretaceous–Paleogene boundary.
Argon-argon dating.
Nuclear chain reaction. Critical mass.
Fukushima Daiichi nuclear disaster.
Natural nuclear fission reactor.
Continental crust. Oceanic crust. Basalt.
Core–mantle boundary.
Chondrite.
Ocean Island Basalt.
Isochron dating.

November 23, 2017 Posted by | Biology, Books, Botany, Chemistry, Geology, Medicine, Physics | Leave a comment

Promoting the unknown…

November 19, 2017 Posted by | Music | Leave a comment

Materials… (II)

Some more quotes and links:

“Whether materials are stiff and strong, or hard or weak, is the territory of mechanics. […] the 19th century continuum theory of linear elasticity is still the basis of much of modern solid mechanics. A stiff material is one which does not deform much when a force acts on it. Stiffness is quite distinct from strength. A material may be stiff but weak, like a piece of dry spaghetti. If you pull it, it stretches only slightly […], but as you ramp up the force it soon breaks. To put this on a more scientific footing, so that we can compare different materials, we might devise a test in which we apply a force to stretch a bar of material and measure the increase in length. The fractional change in length is the strain; and the applied force divided by the cross-sectional area of the bar is the stress. To check that it is Hookean, we double the force and confirm that the strain has also doubled. To check that it is truly elastic, we remove the force and check that the bar returns to the same length that it started with. […] then we calculate the ratio of the stress to the strain. This ratio is the Young’s modulus of the material, a quantity which measures its stiffness. […] While we are measuring the change in length of the bar, we might also see if there is a change in its width. It is not unreasonable to think that as the bar stretches it also becomes narrower. The Poisson’s ratio of the material is defined as the ratio of the transverse strain to the longitudinal strain (without the minus sign).”
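
As a concrete illustration of those definitions, here is a small Python sketch of the stress/strain bookkeeping for an idealized tension test (the input numbers are invented and chosen to come out roughly steel-like; this is my example, not the book's):

import math

# Idealized tension test on a round bar (invented, roughly steel-like numbers).
force_N = 10_000.0
diameter_m = 0.010
length_m = 0.500
delta_length_m = 0.000318        # measured extension
delta_diameter_m = -1.9e-6       # measured change in diameter (the bar gets narrower)

area_m2 = math.pi * (diameter_m / 2) ** 2
stress_Pa = force_N / area_m2                 # applied force / cross-sectional area (~127 MPa)
axial_strain = delta_length_m / length_m      # fractional change in length (~6.4e-4)
transverse_strain = delta_diameter_m / diameter_m

youngs_modulus_Pa = stress_Pa / axial_strain  # ~2e11 Pa, i.e. ~200 GPa
# The minus sign makes the ratio positive, matching the book's 'without the minus sign' convention.
poissons_ratio = -transverse_strain / axial_strain   # ~0.30

print(f"E = {youngs_modulus_Pa / 1e9:.0f} GPa, Poisson's ratio = {poissons_ratio:.2f}")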

“There was much argument between Cauchy and Lamé and others about whether there are two stiffness moduli or one. […] In fact, there are two stiffness moduli. One describes the resistance of a material to shearing and the other to compression. The shear modulus is the stiffness in distortion, for example in twisting. It captures the resistance of a material to changes of shape, with no accompanying change of volume. The compression modulus (usually called the bulk modulus) expresses the resistance to changes of volume (but not shape). This is what occurs as a cube of material is lowered deep into the sea, and is squeezed on all faces by the water pressure. The Young’s modulus [is] a combination of the more fundamental shear and bulk moduli, since stretching in one direction produces changes in both shape and volume. […] A factor of about 10,000 covers the useful range of Young’s modulus in engineering materials. The stiffness can be traced back to the forces acting between atoms and molecules in the solid state […]. Materials like diamond or tungsten with strong bonds are stiff in the bulk, while polymer materials with weak intermolecular forces have low stiffness.”
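
For an isotropic material the moduli mentioned here are linked by standard textbook relations, with only two of them independent: E = 9KG/(3K + G) and ν = (3K − 2G)/(2(3K + G)). A minimal Python sketch (the relations are standard elasticity theory rather than something spelled out in the book; the input values are merely illustrative):

# Standard isotropic elasticity relations: any two moduli determine the others.
def young_from_bulk_shear(K, G):
    return 9 * K * G / (3 * K + G)

def poisson_from_bulk_shear(K, G):
    return (3 * K - 2 * G) / (2 * (3 * K + G))

# Roughly steel-like illustrative values: bulk modulus 160 GPa, shear modulus 80 GPa.
K, G = 160e9, 80e9
print(young_from_bulk_shear(K, G) / 1e9)   # ~206 GPa
print(poisson_from_bulk_shear(K, G))       # ~0.29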

“In pure compression, the concept of ‘strength’ has no meaning, since the material cannot fail or rupture. But materials can and do fail in tension or in shear. To judge how strong a material is we can go back for example to the simple tension arrangement we used for measuring stiffness, but this time make it into a torture test in which the specimen is put on the rack. […] We find […] that we reach a strain at which the material stops being elastic and is permanently stretched. We have reached the yield point, and beyond this we have damaged the material but it has not failed. After further yielding, the bar may fail by fracture […]. On the other hand, with a bar of cast iron, there comes a point where the bar breaks, noisily and without warning, and without yield. This is a failure by brittle fracture. The stress at which it breaks is the tensile strength of the material. For the ductile material, the stress at which plastic deformation starts is the tensile yield stress. Both are measures of strength. It is in metals that yield and plasticity are of the greatest significance and value. In working components, yield provides a safety margin between small-strain elasticity and catastrophic rupture. […] plastic deformation is [also] exploited in making things from metals like steel and aluminium. […] A useful feature of plastic deformation in metals is that plastic straining raises the yield stress, particularly at lower temperatures.”

“Brittle failure is not only noisy but often scary. Engineers keep well away from it. An elaborate theory of fracture mechanics has been built up to help them avoid it, and there are tough materials to hand which do not easily crack. […] Since small cracks and flaws are present in almost any engineering component […], the trick is not to avoid cracks but to avoid long cracks which exceed [a] critical length. […] In materials which can yield, the tip stress can be relieved by plastic deformation, and this is a potent toughening mechanism in some materials. […] The trick of compressing a material to suppress cracking is a powerful way to toughen materials.”
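
Linear elastic fracture mechanics makes the ‘critical length’ idea concrete: a crack of length a under stress σ has a stress intensity K = σ√(πa), and fast fracture occurs when K reaches the material's fracture toughness K_Ic, so the critical length is roughly (K_Ic/σ)²/π. A small Python sketch with illustrative toughness values of my choosing (geometry factors are ignored, so treat the outputs as order-of-magnitude only):

import math

def critical_crack_length_m(fracture_toughness_MPa_sqrt_m, stress_MPa):
    """Crack length at which K = stress*sqrt(pi*a) reaches K_Ic (geometry factor taken as 1)."""
    return (fracture_toughness_MPa_sqrt_m / stress_MPa) ** 2 / math.pi

# Illustrative fracture toughness values (orders of magnitude only, chosen by me):
for material, k_ic in [("window glass", 0.8), ("cast iron", 15.0), ("structural steel", 100.0)]:
    a_c = critical_crack_length_m(k_ic, stress_MPa=100.0)
    print(f"{material:16s} critical crack ~ {a_c * 1000:.2f} mm at 100 MPa")
# Glass ~0.02 mm, cast iron ~7 mm, steel ~320 mm: this is why glass shatters from
# invisible scratches while structural steel tolerates sizeable flaws.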

“Hardness is a property which materials scientists think of in a particular and practical way. It tells us how well a material resists being damaged or deformed by a sharp object. That is useful information and it can be obtained easily. […] Soft is sometimes the opposite of hard […] But a different kind of soft is squidgy. […] In the soft box, we find many everyday materials […]. Some soft materials such as adhesives and lubricants are of great importance in engineering. For all of them, the model of a stiff crystal lattice provides no guidance. There is usually no crystal. The units are polymer chains, or small droplets of liquids, or small solid particles, with weak forces acting between them, and little structural organization. Structures when they exist are fragile. Soft materials deform easily when forces act on them […]. They sit as a rule somewhere between rigid solids and simple liquids. Their mechanical behaviour is dominated by various kinds of plasticity.”

“In pure metals, the resistivity is extremely low […] and a factor of ten covers all of them. […] the low resistivity (or, put another way, the high conductivity) arises from the existence of a conduction band in the solid which is only partly filled. Electrons in the conduction band are mobile and drift in an applied electric field. This is the electric current. The electrons are subject to some scattering from lattice vibrations which impedes their motion and generates an intrinsic resistance. Scattering becomes more severe as the temperature rises and the amplitude of the lattice vibrations becomes greater, so that the resistivity of metals increases with temperature. Scattering is further increased by microstructural heterogeneities, such as grain boundaries, lattice distortions, and other defects, and by phases of different composition. So alloys have appreciably higher resistivities than their pure parent metals. Adding 5 per cent nickel to iron doubles the resistivity, although the resistivities of the two pure metals are similar. […] Resistivity depends fundamentally on band structure. […] Plastics and rubbers […] are usually insulators. […] Electronically conducting plastics would have many uses, and some materials [e.g. this one] are now known. […] The electrical resistivity of many metals falls to exactly zero as they are cooled to very low temperatures. The critical temperature at which this happens varies, but for pure metallic elements it always lies below 10 K. For a few alloys, it is a little higher. […] Superconducting windings provide stable and powerful magnetic fields for magnetic resonance imaging, and many industrial and scientific uses.”
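
The temperature and impurity contributions described here are often summarized by Matthiessen's rule: the total resistivity is approximately a temperature-dependent lattice-scattering term plus a temperature-independent impurity/defect term. A small Python sketch (the rule is standard solid-state physics, not stated in the book; the coefficients are roughly copper-like and only illustrative):

# Matthiessen's rule: rho_total ~ rho_thermal(T) + rho_impurity.
def resistivity_ohm_m(T_kelvin, rho_room=1.7e-8, alpha_per_K=0.004, rho_impurity=0.0):
    """Linear-in-T lattice term (valid only near room temperature) plus a constant impurity term."""
    rho_thermal = rho_room * (1 + alpha_per_K * (T_kelvin - 293))
    return rho_thermal + rho_impurity

print(resistivity_ohm_m(293))                        # pure metal at room temperature (~1.7e-8 ohm m)
print(resistivity_ohm_m(373))                        # hotter: more lattice vibration, higher resistivity
print(resistivity_ohm_m(293, rho_impurity=1.7e-8))   # alloying adds a constant term and can double the total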

“A permanent magnet requires no power. Its magnetization has its origin in the motion of electrons in atoms and ions in the solid, but only a few materials have the favourable combination of quantum properties to give rise to useful ferromagnetism. […] Ferromagnetism disappears completely above the so-called Curie temperature. […] Below the Curie temperature, ferromagnetic alignment throughout the material can be established by imposing an external polarizing field to create a net magnetization. In this way a practical permanent magnet is made. The ideal permanent magnet has an intense magnetization (a strong field) which remains after the polarizing field is switched off. It can only be demagnetized by applying a strong polarizing field in the opposite direction: the size of this field is the coercivity of the magnet material. For a permanent magnet, it should be as high as possible. […] Permanent magnets are ubiquitous but more or less invisible components of umpteen devices. There are a hundred or so in every home […]. There are also important uses for ‘soft’ magnetic materials, in devices where we want the ferromagnetism to be temporary, not permanent. Soft magnets lose their magnetization after the polarizing field is removed […] They have low coercivity, approaching zero. When used in a transformer, such a soft ferromagnetic material links the input and output coils by magnetic induction. Ideally, the magnetization should reverse during every cycle of the alternating current to minimize energy losses and heating. […] Silicon transformer steels yielded large gains in efficiency in electrical power distribution when they were first introduced in the 1920s, and they remain pre-eminent.”

“At least 50 families of plastics are produced commercially today. […] These materials all consist of linear string molecules, most with simple carbon backbones, a few with carbon-oxygen backbones […] Plastics as a group are valuable because they are lightweight and work well in wet environments, and don’t go rusty. They are mostly unaffected by acids and salts. But they burn, and they don’t much like sunlight as the ultraviolet light can break the polymer backbone. Most commercial plastics are mixed with substances which make it harder for them to catch fire and which filter out the ultraviolet light. Above all, plastics are used because they can be formed and shaped so easily. The string molecule itself is held together by strong chemical bonds and is resilient, but the forces between the molecules are weak. So plastics melt at low temperatures to produce rather viscous liquids […]. And with modest heat and a little pressure, they can be injected into moulds to produce articles of almost any shape”.

“The downward cascade of high purity to adulterated materials in recycling is a kind of entropy effect: unmixing is thermodynamically hard work. But there is an energy-driven problem too. Most materials are thermodynamically unstable (or metastable) in their working environments and tend to revert to the substances from which they were made. This is well-known in the case of metals, and is the usual meaning of corrosion. The metals are more stable when combined with oxygen than uncombined. […] Broadly speaking, ceramic materials are more stable thermodynamically, since they already contain much oxygen in chemical combination. Even so, ceramics used in the open usually fall victim to some environmental predator. Often it is water that causes damage. Water steals sodium and potassium from glass surfaces by slow leaching. The surface shrinks and cracks, so the glass loses its transparency. […] Stones and bricks may succumb to the stresses of repeated freezing when wet; limestones decay also by the chemical action of sulfur and nitrogen gases in polluted rainwater. Even buried archaeological pots slowly react with water in a remorseless process similar to that of rock weathering.”

Ashby plot.
Alan Arnold Griffith.
Creep (deformation).
Amontons’ laws of friction.
Viscoelasticity.
Internal friction.
Surfactant.
Dispersant.
Rheology.
Liquid helium.
Conductor. Insulator. Semiconductor. P-type -ll-. N-type -ll-.
Hall–Héroult process.
Cuprate.
Magnetostriction.
Snell’s law.
Chromatic aberration.
Dispersion (optics).
Dye.
Density functional theory.
Glass.
Pilkington float process.
Superalloy.
Ziegler–Natta catalyst.
Transistor.
Integrated circuit.
Negative-index metamaterial.
Auxetics.
Titanium dioxide.
Hyperfine structure (/-interactions).
Diamond anvil cell.
Synthetic rubber.
Simon–Ehrlich wager.
Sankey diagram.

November 16, 2017 Posted by | Books, Chemistry, Engineering, Physics | Leave a comment

A few diabetes papers of interest

i. Thirty Years of Research on the Dawn Phenomenon: Lessons to Optimize Blood Glucose Control in Diabetes.

“More than 30 years ago in Diabetes Care, Schmidt et al. (1) defined “dawn phenomenon,” the night-to-morning elevation of blood glucose (BG) before and, to a larger extent, after breakfast in subjects with type 1 diabetes (T1D). Shortly after, a similar observation was made in type 2 diabetes (T2D) (2), and the physiology of glucose homeostasis at night was studied in normal, nondiabetic subjects (3–5). Ever since the first description, the dawn phenomenon has been studied extensively with at least 187 articles published as of today (6). […] what have we learned from the last 30 years of research on the dawn phenomenon? What is the appropriate definition, the identified mechanism(s), the importance (if any), and the treatment of the dawn phenomenon in T1D and T2D?”

“Physiology of glucose homeostasis in normal, nondiabetic subjects indicates that BG and plasma insulin concentrations remain remarkably flat and constant overnight, with a modest, transient increase in insulin secretion just before dawn (3,4) to restrain hepatic glucose production (4) and prevent hyperglycemia. Thus, normal subjects do not exhibit the dawn phenomenon sensu strictiori because they secrete insulin to prevent it.

In T1D, the magnitude of BG elevation at dawn first reported was impressive and largely secondary to the decrease of plasma insulin concentration overnight (1), commonly observed with evening administration of NPH or lente insulins (8) (Fig. 1). Even in early studies with intravenous insulin by the “artificial pancreas” (Biostator) (2), plasma insulin decreased overnight because of progressive inactivation of insulin in the pump (9). This artifact exaggerated the dawn phenomenon, now defined as need for insulin to limit fasting hyperglycemia (2). When the overnight waning of insulin was prevented by continuous subcutaneous insulin infusion (CSII) […] or by the long-acting insulin analogs (LA-IAs) (8), it was possible to quantify the real magnitude of the dawn phenomenon — 15–25 mg/dL BG elevation from nocturnal nadir to before breakfast […]. Nocturnal spikes of growth hormone secretion are the most likely mechanism of the dawn phenomenon in T1D (13,14). The observation from early pioneering studies in T1D (10–12) that insulin sensitivity is higher after midnight until 3 a.m. as compared to the period 4–8 a.m., soon translated into use of more physiological replacement of basal insulin […] to reduce risk of nocturnal hypoglycemia while targeting fasting near-normoglycemia”.

“In T2D, identification of diurnal changes in BG goes back decades, but only quite recently has fasting hyperglycemia been attributed to a transient increase in hepatic glucose production (both glycogenolysis and gluconeogenesis) at dawn in the absence of compensatory insulin secretion (15–17). Monnier et al. (7) report on the overnight (interstitial) glucose concentration (IG), as measured by continuous ambulatory IG monitoring, in three groups of 248 subjects with T2D […] Importantly, the dawn phenomenon had an impact on mean daily IG and A1C (mean increase of 0.39% [4.3 mmol/mol]), which was independent of treatment. […] Two messages from the data of Monnier et al. (7) are important. First, the dawn phenomenon is confirmed as a frequent event across the heterogeneous population of T2D independent of (oral) treatment and studied in everyday life conditions, not only in the setting of specialized clinical research units. Second, the article reaffirms that the primary target of treatment in T2D is to reestablish near-normoglycemia before and after breakfast (i.e., to treat the dawn phenomenon) to lower mean daily BG and A1C (8). […] the dawn phenomenon induces hyperglycemia not only before, but, to a larger extent, after breakfast as well (7,18). Over the years, fasting (and postbreakfast) hyperglycemia in T2D worsens as a result of progressively impaired pancreatic B-cell function on the background of continued insulin resistance primarily at dawn (8,15–18) and independently of age (19). Because it is an early metabolic abnormality leading over time to the vicious circle of “hyperglycemia begets hyperglycemia” by glucotoxicity and lipotoxicity, the dawn phenomenon in T2D should be treated early and appropriately before A1C continues to increase (20).”
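
The bracketed mmol/mol figures are IFCC equivalents of the NGSP percentages; the standard master equation is IFCC (mmol/mol) = 10.93 × NGSP (%) − 23.50, and for a difference in A1C the constant term drops out, which reproduces the 0.39% ≈ 4.3 mmol/mol increment quoted above. A minimal Python sketch (the conversion formula is the standard one, not something taken from this paper):

# NGSP (%) to IFCC (mmol/mol) A1C conversion via the standard master equation.
def a1c_percent_to_mmol_mol(a1c_percent):
    return 10.93 * a1c_percent - 23.50

print(round(a1c_percent_to_mmol_mol(7.0), 1))   # 53.0 mmol/mol, the familiar 7.0% target
# A *difference* in A1C converts without the constant term:
print(round(10.93 * 0.39, 1))                   # ~4.3 mmol/mol, matching the quoted increment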

“Oral medications do not adequately control the dawn phenomenon, even when given in combination (7,18). […] The evening replacement of basal insulin, which abolishes the dawn phenomenon by restraining hepatic glucose production and lipolysis (21), is an effective treatment as it mimics the physiology of glucose homeostasis in normal, nondiabetic subjects (4). Early use of basal insulin in T2D is an add-on option treatment after failure of metformin to control A1C <7.0% (20). However, […] it would be wise to consider initiation of basal insulin […] before — not after — A1C has increased well beyond 7.0%, as usually it is done in practice currently.”

ii. Peripheral Neuropathy in Adolescents and Young Adults With Type 1 and Type 2 Diabetes From the SEARCH for Diabetes in Youth Follow-up Cohort.

“Diabetic peripheral neuropathy (DPN) is among the most distressing of all the chronic complications of diabetes and is a cause of significant disability and poor quality of life (4). Depending on the patient population and diagnostic criteria, the prevalence of DPN among adults with diabetes ranges from 30 to 70% (5–7). However, there are insufficient data on the prevalence and predictors of DPN among the pediatric population. Furthermore, early detection and good glycemic control have been proven to prevent or delay adverse outcomes associated with DPN (5,8,9). Near-normal control of blood glucose beginning as soon as possible after the onset of diabetes may delay the development of clinically significant nerve impairment (8,9). […] The American Diabetes Association (ADA) recommends screening for DPN in children and adolescents with type 2 diabetes at diagnosis and 5 years after diagnosis for those with type 1 diabetes, followed by annual evaluations thereafter, using simple clinical tests (10). Since subclinical signs of DPN may precede development of frank neuropathic symptoms, systematic, preemptive screening is required in order to identify DPN in its earliest stages.

There are various measures that can be used for the assessment of DPN. The Michigan Neuropathy Screening Instrument (MNSI) is a simple, sensitive, and specific tool for the screening of DPN (11). It was validated in large independent cohorts (12,13) and has been widely used in clinical trials and longitudinal cohort studies […] The aim of this pilot study was to provide preliminary estimates of the prevalence of and factors associated with DPN among children and adolescents with type 1 and type 2 diabetes.”

“A total of 399 youth (329 with type 1 and 70 with type 2 diabetes) participated in the pilot study. Youth with type 1 diabetes were younger (mean age 15.7 ± 4.3 years) and had a shorter duration of diabetes (mean duration 6.2 ± 0.9 years) compared with youth with type 2 diabetes (mean age 21.6 ± 4.1 years and mean duration 7.6 ± 1.8 years). Participants with type 2 diabetes had a higher BMI z score and waist circumference, were more likely to be smokers, and had higher blood pressure and lipid levels than youth with type 1 diabetes (all P < 0.001). A1C, however, did not significantly differ between the two groups (mean A1C 8.8 ± 1.8% [73 ± 2 mmol/mol] for type 1 diabetes and 8.5 ± 2.9% [72 ± 3 mmol/mol] for type 2 diabetes; P = 0.5) but was higher than that recommended by the ADA for this age-group (A1C ≤7.5%) (10). The prevalence of DPN (defined as the MNSIE score >2) was 8.2% among youth with type 1 diabetes and 25.7% among those with type 2 diabetes. […] Youth with DPN were older and had a longer duration of diabetes, greater central obesity (increased waist circumference), higher blood pressure, an atherogenic lipid profile (low HDL cholesterol and marginally high triglycerides), and microalbuminuria. A1C […] was not significantly different between those with and without DPN (9.0% ± 2.0 […] vs. 8.8% ± 2.1 […], P = 0.58). Although nearly 37% of youth with type 2 diabetes came from lower-income families with annual income <25,000 USD per annum (as opposed to 11% for type 1 diabetes), socioeconomic status was not significantly associated with DPN (P = 0.77).”

“In the unadjusted logistic regression model, the odds of having DPN was nearly four times higher among those with type 2 diabetes compared with youth with type 1 diabetes (odds ratio [OR] 3.8 [95% CI 1.9–7.5], P < 0.0001). This association was attenuated, but remained significant, after adjustment for age and sex (OR 2.3 [95% CI 1.1–5.0], P = 0.03). However, this association was no longer significant (OR 2.1 [95% CI 0.3–15.9], P = 0.47) when additional covariates […] were added to the model […] The loss of the association between diabetes type and DPN with addition of covariates in the fully adjusted model could be due to power loss, given the small number of youth with DPN in the sample, or indicative of stronger associations between these covariates and DPN such that conditioning on them eliminates the observed association between DPN and diabetes type.”
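
The unadjusted OR of 3.8 can be reproduced from the prevalences reported earlier (25.7% of the 70 youth with type 2 diabetes vs 8.2% of the 329 with type 1). A small Python sketch of the 2×2 calculation with a Woolf-type confidence interval (my own arithmetic; the cell counts are rounded back from the percentages, so the result is only approximate):

import math

# Reconstructing the unadjusted OR for DPN (type 2 vs type 1 diabetes).
t2_dpn, t2_total = round(0.257 * 70), 70      # ~18 of 70
t1_dpn, t1_total = round(0.082 * 329), 329    # ~27 of 329

odds_ratio = (t2_dpn / (t2_total - t2_dpn)) / (t1_dpn / (t1_total - t1_dpn))
se_log_or = math.sqrt(1 / t2_dpn + 1 / (t2_total - t2_dpn) + 1 / t1_dpn + 1 / (t1_total - t1_dpn))
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR ~ {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")   # ~3.9 (2.0-7.5) vs the reported 3.8 (1.9-7.5)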

“The prevalence of DPN among type 1 diabetes youth in our pilot study is lower than that reported by Eppens et al. (15) among 1,433 Australian adolescents with type 1 diabetes assessed by thermal threshold testing and VPT (prevalence of DPN 27%; median age and duration 15.7 and 6.8 years, respectively). A much higher prevalence was also reported among Danish (62.5%) and Brazilian (46%) cohorts of type 1 diabetes youth (16,17) despite a younger age (mean age among Danish children 13.7 years and Brazilian cohort 12.9 years). The prevalence of DPN among youth with type 2 diabetes (26%) found in our study is comparable to that reported among the Australian cohort (21%) (15). The wide ranges in the prevalence estimates of DPN among the young cannot solely be attributed to the inherent racial/ethnic differences in this population but could potentially be due to the differing criteria and diagnostic tests used to define and characterize DPN.”

“In our study, the duration of diabetes was significantly longer among those with DPN, but A1C values did not differ significantly between the two groups, suggesting that a longer duration with its sustained impact on peripheral nerves is an important determinant of DPN. […] Cho et al. (22) reported an increase in the prevalence of DPN from 14 to 28% over 17 years among 819 Australian adolescents with type 1 diabetes aged 11–17 years at baseline, despite improvements in care and minor improvements in A1C (8.2–8.7%). The prospective Danish Study Group of Diabetes in Childhood also found no association between DPN (assessed by VPT) and glycemic control (23).”

“In conclusion, our pilot study found evidence that the prevalence of DPN in adolescents with type 2 diabetes approaches rates reported in adults with diabetes. Several CVD risk factors such as central obesity, elevated blood pressure, dyslipidemia, and microalbuminuria, previously identified as predictors of DPN among adults with diabetes, emerged as independent predictors of DPN in this young cohort and likely accounted for the increased prevalence of DPN in youth with type 2 diabetes.

iii. Disturbed Eating Behavior and Omission of Insulin in Adolescents Receiving Intensified Insulin Treatment.

“Type 1 diabetes appears to be a risk factor for the development of disturbed eating behavior (DEB) (1,2). Estimates of the prevalence of DEB among individuals with type 1 diabetes range from 10 to 49% (3,4), depending on methodological issues such as the definition and measurement of DEB. Some studies only report the prevalence of full-threshold diagnoses of anorexia nervosa, bulimia nervosa, and eating disorders not otherwise specified, whereas others also include subclinical eating disorders (1). […] Although different terminology complicates the interpretation of prevalence rates across studies, the findings are sufficiently robust to indicate that there is a higher prevalence of DEB in type 1 diabetes compared with healthy controls. A meta-analysis reported a three-fold increase of bulimia nervosa, a two-fold increase of eating disorders not otherwise specified, and a two-fold increase of subclinical eating disorders in patients with type 1 diabetes compared with controls (2). No elevated rates of anorexia nervosa were found.”

“When DEB and type 1 diabetes co-occur, rates of morbidity and mortality are dramatically increased. A Danish study of comorbid type 1 diabetes and anorexia nervosa showed that the crude mortality rate at 10-year follow-up was 2.5% for type 1 diabetes and 6.5% for anorexia nervosa, but the rate increased to 34.8% when occurring together (the standardized mortality rates were 4.06, 8.86, and 14.5, respectively) (9). The presence of DEB in general also can severely impair metabolic control and advance the onset of long-term diabetes complications (4). Insulin reduction or omission is an efficient weight loss strategy uniquely available to patients with type 1 diabetes and has been reported in up to 37% of patients (10–12). Insulin restriction is associated with poorer metabolic control, and previous research has found that self-reported insulin restriction at baseline leads to a three-fold increased risk of mortality at 11-year follow-up (10).

Few population-based studies have specifically investigated the prevalence of and relationship between DEBs and insulin restriction. The generalizability of existing research remains limited by relatively small samples and a lack of males. Further, many studies have relied on generic measures of DEBs, which may not be appropriate for use in individuals with type 1 diabetes. The Diabetes Eating Problem Survey–Revised (DEPS-R) is a newly developed and diabetes-specific screening tool for DEBs. A recent study demonstrated satisfactory psychometric properties of the Norwegian version of the DEPS-R among children and adolescents with type 1 diabetes 11–19 years of age (13). […] This study aimed to assess young patients with type 1 diabetes to assess the prevalence of DEBs and frequency of insulin omission or restriction, to compare the prevalence of DEB between males and females across different categories of weight and age, and to compare the clinical features of participants with and without DEBs and participants who restrict and do not restrict insulin. […] The final sample consisted of 770 […] children and adolescents with type 1 diabetes 11–19 years of age. There were 380 (49.4%) males and 390 (50.6%) females.”

“27.7% of female and 9% of male children and adolescents with type 1 diabetes receiving intensified insulin treatment scored above the predetermined cutoff on the DEPS-R, suggesting a level of disturbed eating that warrants further attention by treatment providers. […] Significant differences emerged across age and weight categories, and notable sex-specific trends were observed. […] For the youngest (11–13 years) and underweight (BMI <18.5) categories, the proportion of DEB was <10% for both sexes […]. Among females, the prevalence of DEB increased dramatically with age to ∼33% among 14 to 16 year olds and to nearly 50% among 17 to 19 year olds. Among males, the rate remained low at 7% for 14 to 16 year olds and doubled to ∼15% for 17 to 19 year olds.

A similar sex-specific pattern was detected across weight categories. Among females, the prevalence of DEB increased steadily and significantly from 9% among the underweight category to 23% for normal weight, 42% for overweight, and 53% for the obese categories, respectively. Among males, ∼6–7% of both the underweight and normal weight groups reported DEB, with rates increasing to ∼15% for both the overweight and obese groups. […] When separated by sex, females scoring above the cutoff on the DEPS-R had significantly higher HbA1c (9.2% [SD, 1.9]) than females scoring below the cutoff (8.4% [SD, 1.3]; P < 0.001). The same trend was observed among males (9.2% [SD, 1.6] vs. 8.4% [SD, 1.3]; P < 0.01). […] A total of 31.6% of the participants reported using less insulin and 6.9% reported skipping their insulin dose entirely at least occasionally after overeating. When assessing the sexes separately, we found that 36.8% of females reported restricting and 26.2% reported skipping insulin because of overeating. The rates for males were 9.4 and 4.5%, respectively.”

“The finding that DEBs are common in young patients with type 1 diabetes is in line with previous literature (2). However, because of different assessment methods and different definitions of DEB, direct comparison with other studies is complicated, especially because this is the first study to have used the DEPS-R in a prevalence study. However, two studies using the original DEPS have reported similar results, with 37.9% (23) and 53.8% (24) of the participants reporting engaging in unhealthy weight control practices. In our study, females scored significantly higher than males, which is not surprising given previous studies demonstrating an increased risk of development of DEB in nondiabetic females compared with males. In addition, the prevalence rates increased considerably by increasing age and weight. A relationship between eating pathology and older age and higher BMI also has been demonstrated in previous research conducted in both diabetic and nondiabetic adolescent populations.”

“Consistent with existent literature (10–12,27), we found a high frequency of insulin restriction. For example, Bryden et al. (11) assessed 113 males and females (aged 17–25 years) with type 1 diabetes and found that a total of 37% of the females (no males) reported a history of insulin omission or reduction for weight control purposes. Peveler et al. (12) investigated 87 females with type 1 diabetes aged 11–25 years, and 36% reported intentionally reducing or omitting their insulin doses to control their weight. Finally, Goebel-Fabbri et al. (10) examined 234 females 13–60 years of age and found that 30% reported insulin restriction. Similarly, 36.8% of the participants in our study reported reducing their insulin doses occasionally or more often after overeating.”

iv. Clinical Inertia in People With Type 2 Diabetes. A retrospective cohort study of more than 80,000 people.

“Despite good-quality evidence of tight glycemic control, particularly early in the disease trajectory (3), people with type 2 diabetes often do not reach recommended glycemic targets. Baseline characteristics in observational studies indicate that both insulin-experienced and insulin-naïve people may have mean HbA1c above the recommended target levels, reflecting the existence of patients with poor glycemic control in routine clinical care (8–10). […] U.K. data, based on an analysis reflecting previous NICE guidelines, show that it takes a mean of 7.7 years to initiate insulin after the start of the last OAD [oral antidiabetes drugs] (in people taking two or more OADs) and that mean HbA1c is ~10% (86 mmol/mol) at the time of insulin initiation (12). […] This failure to intensify treatment in a timely manner has been termed clinical inertia; however, data are lacking on clinical inertia in the diabetes-management pathway in a real-world primary care setting, and studies that have been carried out are, relatively speaking, small in scale (13,14). This retrospective cohort analysis investigates time to intensification of treatment in people with type 2 diabetes treated with OADs and the associated levels of glycemic control, and compares these findings with recommended treatment guidelines for diabetes.”

“We used the Clinical Practice Research Datalink (CPRD) database. This is the world’s largest computerized database, representing the primary care longitudinal records of >13 million patients from across the U.K. The CPRD is representative of the U.K. general population, with age and sex distributions comparable with those reported by the U.K. National Population Census (15). All information collected in the CPRD has been subjected to validation studies and been proven to contain consistent and high-quality data (16).”

“50,476 people taking one OAD, 25,600 people taking two OADs, and 5,677 people taking three OADs were analyzed. Mean baseline HbA1c (the most recent measurement within 6 months before starting OADs) was 8.4% (68 mmol/mol), 8.8% (73 mmol/mol), and 9.0% (75 mmol/mol) in people taking one, two, or three OADs, respectively. […] In people with HbA1c ≥7.0% (≥53 mmol/mol) taking one OAD, median time to intensification with an additional OAD was 2.9 years, whereas median time to intensification with insulin was >7.2 years. Median time to insulin intensification in people with HbA1c ≥7.0% (≥53 mmol/mol) taking two or three OADs was >7.2 and >7.1 years, respectively. In people with HbA1c ≥7.5% or ≥8.0% (≥58 or ≥64 mmol/mol) taking one OAD, median time to intensification with an additional OAD was 1.9 or 1.6 years, respectively; median time to intensification with insulin was >7.1 or >6.9 years, respectively. In those people with HbA1c ≥7.5% or ≥8.0% (≥58 or ≥64 mmol/mol) and taking two OADs, median time to insulin was >7.2 and >6.9 years, respectively; and in those people taking three OADs, median time to insulin intensification was >6.1 and >6.0 years, respectively.”

“By end of follow-up, treatment of 17.5% of people with HbA1c ≥7.0% (≥53 mmol/mol) taking three OADs was intensified with insulin, treatment of 20.6% of people with HbA1c ≥7.5% (≥58 mmol/mol) taking three OADs was intensified with insulin, and treatment of 22.0% of people with HbA1c ≥8.0% (≥64 mmol/mol) taking three OADs was intensified with insulin. There were minimal differences in the proportion of patients intensified between the groups. […] In people taking one OAD, the probability of an additional OAD or initiation of insulin was 23.9% after 1 year, increasing to 48.7% by end of follow-up; in people taking two OADs, the probability of an additional OAD or initiation of insulin was 11.4% after 1 year, increasing to 30.1% after 2 years; and in people taking three OADs, the probability of an additional OAD or initiation of insulin was 5.7% after 1 year, increasing to 12.0% by the end of follow-up […] Mean ± SD HbA1c in patients taking one OAD was 8.7 ± 1.6% in those intensified with an additional OAD (n = 14,605), 9.4 ± 2.3% (n = 1,228) in those intensified with insulin, and 8.7 ± 1.7% (n = 15,833) in those intensified with additional OAD or insulin. Mean HbA1c in patients taking two OADs was 8.8 ± 1.5% (n = 3,744), 9.8 ± 1.9% (n = 1,631), and 9.1 ± 1.7% (n = 5,405), respectively. In patients taking three OADs, mean HbA1c at intensification with insulin was 9.7 ± 1.6% (n = 514).”

“This analysis shows that there is a delay in intensifying treatment in people with type 2 diabetes with suboptimal glycemic control, with patients remaining in poor glycemic control for >7 years before intensification of treatment with insulin. In patients taking one, two, or three OADs, median time from initiation of treatment to intensification with an additional OAD for any patient exceeded the maximum follow-up time of 7.2–7.3 years, dependent on subcohort. […] Despite having HbA1c levels for which diabetes guidelines recommend treatment intensification, few people appeared to undergo intensification (4,6,7). The highest proportion of people with clinical inertia was for insulin initiation in people taking three OADs. Consequently, these people experienced prolonged periods in poor glycemic control, which is detrimental to long-term outcomes.”

“Previous studies in U.K. general practice have shown similar findings. A retrospective study involving 14,824 people with type 2 diabetes from 154 general practice centers contributing to the Doctors Independent Network Database (DIN-LINK) between 1995 and 2005 observed that median time to insulin initiation for people prescribed multiple OADs was 7.7 years (95% CI 7.4–8.5 years); mean HbA1c before insulin was 9.85% (84 mmol/mol), which decreased by 1.34% (95% CI 1.24–1.44%) after therapy (12). A longitudinal observational study from health maintenance organization data in 3,891 patients with type 2 diabetes in the U.S. observed that, despite continued HbA1c levels >7% (>53 mmol/mol), people treated with sulfonylurea and metformin did not start insulin for almost 3 years (21). Another retrospective cohort study, using data from the Health Improvement Network database of 2,501 people with type 2 diabetes, estimated that only 25% of people started insulin within 1.8 years of multiple OAD failure, if followed for 5 years, and that 50% of people delayed starting insulin for almost 5 years after failure of glycemic control with multiple OADs (22). The U.K. cohort of a recent, 26-week observational study examining insulin initiation in clinical practice reported a large proportion of insulin-naïve people with HbA1c >9% (>75 mmol/mol) at baseline (64%); the mean HbA1c in the global cohort was 8.9% (74 mmol/mol) (10). Consequently, our analysis supports previous findings concerning clinical inertia in both U.K. and U.S. general practice and reflects little improvement in recent years, despite updated treatment guidelines recommending tight glycemic control.”

v. Small- and Large-Fiber Neuropathy After 40 Years of Type 1 Diabetes. Associations with glycemic control and advanced protein glycation: the Oslo Study.

“How hyperglycemia may cause damage to the nervous system is not fully understood. One consequence of hyperglycemia is the generation of advanced glycation end products (AGEs) that can form nonenzymatically between glucose, lipids, and amino groups. It is believed that AGEs are involved in the pathophysiology of neuropathy. AGEs tend to affect cellular function by altering protein function (11). One of the AGEs, N-ε-(carboxymethyl)lysine (CML), has been found in excessive amounts in the human diabetic peripheral nerve (12). High levels of methylglyoxal in serum have been found to be associated with painful peripheral neuropathy (13). In recent years, differentiation of affected nerves is possible by virtue of specific function tests to distinguish which fibers are damaged in diabetic polyneuropathy: large myelinated (Aα, Aβ), small thinly myelinated (Aδ), or small nonmyelinated (C) fibers. […] Our aims were to evaluate large- and small-nerve fiber function in long-term type 1 diabetes and to search for longitudinal associations with HbA1c and the AGEs CML and methylglyoxal-derived hydroimidazolone.”

“27 persons with type 1 diabetes of 40 ± 3 years duration underwent large-nerve fiber examinations, with nerve conduction studies at baseline and years 8, 17, and 27. Small-fiber functions were assessed by quantitative sensory thresholds (QST) and intraepidermal nerve fiber density (IENFD) at year 27. HbA1c was measured prospectively through 27 years. […] Fourteen patients (52%) reported sensory symptoms. Nine patients reported symptoms of a sensory neuropathy (reduced sensibility in feet or impaired balance), while three of these patients described pain. Five patients had symptoms compatible with carpal tunnel syndrome (pain or paresthesias within the innervation territory of the median nerve […]. An additional two had no symptoms but abnormal neurological tests with absent tendon reflexes and reduced sensibility. A total of 16 (59%) of the patients had symptoms or signs of neuropathy. […] No patient with symptoms of neuropathy had normal neurophysiological findings. […] Abnormal autonomic testing was observed in 7 (26%) of the patients and occurred together with neurophysiological signs of peripheral neuropathy. […] Twenty-two (81%) had small-fiber dysfunction by QST. Heat pain thresholds in the foot were associated with hydroimidazolone and HbA1c. IENFD was abnormal in 19 (70%) and significantly lower in diabetic patients than in age-matched control subjects (4.3 ± 2.3 vs. 11.2 ± 3.5 mm, P < 0.001). IENFD correlated negatively with HbA1c over 27 years (r = −0.4, P = 0.04) and CML (r = −0.5, P = 0.01). After adjustment for age, height, and BMI in a multiple linear regression model, CML was still independently associated with IENFD.”

“Our study shows that small-fiber dysfunction is more prevalent than large-fiber dysfunction in diabetic neuropathy after long duration of type 1 diabetes. Although large-fiber abnormalities were less common than small-fiber abnormalities, almost 60% of the participants had their large nerves affected after 40 years with diabetes. Long-term blood glucose estimated by HbA1c measured prospectively through 27 years and AGEs predict large- and small-nerve fiber function.”

vi. Subarachnoid Hemorrhage in Type 1 Diabetes. A prospective cohort study of 4,083 patients with diabetes.

“Subarachnoid hemorrhage (SAH) is a life-threatening cerebrovascular event, which is usually caused by a rupture of a cerebrovascular aneurysm. These aneurysms are mostly found in relatively large-caliber (≥1 mm) vessels and can often be considered as macrovascular lesions. The overall incidence of SAH has been reported to be 10.3 per 100,000 person-years (1), even though the variation in incidence between countries is substantial (1). Notably, the population-based incidence of SAH is 35 per 100,000 person-years in the adult (≥25 years of age) Finnish population (2). The incidence of nonaneurysmal SAH is globally unknown, but it is commonly believed that 5–15% of all SAHs are of nonaneurysmal origin. Prospective, long-term, population-based SAH risk factor studies suggest that smoking (2–4), high blood pressure (2–4), age (2,3), and female sex (2,4) are the most important risk factors for SAH, whereas diabetes (both types 1 and 2) does not appear to be associated with an increased risk of SAH (2,3).

An increased risk of cardiovascular disease is well recognized in people with diabetes. There are, however, very few studies on the risk of cerebrovascular disease in type 1 diabetes since most studies have focused on type 2 diabetes alone or together with type 1 diabetes. Cerebrovascular mortality in the 20–39-year age-group of people with type 1 diabetes is increased five- to sevenfold in comparison with the general population but accounts only for 15% of all cardiovascular deaths (5). Of the cerebrovascular deaths in patients with type 1 diabetes, 23% are due to hemorrhagic strokes (5). However, the incidence of SAH in type 1 diabetes is unknown. […] In this prospective cohort study of 4,083 patients with type 1 diabetes, we aimed to determine the incidence and characteristics of SAH.”

“52% [of participants] were men, the mean age was 37.4 ± 11.8 years, and the duration of diabetes was 21.6 ± 12.1 years at enrollment. The FinnDiane Study is a nationwide multicenter cohort study of genetic, clinical, and environmental risk factors for microvascular and macrovascular complications in type 1 diabetes. […] all type 1 diabetic patients in the FinnDiane database with follow-up data and without a history of stroke at baseline were included. […] Fifteen patients were confirmed to have an SAH, and thus the crude incidence of SAH was 40.9 (95% CI 22.9–67.4) per 100,000 person-years. Ten out of these 15 SAHs were nonaneurysmal SAHs […] The crude incidence of nonaneurysmal SAH was 27.3 (13.1–50.1) per 100,000 person-years. None of the 10 nonaneurysmal SAHs were fatal. […] Only 3 out of 10 patients did not have verified diabetic microvascular or macrovascular complications prior to the nonaneurysmal SAH event. […] Four patients with type 1 diabetes had a fatal SAH, and all these patients died within 24 h after SAH.”

“The presented study results suggest that the incidence of nonaneurysmal SAH is high among patients with type 1 diabetes. […] It is of note that smoking type 1 diabetic patients had a significantly increased risk of nonaneurysmal and all-cause SAHs. Smoking also increases the risk of microvascular complications in insulin-treated diabetic patients, and these patients more often have retinal and renal microangiopathy than never-smokers (8). […] Given the high incidence of nonaneurysmal SAH in patients with type 1 diabetes and microvascular changes (i.e., diabetic retinopathy and nephropathy), the results support the hypothesis that nonaneurysmal SAH is a microvascular rather than macrovascular subtype of stroke.”

“Only one patient with type 1 diabetes had a confirmed aneurysmal SAH. Four other patients died suddenly due to an SAH. If these four patients with type 1 diabetes and a fatal SAH had an aneurysmal SAH, which, taking into account the autopsy reports and imaging findings, is very likely, aneurysmal SAH may be an exceptionally deadly event in type 1 diabetes. Population-based evidence suggests that up to 45% of people die during the first 30 days after SAH, and 18% die at emergency rooms or outside hospitals (9). […] Contrary to aneurysmal SAH, nonaneurysmal SAH is virtually always a nonfatal event (10–14). This also supports the view that nonaneurysmal SAH is a disease of small intracranial vessels, i.e., a microvascular disease. Diabetic retinopathy, a chronic microvascular complication, has been associated with an increased risk of stroke in patients with diabetes (15,16). Embryonically, the retina is an outgrowth of the brain and is similar in its microvascular properties to the brain (17). Thus, it has been suggested that assessments of the retinal vasculature could be used to determine the risk of cerebrovascular diseases, such as stroke […] Most interestingly, the incidence of nonaneurysmal SAH was at least two times higher than the incidence of aneurysmal SAH in type 1 diabetic patients. In comparison, the incidence of nonaneurysmal SAH is >10 times lower than the incidence of aneurysmal SAH in the general adult population (21).”

vii. HbA1c and the Risks for All-Cause and Cardiovascular Mortality in the General Japanese Population.

Keep in mind when looking at these data that they are type 2 diabetes data. Type 1 diabetes is very rare in Japan and the rest of East Asia.

“The risk for cardiovascular death was evaluated in a large cohort of participants selected randomly from the overall Japanese population. A total of 7,120 participants (2,962 men and 4,158 women; mean age 52.3 years) free of previous CVD were followed for 15 years. Adjusted hazard ratios (HRs) and 95% CIs among categories of HbA1c (<5.0%, 5.0–5.4%, 5.5–5.9%, 6.0–6.4%, and ≥6.5%) for participants without treatment for diabetes and HRs for participants with diabetes were calculated using a Cox proportional hazards model.

RESULTS During the study, there were 1,104 deaths, including 304 from CVD, 61 from coronary heart disease, and 127 from stroke (78 from cerebral infarction, 25 from cerebral hemorrhage, and 24 from unclassified stroke). Relations to HbA1c with all-cause mortality and CVD death were graded and continuous, and multivariate-adjusted HRs for CVD death in participants with HbA1c 6.0–6.4% and ≥6.5% were 2.18 (95% CI 1.22–3.87) and 2.75 (1.43–5.28), respectively, compared with participants with HbA1c <5.0%. Similar associations were observed between HbA1c and death from coronary heart disease and death from cerebral infarction.

CONCLUSIONS High HbA1c levels were associated with increased risk for all-cause mortality and death from CVD, coronary heart disease, and cerebral infarction in general East Asian populations, as in Western populations.”

November 15, 2017 Posted by | Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology, Studies

Materials (I)…

“Useful matter is a good definition of materials. […] Materials are materials because inventive people find ingenious things to do with them. Or just because people use them. […] Materials science […] explains how materials are made and how they behave as we use them.”

I recently read this book, which I liked. Below I have added some quotes from the first half of the book, with some added hopefully helpful links, as well as a collection of links at the bottom of the post to other topics covered.

“We understand all materials by knowing about composition and microstructure. Despite their extraordinary minuteness, the atoms are the fundamental units, and they are real, with precise attributes, not least size. Solid materials tend towards crystallinity (for the good thermodynamic reason that it is the arrangement of lowest energy), and they usually achieve it, though often in granular, polycrystalline forms. Processing conditions greatly influence microstructures which may be mobile and dynamic, particularly at high temperatures. […] The idea that we can understand materials by looking at their internal structure in finer and finer detail goes back to the beginnings of microscopy […]. This microstructural view is more than just an important idea, it is the explanatory framework at the core of materials science. Many other concepts and theories exist in materials science, but this is the framework. It says that materials are intricately constructed on many length-scales, and if we don’t understand the internal structure we shall struggle to explain or to predict material behaviour.”

“Oxygen is the most abundant element in the earth’s crust and silicon the second. In nature, silicon occurs always in chemical combination with oxygen, the two forming the strong Si–O chemical bond. The simplest combination, involving no other elements, is silica; and most grains of sand are crystals of silica in the form known as quartz. […] The quartz crystal comes in right- and left-handed forms. Nothing like this happens in metals but arises frequently when materials are built from molecules and chemical bonds. The crystal structure of quartz has to incorporate two different atoms, silicon and oxygen, each in a repeating pattern and in the precise ratio 1:2. There is also the severe constraint imposed by the Si–O chemical bonds which require that each Si atom has four O neighbours arranged around it at the corners of a tetrahedron, every O bonded to two Si atoms. The crystal structure which quartz adopts (which of all possibilities is the one of lowest energy) is made up of triangular and hexagonal units. But within this there are buried helixes of Si and O atoms, and a helix must be either right- or left-handed. Once a quartz crystal starts to grow as right- or left-handed, its structure templates all the other helices with the same handedness. Equal numbers of right- and left-handed crystals occur in nature, but each is unambiguously one or the other.”

“In the living tree, and in the harvested wood that we use as a material, there is a hierarchy of structural levels, climbing all the way from the molecular to the scale of branch and trunk. The stiff cellulose chains are bundled into fibrils, which are themselves bonded by other organic molecules to build the walls of cells; which in turn form channels for the transport of water and nutrients, the whole having the necessary mechanical properties to support its weight and to resist the loads of wind and rain. In the living tree, the structure allows also for growth and repair. There are many things to be learned from biological materials, but the most universal is that biology builds its materials at many structural levels, and rarely makes a distinction between the material and the organism. Being able to build materials with hierarchical architectures is still more or less out of reach in materials engineering. Understanding how materials spontaneously self-assemble is the biggest challenge in contemporary nanotechnology.”

“The example of diamond shows two things about crystalline materials. First, anything we know about an atom and its immediate environment (neighbours, distances, angles) holds for every similar atom throughout a piece of material, however large; and second, everything we know about the unit cell (its size, its shape, and its symmetry) also applies throughout an entire crystal […] and by extension throughout a material made of a myriad of randomly oriented crystallites. These two general propositions provide the basis and justification for lattice theories of material behaviour which were developed from the 1920s onwards. We know that every solid material must be held together by internal cohesive forces. If it were not, it would fly apart and turn into a gas. A simple lattice theory says that if we can work out what forces act on the atoms in one unit cell, then this should be enough to understand the cohesion of the entire crystal. […] In lattice models which describe the cohesion and dynamics of the atoms, the role of the electrons is mainly in determining the interatomic bonding and the stiffness of the bond-spring. But in many materials, and especially in metals and semiconductors, some of the electrons are free to move about within the lattice. A lattice model of electron behaviour combines a geometrical description of the lattice with a more or less mechanical view of the atomic cores, and a fully quantum theoretical description of the electrons themselves. We need only to take account of the outer electrons of the atoms, as the inner electrons are bound tightly into the cores and are not itinerant. The outer electrons are the ones that form chemical bonds, so they are also called the valence electrons.”

“It is harder to push atoms closer together than to pull them further apart. While atoms are soft on the outside, they have harder cores, and pushed together the cores start to collide. […] when we bring a trillion atoms together to form a crystal, it is the valence electrons that are disturbed as the atoms approach each other. As the atomic cores come close to the equilibrium spacing of the crystal, the electron states of the isolated atoms morph into a set of collective states […]. These collective electron states have a continuous distribution of energies up to a top level, and form a ‘band’. But the separation of the valence electrons into distinct electron-pair states is preserved in the band structure, so that we find that the collective states available to the entire population of valence electrons in the entire crystal form a set of bands […]. Thus in silicon, there are two main bands.”

“The perfect crystal has atoms occupying all the positions prescribed by the geometry of its crystal lattice. But real crystalline materials fall short of perfection […] For instance, an individual site may be unoccupied (a vacancy). Or an extra atom may be squeezed into the crystal at a position which is not a lattice position (an interstitial). An atom may fall off its lattice site, creating a vacancy and an interstitial at the same time. Sometimes a site is occupied by the wrong kind of atom. Point defects of this kind distort the crystal in their immediate neighbourhood. Vacancies free up diffusional movement, allowing atoms to hop from site to site. Larger scale defects invariably exist too. A complete layer of atoms or unit cells may terminate abruptly within the crystal to produce a line defect (a dislocation). […] There are materials which try their best to crystallize, but find it hard to do so. Many polymer materials are like this. […] The best they can do is to form small crystalline regions in which the molecules lie side by side over limited distances. […] Often the crystalline domains comprise about half the material: it is a semicrystal. […] Crystals can be formed from the melt, from solution, and from the vapour. All three routes are used in industry and in the laboratory. As a rule, crystals that grow slowly are good crystals. Geological time can give wonderful results. Often, crystals are grown on a seed, a small crystal of the same material deliberately introduced into the crystallization medium. If this is a melt, the seed can gradually be pulled out, drawing behind it a long column of new crystal material. This is the Czochralski process, an important method for making semiconductors. […] However it is done, crystals invariably grow by adding material to the surface of a small particle to make it bigger.”

“As we go down the Periodic Table of elements, the atoms get heavier much more quickly than they get bigger. The mass of a single atom of uranium at the bottom of the Table is about 25 times greater than that of an atom of the lightest engineering metal, beryllium, at the top, but its radius is only 40 per cent greater. […] The density of solid materials of every kind is fixed mainly by where the constituent atoms are in the Periodic Table. The packing arrangement in the solid has only a small influence, although the crystalline form of a substance is usually a little denser than the amorphous form […] The range of solid densities available is therefore quite limited. At the upper end we hit an absolute barrier, with nothing denser than osmium (22,590 kg/m³). At the lower end we have some slack, as we can make lighter materials by the trick of incorporating holes to make foams and sponges and porous materials of all kinds. […] in the entire catalogue of available materials there is a factor of about a thousand for ingenious people to play with, from say 20 to 20,000 kg/m³.”

“The expansion of materials as we increase their temperature is a universal tendency. It occurs because as we raise the temperature the thermal energy of the atoms and molecules increases correspondingly, and this fights against the cohesive forces of attraction. The mean distance of separation between atoms in the solid (or the liquid) becomes larger. […] As a general rule, the materials with small thermal expansivities are metals and ceramics with high melting temperatures. […] Although thermal expansion is a smooth process which continues from the lowest temperatures to the melting point, it is sometimes interrupted by sudden jumps […]. Changes in crystal structure at precise temperatures are commonplace in materials of all kinds. […] There is a cluster of properties which describe the thermal behaviour of materials. Besides the expansivity, there is the specific heat, and also the thermal conductivity. These properties show us, for example, that it takes about four times as much energy to increase the temperature of 1 kilogram of aluminium by 1°C as 1 kilogram of silver; and that good conductors of heat are usually also good conductors of electricity. At everyday temperatures there is not a huge difference in specific heat between materials. […] In all crystalline materials, thermal conduction arises from the diffusion of phonons from hot to cold regions. As they travel, the phonons are subject to scattering both by collisions with other phonons, and with defects in the material. This picture explains why the thermal conductivity falls as temperature rises”.

 

Materials science.
Metals.
Inorganic compound.
Organic compound.
Solid solution.
Copper. Bronze. Brass. Alloy.
Electrical conductivity.
Steel. Bessemer converter. Gamma iron. Alpha iron. Cementite. Martensite.
Phase diagram.
Equation of state.
Calcite. Limestone.
Birefringence.
Portland cement.
Cellulose.
Wood.
Ceramic.
Mineralogy.
Crystallography.
Laue diffraction pattern.
Silver bromide. Latent image. Photographic film. Henry Fox Talbot.
Graphene. Graphite.
Thermal expansion.
Invar.
Dulong–Petit law.
Wiedemann–Franz law.

 

November 14, 2017 Posted by | Biology, Books, Chemistry, Engineering, Physics

Common Errors in Statistics… (III)

This will be my last post about the book. I liked most of it, and I gave it four stars on goodreads, but that doesn’t mean there weren’t any observations included in the book with which I took issue/disagreed. Here’s one of the things I didn’t like:

“In the univariate [model selection] case, if the errors were not normally distributed, we could take advantage of permutation methods to obtain exact significance levels in tests of the coefficients. Exact permutation methods do not exist in the multivariable case.

When selecting variables to incorporate in a multivariable model, we are forced to perform repeated tests of hypotheses, so that the resultant p-values are no longer meaningful. One solution, if sufficient data are available, is to divide the dataset into two parts, using the first part to select variables, and the second part to test these same variables for significance.” (chapter 13)

The basic idea is to use the results of hypothesis tests to decide which variables to include in the model. This is both common and bad practice. I found it surprising that such a piece of advice would be included in this book, as I'd figured beforehand that this would precisely be the sort of thing a book like this one would tell people not to do. I've said this before multiple times on this blog, but I'll keep saying it, especially if/when I find this sort of advice in statistics textbooks: Using hypothesis testing as a basis for model selection is an invalid approach, and it's in general a terrible idea. “There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection.” (Burnham & Anderson). Use information criteria, not hypothesis tests, to make your model selection decisions. (And read Burnham & Anderson's book on these topics.)
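
To make the alternative concrete, here is a small Python sketch of what information-criterion-based selection can look like in practice, using statsmodels (an assumed dependency) and synthetic data; it scores every candidate subset of predictors by AIC instead of keeping or dropping variables according to the p-values of their coefficients. It is only a toy illustration of the principle, not a recommendation of exhaustive search for large problems.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=(n, 4))
# Only the first two predictors actually matter; the last two are noise.
y = 1.0 + 0.6 * x[:, 0] + 0.3 * x[:, 1] + rng.normal(size=n)

best = None
for k in range(x.shape[1] + 1):
    for subset in itertools.combinations(range(x.shape[1]), k):
        X = sm.add_constant(x[:, subset]) if subset else np.ones((n, 1))
        aic = sm.OLS(y, X).fit().aic
        if best is None or aic < best[0]:
            best = (aic, subset)

print(best)   # the AIC-minimizing subset, which should usually be (0, 1)
```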

Anyway, much of the stuff included in the book was good stuff and it’s a very decent book. I’ve added some quotes and observations from the last part of the book below.

“OLS is not the only modeling technique. To diminish the effect of outliers, and treat prediction errors as proportional to their absolute magnitude rather than their squares, one should use least absolute deviation (LAD) regression. This would be the case if the conditional distribution of the dependent variable were characterized by a distribution with heavy tails (compared to the normal distribution, increased probability of values far from the mean). One should also employ LAD regression when the conditional distribution of the dependent variable given the predictors is not symmetric and we wish to estimate its median rather than its mean value.
If it is not clear which variable should be viewed as the predictor and which the dependent variable, as is the case when evaluating two methods of measurement, then one should employ Deming or error in variable (EIV) regression.
If one’s primary interest is not in the expected value of the dependent variable but in its extremes (the number of bacteria that will survive treatment or the number of individuals who will fall below the poverty line), then one ought consider the use of quantile regression.
If distinct strata exist, one should consider developing separate regression models for each stratum, a technique known as ecological regression […] If one's interest is in classification or if the majority of one's predictors are dichotomous, then one should consider the use of classification and regression trees (CART) […] If the outcomes are limited to success or failure, one ought employ logistic regression. If the outcomes are counts rather than continuous measurements, one should employ a generalized linear model (GLM).”
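
A quick illustration of the first of these points, least absolute deviation regression under heavy-tailed errors. This is my own sketch on simulated data (numpy and statsmodels assumed available); LAD is obtained here as median regression via QuantReg, which is one standard way to fit it.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
# Heavy-tailed errors (t with 2 degrees of freedom): the setting where LAD
# is preferable to OLS, because squaring the residuals exaggerates outliers.
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)

ols = sm.OLS(y, X).fit()
lad = QuantReg(y, X).fit(q=0.5)   # median (LAD) regression
print(ols.params, lad.params)     # the LAD slope is typically closer to the true 2.0
```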

“Linear regression is a much misunderstood and mistaught concept. If a linear model provides a good fit to data, this does not imply that a plot of the dependent variable with respect to the predictor would be a straight line, only that a plot of the dependent variable with respect to some not-necessarily monotonic function of the predictor would be a line. For example, y = A + B log[x] and y = A cos(x) + B sin(x) are both linear models whose coefficients A and B might be derived by OLS or LAD methods. Y = Ax^5 is a linear model. Y = x^A is nonlinear. […] Perfect correlation (ρ² = 1) does not imply that two variables are identical but rather that one of them, Y, say, can be written as a linear function of the other, Y = a + bX, where b is the slope of the regression line and a is the intercept. […] Nonlinear regression methods are appropriate when the form of the nonlinear model is known in advance. For example, a typical pharmacological model will have the form A exp[bX] + C exp[dW]. The presence of numerous locally optimal but globally suboptimal solutions creates challenges, and validation is essential. […] To be avoided are a recent spate of proprietary algorithms available solely in software form that guarantee to find a best-fitting solution. In the words of John von Neumann, “With four parameters I can fit an elephant and with five I can make him wiggle his trunk.””
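
The point that "linear" means linear in the coefficients, not in the predictor, is worth a short demonstration. A sketch of mine, with synthetic data and statsmodels assumed available:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0.5, 10.0, size=400)
y = 2.0 + 3.0 * np.log(x) + rng.normal(scale=0.3, size=400)

# y = A + B*log(x) is a linear model because it is linear in A and B,
# so ordinary least squares estimates it after transforming the predictor.
fit = sm.OLS(y, sm.add_constant(np.log(x))).fit()
print(fit.params)   # close to (2.0, 3.0), even though y plotted against x is not a straight line
```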

“[T]he most common errors associated with quantile regression include: 1. Failing to evaluate whether the model form is appropriate, for example, forcing linear fit through an obvious nonlinear response. (Of course, this is also a concern with mean regression, OLS, LAD, or EIV.) 2. Trying to over interpret a single quantile estimate (say 0.85) with a statistically significant nonzero slope (p < 0.05) when the majority of adjacent quantiles (say 0.5 − 0.84 and 0.86 − 0.95) are clearly zero (p > 0.20). 3. Failing to use all the information a quantile regression provides. Even if you think you are only interested in relations near maximum (say 0.90 − 0.99), your understanding will be enhanced by having estimates (and sampling variation via confidence intervals) across a wide range of quantiles (say 0.01 − 0.99).”
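
Point 3 is easy to act on in software. A sketch of mine (synthetic, heteroscedastic data; statsmodels assumed available) that sweeps a range of quantiles and reports the slope with its confidence interval at each one:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(4)
n = 1000
x = rng.uniform(0, 10, n)
# The spread of y grows with x, so the slope differs across quantiles.
y = 1.0 + 0.5 * x + (0.2 + 0.3 * x) * rng.normal(size=n)
X = sm.add_constant(x)

for q in np.arange(0.1, 1.0, 0.1):
    res = QuantReg(y, X).fit(q=q)
    lo, hi = res.conf_int()[1]                     # confidence interval for the slope
    print(f"q={q:.1f}  slope={res.params[1]:.2f}  CI=({lo:.2f}, {hi:.2f})")
```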

“Survival analysis is used to assess time-to-event data including time to recovery and time to revision. Most contemporary survival analysis is built around the Cox model […] Possible sources of error in the application of this model include all of the following: *Neglecting the possible dependence of the baseline function λ0 on the predictors. *Overmatching, that is, using highly correlated predictors that may well mask each other’s effects. *Using the parametric Breslow or Kaplan–Meier estimators of the survival function rather than the nonparametric Nelson–Aalen estimator. *Excluding patients based on post-hoc criteria. Pathology workups on patients who died during the study may reveal that some of them were wrongly diagnosed. Regardless, patients cannot be eliminated from the study as we lack the information needed to exclude those who might have been similarly diagnosed but who are still alive at the conclusion of the study. *Failure to account for differential susceptibility (frailty) of the patients”.
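
For readers who have not met the Cox model in software, here is a bare-bones sketch of mine using simulated survival data and the third-party lifelines package (an assumed dependency); it has nothing to do with the book's examples and simply shows what fitting the model looks like.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # third-party package, assumed installed

rng = np.random.default_rng(5)
n = 1000
age = rng.normal(60, 10, n)
treat = rng.integers(0, 2, n)
# Exponential event times whose hazard depends on the covariates.
hazard = 0.01 * np.exp(0.03 * (age - 60) - 0.5 * treat)
t_event = rng.exponential(1 / hazard)
t_censor = rng.exponential(100, n)                 # independent censoring
time = np.minimum(t_event, t_censor)
event = (t_event <= t_censor).astype(int)

df = pd.DataFrame({"time": time, "event": event, "age": age, "treat": treat})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # estimated log-hazard ratios should be near 0.03 and -0.5
```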

“In reporting the results of your modeling efforts, you need to be explicit about the methods used, the assumptions made, the limitations on your model’s range of application, potential sources of bias, and the method of validation […] Multivariable regression is plagued by the same problems univariate regression is heir to, plus many more of its own. […] If choosing the correct functional form of a model in a univariate case presents difficulties, consider that in the case of k variables, there are k linear terms (should we use logarithms? should we add polynomial terms?) and k(k − 1) first-order cross products of the form x_i·x_k. Should we include any of the k(k − 1)(k − 2) second-order cross products? A common error is to attribute the strength of a relationship to the magnitude of the predictor’s regression coefficient […] Just scale the units in which the predictor is reported to see how erroneous such an assumption is. […] One of the main problems in multiple regression is multicollinearity, which is the correlation among predictors. Even relatively weak levels of multicollinearity are enough to generate instability in multiple regression models […]. A simple solution is to evaluate the correlation matrix M among predictors, and use this matrix to choose the predictors that are less correlated. […] Test M for each predictor, using the variance inflation factor (VIF) given by (1 − R²)^(−1), where R² is the multiple coefficient of determination of the predictor against all other predictors. If VIF is large for a given predictor (>8, say) delete this predictor and reestimate the model. […] Dropping collinear variables from the analysis can result in a substantial loss of power”.
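
The VIF rule of thumb is simple to compute. My own sketch, with deliberately collinear synthetic predictors and statsmodels assumed available; it also checks the definition, VIF = (1 − R²)^(−1), directly:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(6)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2, x3]))

for j in range(1, X.shape[1]):
    print(j, variance_inflation_factor(X, j))   # columns 1 and 2 should have large VIFs

# The same quantity from the definition: regress one predictor on the others.
r2 = sm.OLS(X[:, 1], X[:, [0, 2, 3]]).fit().rsquared
print(1 / (1 - r2))
```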

“It can be difficult to predict the equilibrium point for a supply-and-demand model, because producers change their price in response to demand and consumers change their demand in response to price. Failing to account for endogeneous variables can lead to biased estimates of the regression coefficients.
Endogeneity can arise not only as a result of omitted variables, but of measurement error, autocorrelated errors, simultaneity, and sample selection errors. One solution is to make use of instrument variables that should satisfy two conditions: 1. They should be correlated with the endogenous explanatory variables, conditional on the other covariates. 2. They should not be correlated with the error term in the explanatory equation, that is, they should not suffer from the same problem as the original predictor.
Instrumental variables are commonly used to estimate causal effects in contexts in which controlled experiments are not possible, for example in estimating the effects of past and projected government policies.”
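
To make the two conditions concrete, here is a hand-rolled two-stage least squares sketch of mine on simulated data (numpy and statsmodels assumed available). It is only meant to show the mechanics; in real work one would use a dedicated IV routine, since the second-stage standard errors below are not valid.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(size=n)                     # instrument: moves x but not the error term
u = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)       # endogenous regressor
y = 1.5 * x + 2.0 * u + rng.normal(size=n)

ols = sm.OLS(y, sm.add_constant(x)).fit()              # biased: x is correlated with the error

stage1 = sm.OLS(x, sm.add_constant(z)).fit()           # first stage: x on the instrument
stage2 = sm.OLS(y, sm.add_constant(stage1.fittedvalues)).fit()   # second stage: y on fitted x
print(ols.params[1], stage2.params[1])    # the 2SLS slope should be close to the true 1.5
```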

“[T]he following errors are frequently associated with factor analysis: *Applying it to datasets with too few cases in relation to the number of variables analyzed […], without noticing that correlation coefficients have very wide confidence intervals in small samples. *Using oblique rotation to get a number of factors bigger or smaller than the number of factors obtained in the initial extraction by principal components, as a way to show the validity of a questionnaire. For example, obtaining only one factor by principal components and using the oblique rotation to justify that there were two differentiated factors, even when the two factors were correlated and the variance explained by the second factor was very small. *Confusion among the total variance explained by a factor and the variance explained in the reduced factorial space. In this way a researcher interpreted that a given group of factors explaining 70% of the variance before rotation could explain 100% of the variance after rotation.”

“Poisson regression is appropriate when the dependent variable is a count, as is the case with the arrival of individuals in an emergency room. It is also applicable to the spatial distributions of tornadoes and of clusters of galaxies. To be applicable, the events underlying the outcomes must be independent […] A strong assumption of the Poisson regression model is that the mean and variance are equal (equidispersion). When the variance of a sample exceeds the mean, the data are said to be overdispersed. Fitting the Poisson model to overdispersed data can lead to misinterpretation of coefficients due to poor estimates of standard errors. Naturally occurring count data are often overdispersed due to correlated errors in time or space, or other forms of nonindependence of the observations. One solution is to fit a Poisson model as if the data satisfy the assumptions, but adjust the model-based standard errors usually employed. Another solution is to estimate a negative binomial model, which allows for scalar overdispersion.”
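
A small sketch of mine showing what overdispersion does in practice, with synthetic negative binomial counts and statsmodels assumed available; the Pearson chi-square per degree of freedom is a crude overdispersion check, and the negative binomial fit corresponds to the second solution mentioned above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.7 * x)
# Overdispersed counts: negative binomial draws with mean mu and variance > mu.
y = rng.negative_binomial(2, 2 / (2 + mu))

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()   # dispersion parameter held fixed by default

print(pois.pearson_chi2 / pois.df_resid)   # well above 1 here; roughly 1 for a well-specified Poisson model
print(pois.bse, nb.bse)                    # the Poisson standard errors are too optimistic
```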

“When multiple observations are collected for each principal sampling unit, we refer to the collected information as panel data, correlated data, or repeated measures. […] The dependency of observations violates one of the tenets of regression analysis: that observations are supposed to be independent and identically distributed or IID. Several concerns arise when observations are not independent. First, the effective number of observations (that is, the effective amount of information) is less than the physical number of observations […]. Second, any model that fails to specifically address [the] correlation is incorrect […]. Third, although the correct specification of the correlation will yield the most efficient estimator, that specification is not the only one to yield a consistent estimator.”

“The basic issue in deciding whether to utilize a fixed- or random-effects model is whether the sampling units (for which multiple observations are collected) represent the collection of most or all of the entities for which inference will be drawn. If so, the fixed-effects estimator is to be preferred. On the other hand, if those same sampling units represent a random sample from a larger population for which we wish to make inferences, then the random-effects estimator is more appropriate. […] Fixed- and random-effects models address unobserved heterogeneity. The random-effects model assumes that the panel-level effects are randomly distributed. The fixed-effects model assumes a constant disturbance that is a special case of the random-effects model. If the random-effects assumption is correct, then the random-effects estimator is more efficient than the fixed-effects estimator. If the random-effects assumption does not hold […], then the random effects model is not consistent. To help decide whether the fixed- or random-effects model is more appropriate, use the Durbin–Wu–Hausman test comparing coefficients from each model. […] Although fixed-effects estimators and random-effects estimators are referred to as subject-specific estimators, the GEEs available through PROC GENMOD in SAS or xtgee in Stata, are called population-averaged estimators. This label refers to the interpretation of the fitted regression coefficients. Subject-specific estimators are interpreted in terms of an effect for a given panel, whereas population-averaged estimators are interpreted in terms of an effect averaged over panels.”
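
The quote mentions Stata and SAS; a rough Python approximation of the same comparison is below, my own sketch on simulated panel data with statsmodels assumed available. The within (demeaning) estimator stands in for fixed effects, a random-intercept mixed model stands in for random effects, and a one-coefficient Hausman-type statistic compares them; it is a caricature of the proper Durbin–Wu–Hausman test, not a replacement for it.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_id, n_t = 200, 5
ids = np.repeat(np.arange(n_id), n_t)
alpha = rng.normal(size=n_id)[ids]                    # unobserved panel-level effect
x = 0.5 * alpha + rng.normal(size=n_id * n_t)         # correlated with the effect
y = 1.0 + 2.0 * x + alpha + rng.normal(size=n_id * n_t)
df = pd.DataFrame({"id": ids, "x": x, "y": y})

# Fixed effects via the within transformation (the standard errors here ignore
# the degrees of freedom used up by the demeaning, so treat them as approximate).
df["x_dm"] = df["x"] - df.groupby("id")["x"].transform("mean")
df["y_dm"] = df["y"] - df.groupby("id")["y"].transform("mean")
fe = sm.OLS(df["y_dm"], df[["x_dm"]]).fit()

# Random effects approximated by a random-intercept mixed model.
re = smf.mixedlm("y ~ x", df, groups=df["id"]).fit()

b_fe, b_re = fe.params["x_dm"], re.params["x"]
v_fe, v_re = fe.bse["x_dm"] ** 2, re.bse["x"] ** 2
# Hausman-type statistic for a single coefficient; in practice one guards
# against a non-positive variance difference before doing this division.
H = (b_fe - b_re) ** 2 / (v_fe - v_re)
print(b_fe, b_re, H)   # x is correlated with alpha, so the two estimates should diverge
```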

“A favorite example in comparing subject-specific and population-averaged estimators is to consider the difference in interpretation of regression coefficients for a binary outcome model on whether a child will exhibit symptoms of respiratory illness. The predictor of interest is whether or not the child’s mother smokes. Thus, we have repeated observations on children and their mothers. If we were to fit a subject-specific model, we would interpret the coefficient on smoking as the change in likelihood of respiratory illness as a result of the mother switching from not smoking to smoking. On the other hand, the interpretation of the coefficient in a population-averaged model is the likelihood of respiratory illness for the average child with a nonsmoking mother compared to the likelihood for the average child with a smoking mother. Both models offer equally valid interpretations. The interpretation of interest should drive model selection; some studies ultimately will lead to fitting both types of models. […] In addition to model-based variance estimators, fixed-effects models and GEEs [Generalized Estimating Equation models] also admit modified sandwich variance estimators. SAS calls this the empirical variance estimator. Stata refers to it as the Robust Cluster estimator. Whatever the name, the most desirable property of the variance estimator is that it yields inference for the regression coefficients that is robust to misspecification of the correlation structure. […] Specification of GEEs should include careful consideration of reasonable correlation structure so that the resulting estimator is as efficient as possible. To protect against misspecification of the correlation structure, one should base inference on the modified sandwich variance estimator. This is the default estimator in SAS, but the user must specify it in Stata.”
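
The same kind of population-averaged model can be sketched in Python's statsmodels rather than SAS or Stata. My own toy example below, with simulated clustered binary data; the working correlation is exchangeable and, as far as I can tell, statsmodels reports the robust (sandwich) covariance by default, which is the estimator the authors recommend basing inference on.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n_id, n_t = 300, 4
groups = np.repeat(np.arange(n_id), n_t)
u = rng.normal(scale=0.8, size=n_id)[groups]       # shared within-cluster effect
smoke = rng.integers(0, 2, n_id)[groups]           # cluster-level predictor
X = np.column_stack([np.ones(n_id * n_t), smoke])
p = 1 / (1 + np.exp(-(-1.0 + 0.7 * smoke + u)))
y = rng.binomial(1, p)

gee = sm.GEE(y, X, groups=groups,
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())   # the coefficient on the predictor has a population-averaged interpretation
```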

“There are three main approaches to [model] validation: 1. Independent verification (obtained by waiting until the future arrives or through the use of surrogate variables). 2. Splitting the sample (using one part for calibration, the other for verification) 3. Resampling (taking repeated samples from the original sample and refitting the model each time).
Goodness of fit is no guarantee of predictive success. […] Splitting the sample into two parts, one for estimating the model parameters, the other for verification, is particularly appropriate for validating time series models in which the emphasis is on prediction or reconstruction. If the observations form a time series, the more recent observations should be reserved for validation purposes. Otherwise, the data used for validation should be drawn at random from the entire sample. Unfortunately, when we split the sample and use only a portion of it, the resulting estimates will be less precise. […] The proportion to be set aside for validation purposes will depend upon the loss function. If both the goodness-of-fit error in the calibration sample and the prediction error in the validation sample are based on mean-squared error, Picard and Berk [1990] report that we can minimize their sum by using between a quarter and a third of the sample for validation purposes.”
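
Splitting the sample is the easiest of the three approaches to show in code. A minimal sketch of mine with synthetic data (numpy and statsmodels assumed available), holding out roughly a third of the observations for validation, drawn at random since this is not a time series:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 600
x = rng.normal(size=(n, 3))
y = 1.0 + x @ np.array([0.5, -0.3, 0.0]) + rng.normal(size=n)
X = sm.add_constant(x)

idx = rng.permutation(n)                 # random calibration/validation split
cal, val = idx[:400], idx[400:]
fit = sm.OLS(y[cal], X[cal]).fit()

mse_cal = np.mean((y[cal] - fit.predict(X[cal])) ** 2)   # goodness of fit
mse_val = np.mean((y[val] - fit.predict(X[val])) ** 2)   # predictive success
print(mse_cal, mse_val)   # the validation error is usually somewhat larger
```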

November 13, 2017 Posted by | Books, Statistics

Organic Chemistry (II)

I have included some observations from the second half of the book below, as well as some links to topics covered.

“[E]nzymes are used routinely to catalyse reactions in the research laboratory, and for a variety of industrial processes involving pharmaceuticals, agrochemicals, and biofuels. In the past, enzymes had to be extracted from natural sources — a process that was both expensive and slow. But nowadays, genetic engineering can incorporate the gene for a key enzyme into the DNA of fast growing microbial cells, allowing the enzyme to be obtained more quickly and in far greater yield. Genetic engineering has also made it possible to modify the amino acids making up an enzyme. Such modified enzymes can prove more effective as catalysts, accept a wider range of substrates, and survive harsher reaction conditions. […] New enzymes are constantly being discovered in the natural world as well as in the laboratory. Fungi and bacteria are particularly rich in enzymes that allow them to degrade organic compounds. It is estimated that a typical bacterial cell contains about 3,000 enzymes, whereas a fungal cell contains 6,000. Considering the variety of bacterial and fungal species in existence, this represents a huge reservoir of new enzymes, and it is estimated that only 3 per cent of them have been investigated so far.”

“One of the most important applications of organic chemistry involves the design and synthesis of pharmaceutical agents — a topic that is defined as medicinal chemistry. […] In the 19th century, chemists isolated chemical components from known herbs and extracts. Their aim was to identify a single chemical that was responsible for the extract’s pharmacological effects — the active principle. […] It was not long before chemists synthesized analogues of active principles. Analogues are structures which have been modified slightly from the original active principle. Such modifications can often improve activity or reduce side effects. This led to the concept of the lead compound — a compound with a useful pharmacological activity that could act as the starting point for further research. […] The first half of the 20th century culminated in the discovery of effective antimicrobial agents. […] The 1960s can be viewed as the birth of rational drug design. During that period there were important advances in the design of effective anti-ulcer agents, anti-asthmatics, and beta-blockers for the treatment of high blood pressure. Much of this was based on trying to understand how drugs work at the molecular level and proposing theories about why some compounds were active and some were not.”

“[R]ational drug design was boosted enormously towards the end of the century by advances in both biology and chemistry. The sequencing of the human genome led to the identification of previously unknown proteins that could serve as potential drug targets. […] Advances in automated, small-scale testing procedures (high-throughput screening) also allowed the rapid testing of potential drugs. In chemistry, advances were made in X-ray crystallography and NMR spectroscopy, allowing scientists to study the structure of drugs and their mechanisms of action. Powerful molecular modelling software packages were developed that allowed researchers to study how a drug binds to a protein binding site. […] the development of automated synthetic methods has vastly increased the number of compounds that can be synthesized in a given time period. Companies can now produce thousands of compounds that can be stored and tested for pharmacological activity. Such stores have been called chemical libraries and are routinely tested to identify compounds capable of binding with a specific protein target. These advances have boosted medicinal chemistry research over the last twenty years in virtually every area of medicine.”

“Drugs interact with molecular targets in the body such as proteins and nucleic acids. However, the vast majority of clinically useful drugs interact with proteins, especially receptors, enzymes, and transport proteins […] Enzymes are […] important drug targets. Drugs that bind to the active site and prevent the enzyme acting as a catalyst are known as enzyme inhibitors. […] Enzymes are located inside cells, and so enzyme inhibitors have to cross cell membranes in order to reach them—an important consideration in drug design. […] Transport proteins are targets for a number of therapeutically important drugs. For example, a group of antidepressants known as selective serotonin reuptake inhibitors prevent serotonin being transported into neurons by transport proteins.”

“The main pharmacokinetic factors are absorption, distribution, metabolism, and excretion. Absorption relates to how much of an orally administered drug survives the digestive enzymes and crosses the gut wall to reach the bloodstream. Once there, the drug is carried to the liver where a certain percentage of it is metabolized by metabolic enzymes. This is known as the first-pass effect. The ‘survivors’ are then distributed round the body by the blood supply, but this is an uneven process. The tissues and organs with the richest supply of blood vessels receive the greatest proportion of the drug. Some drugs may get ‘trapped’ or sidetracked. For example, fatty drugs tend to get absorbed in fat tissue and fail to reach their target. The kidneys are chiefly responsible for the excretion of drugs and their metabolites.”
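
A brief aside from me, not the book: the way absorption and the first-pass effect combine is easy to illustrate with a toy calculation. To a first approximation, oral bioavailability is the fraction of the dose absorbed across the gut wall multiplied by the fraction that escapes first-pass metabolism in the liver. The numbers below are invented purely for illustration:

```python
# Minimal sketch of how absorption and the first-pass effect combine.
# All numbers are hypothetical, chosen only to show the arithmetic.

def oral_bioavailability(fraction_absorbed: float, fraction_escaping_first_pass: float) -> float:
    """Fraction of an oral dose that reaches the systemic circulation unchanged."""
    return fraction_absorbed * fraction_escaping_first_pass

dose_mg = 100        # hypothetical oral dose
f_absorbed = 0.80    # assume 80% survives the gut and crosses the gut wall
f_escaping = 0.50    # assume 50% escapes metabolism on first pass through the liver

F = oral_bioavailability(f_absorbed, f_escaping)
print(f"F = {F:.0%}; about {dose_mg * F:.0f} mg of a {dose_mg} mg dose reaches the circulation")
```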

“Having identified a lead compound, it is important to establish which features of the compound are important for activity. This, in turn, can give a better understanding of how the compound binds to its molecular target. Most drugs are significantly smaller than molecular targets such as proteins. This means that the drug binds to quite a small region of the protein — a region known as the binding site […]. Within this binding site, there are binding regions that can form different types of intermolecular interactions such as van der Waals interactions, hydrogen bonds, and ionic interactions. If a drug has functional groups and substituents capable of interacting with those binding regions, then binding can take place. A lead compound may have several groups that are capable of forming intermolecular interactions, but not all of them are necessarily needed. One way of identifying the important binding groups is to crystallize the target protein with the drug bound to the binding site. X-ray crystallography then produces a picture of the complex which allows identification of binding interactions. However, it is not always possible to crystallize target proteins and so a different approach is needed. This involves synthesizing analogues of the lead compound where groups are modified or removed. Comparing the activity of each analogue with the lead compound can then determine whether a particular group is important or not. This is known as an SAR study, where SAR stands for structure–activity relationships. Once the important binding groups have been identified, the pharmacophore for the lead compound can be defined. This specifies the important binding groups and their relative position in the molecule.”
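
As an aside of mine (not from the book), the logic of an SAR study can be sketched in a few lines of code: make analogues in which one group at a time has been modified or removed, measure activity, and flag the groups whose removal costs the most activity. The compound modifications, activities, and the 50 per cent threshold below are all invented for illustration:

```python
# Toy illustration of SAR reasoning: compare each analogue (one group removed)
# against the lead compound. All names, activities, and the threshold are hypothetical.

lead_activity = 100.0   # arbitrary activity units for the lead compound

analogues = {            # activity measured after removing a single group (made-up data)
    "remove hydroxyl": 5.0,    # large drop -> the hydroxyl is an important binding group
    "remove methyl":   95.0,   # little change -> the methyl is probably not involved in binding
    "remove amine":    12.0,
}

for modification, activity in analogues.items():
    important = activity < 0.5 * lead_activity   # crude threshold for "activity drops a lot"
    verdict = "important for binding" if important else "not critical"
    print(f"{modification:16s} activity {activity:5.1f} -> group {verdict}")
```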

“One way of identifying the active conformation of a flexible lead compound is to synthesize rigid analogues where the binding groups are locked into defined positions. This is known as rigidification or conformational restriction. The pharmacophore will then be represented by the most active analogue. […] A large number of rotatable bonds is likely to have an adverse effect on drug activity. This is because a flexible molecule can adopt a large number of conformations, and only one of these shapes corresponds to the active conformation. […] In contrast, a totally rigid molecule containing the required pharmacophore will bind the first time it enters the binding site, resulting in greater activity. […] It is also important to optimize a drug’s pharmacokinetic properties such that it can reach its target in the body. Strategies include altering the drug’s hydrophilic/hydrophobic properties to improve absorption, and the addition of substituents that block metabolism at specific parts of the molecule. […] The drug candidate must [in general] have useful activity and selectivity, with minimal side effects. It must have good pharmacokinetic properties, lack toxicity, and preferably have no interactions with other drugs that might be taken by a patient. Finally, it is important that it can be synthesized as cheaply as possible”.
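
A quick illustration from me of why flexibility is costly: if one assumes, as a common rule of thumb (not a figure from the book), roughly three low-energy rotamers per rotatable bond, the number of conformations grows exponentially with the number of rotatable bonds, and only one of those conformations is the active one.

```python
# Rough illustration of the conformational-flexibility argument above.
# The "three rotamers per rotatable bond" figure is a rule-of-thumb assumption, not from the book.

rotamers_per_bond = 3
for rotatable_bonds in (1, 3, 5, 8, 10):
    conformations = rotamers_per_bond ** rotatable_bonds
    print(f"{rotatable_bonds:2d} rotatable bonds -> ~{conformations:6d} conformations "
          f"(~{100 / conformations:.3f}% of the time in the active one, all else being equal)")
```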

“Most drugs that have reached clinical trials for the treatment of Alzheimer’s disease have failed. Between 2002 and 2012, 244 novel compounds were tested in 414 clinical trials, but only one drug gained approval. This represents a failure rate of 99.6 per cent as against a failure rate of 81 per cent for anti-cancer drugs.”
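
The 99.6 per cent figure follows directly from the numbers quoted; a quick check (mine, not the book's):

```python
# Sanity check of the quoted Alzheimer's figures (2002-2012).
candidates = 244   # novel compounds tested in clinical trials
approved = 1       # drugs that gained approval

failure_rate = 1 - approved / candidates
print(f"Failure rate: {failure_rate:.1%}")   # 99.6%

# The quoted 81% failure rate for anti-cancer drugs implies roughly
# 19 approvals per 100 candidates entering trials over a comparable period.
print(f"Implied anti-cancer approvals per 100 candidates: {100 * (1 - 0.81):.0f}")
```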

“It takes about ten years and £160 million to develop a new pesticide […] The volume of global sales increased 47 per cent in the ten-year period between 2002 and 2012, while, in 2012, total sales amounted to £31 billion. […] In many respects, agrochemical research is similar to pharmaceutical research. The aim is to find pesticides that are toxic to ‘pests’, but relatively harmless to humans and beneficial life forms. The strategies used to achieve this goal are also similar. Selectivity can be achieved by designing agents that interact with molecular targets that are present in pests, but not other species. Another approach is to take advantage of any metabolic reactions that are unique to pests. An inactive prodrug could then be designed that is metabolized to a toxic compound in the pest, but remains harmless in other species. Finally, it might be possible to take advantage of pharmacokinetic differences between pests and other species, such that a pesticide reaches its target more easily in the pest. […] Insecticides are being developed that act on a range of different targets as a means of tackling resistance. If resistance should arise to an insecticide acting on one particular target, then one can switch to using an insecticide that acts on a different target. […] Several insecticides act as insect growth regulators (IGRs) and target the moulting process rather than the nervous system. In general, IGRs take longer to kill insects but are thought to cause less detrimental effects to beneficial insects. […] Herbicides control weeds that would otherwise compete with crops for water and soil nutrients. More is spent on herbicides than any other class of pesticide […] The synthetic agent 2,4-D […] was synthesized by ICI in 1940 as part of research carried out on biological weapons […] It was first used commercially in 1946 and proved highly successful in eradicating weeds in cereal grass crops such as wheat, maize, and rice. […] The compound […] is still the most widely used herbicide in the world.”

“The type of conjugated system present in a molecule determines the specific wavelength of light absorbed. In general, the more extended the conjugation, the higher the wavelength absorbed. For example, β-carotene […] is the molecule responsible for the orange colour of carrots. It has a conjugated system involving eleven double bonds, and absorbs light in the blue region of the spectrum. It appears red because the reflected light lacks the blue component. Zeaxanthin is very similar in structure to β-carotene, and is responsible for the yellow colour of corn. […] Lycopene absorbs blue-green light and is responsible for the red colour of tomatoes, rose hips, and berries. Chlorophyll absorbs red light and is coloured green. […] Scented molecules interact with olfactory receptors in the nose. […] there are around 400 different olfactory protein receptors in humans […] The natural aroma of a rose is due mainly to 2-phenylethanol, geraniol, and citronellol.”

“Over the last fifty years, synthetic materials have largely replaced natural materials such as wood, leather, wool, and cotton. Plastics and polymers are perhaps the most visible sign of how organic chemistry has changed society. […] It is estimated that production of global plastics was 288 million tons in 2012 […] Polymerization involves linking molecules called monomers into molecular strands called polymers […]. By varying the nature of the monomer, a huge range of different polymers can be synthesized with widely differing properties. The idea of linking small molecular building blocks into polymers is not a new one. Nature has been at it for millions of years using amino acid building blocks to make proteins, and nucleotide building blocks to make nucleic acids […] The raw materials for plastics come mainly from oil, which is a finite resource. Therefore, it makes sense to recycle or depolymerize plastics to recover that resource. Virtually all plastics can be recycled, but it is not necessarily economically feasible to do so. Traditional recycling of polyesters, polycarbonates, and polystyrene tends to produce inferior plastics that are suitable only for low-quality goods.”

Adipic acid.
Protease. Lipase. Amylase. Cellulase.
Reflectin.
Agonist.
Antagonist.
Prodrug.
Conformational change.
Process chemistry (chemical development).
Clinical trial.
Phenylbutazone.
Pesticide.
Dichlorodiphenyltrichloroethane.
Aldrin.
N-Methyl carbamate.
Organophosphates.
Pyrethrum.
Neonicotinoid.
Colony collapse disorder.
Ecdysone receptor.
Methoprene.
Tebufenozide.
Fungicide.
Quinone outside inhibitors (QoI).
Allelopathy.
Glyphosate.
11-cis retinal.
Chromophore.
Synthetic dyes.
Methylene blue.
Cryptochrome.
Pheromone.
Artificial sweeteners.
Miraculin.
Addition polymer.
Condensation polymer.
Polyethylene.
Polypropylene.
Polyvinyl chloride.
Bisphenol A.
Vulcanization.
Kevlar.
Polycarbonate.
Polyhydroxyalkanoates.
Bioplastic.
Nanochemistry.
Allotropy.
Allotropes of carbon.
Carbon nanotube.
Rotaxane.
π-interactions.
Molecular switch.

November 11, 2017 Posted by | Biology, Books, Botany, Chemistry, Medicine, Pharmacology, Zoology | Leave a comment

Quotes

i. “Much of the skill in doing science resides in knowing where in the hierarchy you are looking – and, as a consequence, what is relevant and what is not.” (Philip Ball – Molecules: A Very Short Introduction)

ii. “…statistical software will no more make one a statistician than a scalpel will turn one into a neurosurgeon. Allowing these tools to do our thinking is a sure recipe for disaster.” (Philip Good & James Hardin, Common Errors in Statistics (and how to avoid them))

iii. “Just as 95% of research efforts are devoted to data collection, 95% of the time remaining should be spent on ensuring that the data collected warrant analysis.” (-ll-)

iv. “One reason why many statistical models are incomplete is that they do not specify the sources of randomness generating variability among agents, i.e., they do not specify why otherwise observationally identical people make different choices and have different outcomes given the same choice.” (James J. Heckman, -ll-)

v. “If a thing is not worth doing, it is not worth doing well.” (J. W. Tukey, -ll-)

vi. “Hypocrisy is the lubricant of society.” (David Hull)

vii. “Every time I fire a linguist, the performance of our speech recognition system goes up.” (Fred Jelinek)

viii. “For most of my life, one of the persons most baffled by my own work was myself.” (Benoît Mandelbrot)

ix. “I’m afraid love is just a word.” (Harry Mulisch)

x. “The worst thing about death is that you once were, and now you are not.” (José Saramago)

xi. “Sometimes the most remarkable things seem commonplace. I mean, when you think about it, jet travel is pretty freaking remarkable. You get in a plane, it defies the gravity of an entire planet by exploiting a loophole with air pressure, and it flies across distances that would take months or years to cross by any means of travel that has been significant for more than a century or three. You hurtle above the earth at enough speed to kill you instantly should you bump into something, and you can only breathe because someone built you a really good tin can that has seams tight enough to hold in a decent amount of air. Hundreds of millions of man-hours of work and struggle and research, blood, sweat, tears, and lives have gone into the history of air travel, and it has totally revolutionized the face of our planet and societies.
But get on any flight in the country, and I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about the drinks. The drinks, people.” (Jim Butcher, Summer Knight)

xii. “The best way to keep yourself from doing something grossly self-destructive and stupid is to avoid the temptation to do it. For example, it is far easier to fend off inappropriate amorous desires if one runs screaming from the room every time a pretty girl comes in.” (Jim Butcher, Proven Guilty)

xiii. “One certain effect of war is to diminish freedom of expression. Patriotism becomes the order of the day, and those who question the war are seen as traitors, to be silenced and imprisoned.” (Howard Zinn)

xiv. “While inexact models may mislead, attempting to allow for every contingency a priori is impractical. Thus models must be built by an iterative feedback process in which an initial parsimonious model may be modified when diagnostic checks applied to residuals indicate the need.” (G. E. P. Box)

xv. “In our analysis of complex systems (like the brain and language) we must avoid the trap of trying to find master keys. Because of the mechanisms by which complex systems structure themselves, single principles provide inadequate descriptions. We should rather be sensitive to complex and self-organizing interactions and appreciate the play of patterns that perpetually transforms the system itself as well as the environment in which it operates.” (Paul Cilliers)

xvi. “The nature of the chemical bond is the problem at the heart of all chemistry.” (Bryce Crawford)

xvii. “When there’s a will to fail, obstacles can be found.” (John McCarthy)

xviii. “We understand human mental processes only slightly better than a fish understands swimming.” (-ll-)

xix. “He who refuses to do arithmetic is doomed to talk nonsense.” (-ll-)

xx. “The trouble with men is that they have limited minds. That’s the trouble with women, too.” (Joanna Russ)


November 10, 2017 Posted by | Books, Quotes/aphorisms | Leave a comment

Organic Chemistry (I)

This book’s a bit longer than most ‘A very short introduction to…’ publications, and it’s quite dense at times, but it includes a lot of interesting stuff. It took me a while to finish it, as I put it away for a while when I hit some of the more demanding content, but I picked it up again later and really enjoyed most of the coverage. In the end I decided that I wouldn’t be doing the book justice if I limited myself to just one post, so this will be only the first of two posts about the book, covering roughly the first half of it.

As usual I have included in my post both some observations from the book (with a few links added to the quotes where I figured they might be helpful) and some wiki links to topics discussed in the book.

“Organic chemistry is a branch of chemistry that studies carbon-based compounds in terms of their structure, properties, and synthesis. In contrast, inorganic chemistry covers the chemistry of all the other elements in the periodic table […] carbon-based compounds are crucial to the chemistry of life. [However] organic chemistry has come to be defined as the chemistry of carbon-based compounds, whether they originate from a living system or not. […] To date, 16 million compounds have been synthesized in organic chemistry laboratories across the world, with novel compounds being synthesized every day. […] The list of commodities that rely on organic chemistry includes plastics, synthetic fabrics, perfumes, colourings, sweeteners, synthetic rubbers, and many other items that we use every day.”

“For a neutral carbon atom, there are six electrons occupying the space around the nucleus […] The electrons in the outer shell are defined as the valence electrons and these determine the chemical properties of the atom. The valence electrons are easily ‘accessible’ compared to the two electrons in the first shell. […] There is great significance in carbon being in the middle of the periodic table. Elements which are close to the left-hand side of the periodic table can lose their valence electrons to form positive ions. […] Elements on the right-hand side of the table can gain electrons to form negatively charged ions. […] The impetus for elements to form ions is the stability that is gained by having a full outer shell of electrons. […] Ion formation is feasible for elements situated to the left or the right of the periodic table, but it is less feasible for elements in the middle of the table. For carbon to gain a full outer shell of electrons, it would have to lose or gain four valence electrons, but this would require far too much energy. Therefore, carbon achieves a stable, full outer shell of electrons by another method. It shares electrons with other elements to form bonds. Carbon excels in this and can be considered chemistry’s ultimate elemental socialite. […] Carbon’s ability to form covalent bonds with other carbon atoms is one of the principal reasons why so many organic molecules are possible. Carbon atoms can be linked together in an almost limitless way to form a mind-blowing variety of carbon skeletons. […] carbon can form a bond to hydrogen, but it can also form bonds to atoms such as nitrogen, phosphorus, oxygen, sulphur, fluorine, chlorine, bromine, and iodine. As a result, organic molecules can contain a variety of different elements. Further variety can arise because it is possible for carbon to form double bonds or triple bonds to a variety of other atoms. The most common double bonds are formed between carbon and oxygen, carbon and nitrogen, or between two carbon atoms. […] The most common triple bonds are found between carbon and nitrogen, or between two carbon atoms.”

“[C]hirality has huge importance. The two enantiomers of a chiral molecule behave differently when they interact with other chiral molecules, and this has important consequences in the chemistry of life. As an analogy, consider your left and right hands. These are asymmetric in shape and are non-superimposable mirror images. Similarly, a pair of gloves are non-superimposable mirror images. A left hand will fit snugly into a left-hand glove, but not into a right-hand glove. In the molecular world, a similar thing occurs. The proteins in our bodies are chiral molecules which can distinguish between the enantiomers of other molecules. For example, enzymes can distinguish between the two enantiomers of a chiral compound and catalyse a reaction with one of the enantiomers but not the other.”

“A key concept in organic chemistry is the functional group. A functional group is essentially a distinctive arrangement of atoms and bonds. […] Functional groups react in particular ways, and so it is possible to predict how a molecule might react based on the functional groups that are present. […] it is impossible to build a molecule atom by atom. Instead, target molecules are built by linking up smaller molecules. […] The organic chemist needs to have a good understanding of the reactions that are possible between different functional groups when choosing the molecular building blocks to be used for a synthesis. […] There are many […] reasons for carrying out FGTs [functional group transformations], especially when synthesizing complex molecules. For example, a starting material or a synthetic intermediate may lack a functional group at a key position of the molecular structure. Several reactions may then be required to introduce that functional group. On other occasions, a functional group may be added to a particular position then removed at a later stage. One reason for adding such a functional group would be to block an unwanted reaction at that position of the molecule. Another common situation is where a reactive functional group is converted to a less reactive functional group such that it does not interfere with a subsequent reaction. Later on, the original functional group is restored by another functional group transformation. This is known as a protection/deprotection strategy. The more complex the target molecule, the greater the synthetic challenge. Complexity is related to the number of rings, functional groups, substituents, and chiral centres that are present. […] The more reactions that are involved in a synthetic route, the lower the overall yield. […] retrosynthesis is a strategy by which organic chemists design a synthesis before carrying it out in practice. It is called retrosynthesis because the design process involves studying the target structure and working backwards to identify how that molecule could be synthesized from simpler starting materials. […] a key stage in retrosynthesis is identifying a bond that can be ‘disconnected’ to create those simpler molecules.”

“[V]ery few reactions produce the spectacular visual and audible effects observed in chemistry demonstrations. More typically, reactions involve mixing together two colourless solutions to produce another colourless solution. Temperature changes are a bit more informative. […] However, not all reactions generate heat, and monitoring the temperature is not a reliable way of telling whether the reaction has gone to completion or not. A better approach is to take small samples of the reaction solution at various times and to test these by chromatography or spectroscopy. […] If a reaction is taking place very slowly, different reaction conditions could be tried to speed it up. This could involve heating the reaction, carrying out the reaction under pressure, stirring the contents vigorously, ensuring that the reaction is carried out in a dry atmosphere, using a different solvent, using a catalyst, or using one of the reagents in excess. […] There are a large number of variables that can affect how efficiently reactions occur, and organic chemists in industry are often employed to develop the ideal conditions for a specific reaction. This is an area of organic chemistry known as chemical development. […] Once a reaction has been carried out, it is necessary to isolate and purify the reaction product. This often proves more time-consuming than carrying out the reaction itself. Ideally, one would remove the solvent used in the reaction and be left with the product. However, in most reactions this is not possible as other compounds are likely to be present in the reaction mixture. […] it is usually necessary to carry out procedures that will separate and isolate the desired product from these other compounds. This is known as ‘working up’ the reaction.”

“Proteins are large molecules (macromolecules) which serve a myriad of purposes, and are essentially polymers constructed from molecular building blocks called amino acids […]. In humans, there are twenty different amino acids having the same ‘head group’, consisting of a carboxylic acid and an amine attached to the same carbon atom […] The amino acids are linked up by the carboxylic acid of one amino acid reacting with the amine group of another to form an amide link. Since a protein is being produced, the amide bond is called a peptide bond, and the final protein consists of a polypeptide chain (or backbone) with different side chains ‘hanging off’ the chain […]. The sequence of amino acids present in the polypeptide sequence is known as the primary structure. Once formed, a protein folds into a specific 3D shape […] Nucleic acids […] are another form of biopolymer, and are formed from molecular building blocks called nucleotides. These link up to form a polymer chain where the backbone consists of alternating sugar and phosphate groups. There are two forms of nucleic acid — deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). In DNA, the sugar is deoxyribose, whereas the sugar in RNA is ribose. Each sugar ring has a nucleic acid base attached to it. For DNA, there are four different nucleic acid bases called adenine (A), thymine (T), cytosine (C), and guanine (G) […]. These bases play a crucial role in the overall structure and function of nucleic acids. […] DNA is actually made up of two DNA strands […] where the sugar-phosphate backbones are intertwined to form a double helix. The nucleic acid bases point into the centre of the helix, and each nucleic acid base ‘pairs up’ with a nucleic acid base on the opposite strand through hydrogen bonding. The base pairing is specifically between adenine and thymine, or between cytosine and guanine. This means that one polymer strand is complementary to the other, a feature that is crucial to DNA’s function as the storage molecule for genetic information. […] [E]ach strand […] act as the template for the creation of a new strand to produce two identical ‘daughter’ DNA double helices […] [A] genetic alphabet of four letters (A, T, G, C) […] code for twenty amino acids. […] [A]n amino acid is coded, not by one nucleotide, but by a set of three. The number of possible triplet combinations using four ‘letters’ is more than enough to encode all the amino acids.”
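
Two of the points above, that one strand fixes the other through A-T/C-G pairing and that a triplet code over four letters gives more than enough combinations for twenty amino acids, can be made concrete with a few lines of code (my illustration, using an arbitrary example sequence):

```python
# Minimal sketch: (i) one DNA strand determines its partner via A-T / C-G pairing,
# and (ii) a triplet code over four bases gives 4**3 = 64 possible codons,
# comfortably more than the twenty amino acids that need encoding.

PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand: str) -> str:
    """Return the base-paired partner strand (written in the opposite direction)."""
    return "".join(PAIRING[base] for base in reversed(strand))

template = "ATGGCATTC"                 # arbitrary example sequence
print(complementary_strand(template))  # GAATGCCAT

print(4 ** 3, "possible codons, versus 20 amino acids to encode")   # 64
```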

“Proteins have a variety of functions. Some proteins, such as collagen, keratin, and elastin, have a structural role. Others catalyse life’s chemical reactions and are called enzymes. They have a complex 3D shape, which includes a cavity called the active site […]. This is where the enzyme binds the molecules (substrates) that undergo the enzyme-catalysed reaction. […] A substrate has to have the correct shape to fit an enzyme’s active site, but it also needs binding groups to interact with that site […]. These interactions hold the substrate in the active site long enough for a reaction to occur, and typically involve hydrogen bonds, as well as van der Waals and ionic interactions. When a substrate binds, the enzyme normally undergoes an induced fit. In other words, the shape of the active site changes slightly to accommodate the substrate, and to hold it as tightly as possible. […] Once a substrate is bound to the active site, amino acids in the active site catalyse the subsequent reaction.”

“Proteins called receptors are involved in chemical communication between cells and respond to chemical messengers called neurotransmitters if they are released from nerves, or hormones if they are released by glands. Most receptors are embedded in the cell membrane, with part of their structure exposed on the outer surface of the cell membrane, and another part exposed on the inner surface. On the outer surface they contain a binding site that binds the molecular messenger. An induced fit then takes place that activates the receptor. This is very similar to what happens when a substrate binds to an enzyme […] The induced fit is crucial to the mechanism by which a receptor conveys a message into the cell — a process known as signal transduction. By changing shape, the protein initiates a series of molecular events that influences the internal chemistry within the cell. For example, some receptors are part of multiprotein complexes called ion channels. When the receptor changes shape, it causes the overall ion channel to change shape. This opens up a central pore allowing ions to flow across the cell membrane. The ion concentration within the cell is altered, and that affects chemical reactions within the cell, which ultimately lead to observable results such as muscle contraction. Not all receptors are membrane-bound. For example, steroid receptors are located within the cell. This means that steroid hormones need to cross the cell membrane in order to reach their target receptors. Transport proteins are also embedded in cell membranes and are responsible for transporting polar molecules such as amino acids into the cell. They are also important in controlling nerve action since they allow nerves to capture released neurotransmitters, such that they have a limited period of action.”

“RNA […] is crucial to protein synthesis (translation). There are three forms of RNA — messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA). mRNA carries the genetic code for a particular protein from DNA to the site of protein production. Essentially, mRNA is a single-strand copy of a specific section of DNA. The process of copying that information is known as transcription. tRNA decodes the triplet code on mRNA by acting as a molecular adaptor. At one end of tRNA, there is a set of three bases (the anticodon) that can base pair to a set of three bases on mRNA (the codon). An amino acid is linked to the other end of the tRNA and the type of amino acid present is related to the anticodon that is present. When tRNA with the correct anticodon base pairs to the codon on mRNA, it brings the amino acid encoded by that codon. rRNA is a major constituent of a structure called a ribosome, which acts as the factory for protein production. The ribosome binds mRNA then coordinates and catalyses the translation process.”
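
Again as an aside of mine: the decoding step can be sketched as reading the mRNA three bases at a time and looking each codon up in a table. Only a handful of real codon assignments are included below, just to show the mechanics; the full genetic code has 64 entries.

```python
# Minimal sketch of translation: step through the mRNA codon by codon and look
# each one up in a (deliberately tiny) codon table.

CODON_TABLE = {          # a small subset of the standard genetic code
    "AUG": "Met",        # methionine; also the start codon
    "UUU": "Phe",        # phenylalanine
    "GGC": "Gly",        # glycine
    "UAA": "STOP",       # one of the stop codons
}

def translate(mrna: str):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):              # read three bases at a time
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```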

Organic chemistry.
Carbon.
Stereochemistry.
Delocalization.
Hydrogen bond.
Van der Waals forces.
Ionic bonding.
Chemoselectivity.
Coupling reaction.
Chemical polarity.
Crystallization.
Elemental analysis.
NMR spectroscopy.
Polymerization.
Miller–Urey experiment.
Vester-Ulbricht hypothesis.
Oligonucleotide.
RNA world.
Ribozyme.

November 9, 2017 Posted by | Biology, Books, Chemistry, Genetics | Leave a comment