Econstudentlog

Big Data (II)

Below I have added a few observations from the last half of the book, as well as some coverage-related links to topics of interest.

“With big data, using correlation creates […] problems. If we consider a massive dataset, algorithms can be written that, when applied, return a large number of spurious correlations that are totally independent of the views, opinions, or hypotheses of any human being. Problems arise with false correlations — for example, divorce rate and margarine consumption […]. [W]hen the number of variables becomes large, the number of spurious correlations also increases. This is one of the main problems associated with trying to extract useful information from big data, because in doing so, as with mining big data, we are usually looking for patterns and correlations. […] one of the reasons Google Flu Trends failed in its predictions was because of these problems. […] The Google Flu Trends project hinged on the known result that there is a high correlation between the number of flu-related online searches and visits to the doctor’s surgery. If a lot of people in a particular area are searching for flu-related information online, it might then be possible to predict the spread of flu cases to adjoining areas. Since the interest is in finding trends, the data can be anonymized and hence no consent from individuals is required. Using their five-year accumulation of data, which they limited to the same time-frame as the CDC data, and so collected only during the flu season, Google counted the weekly occurrence of each of the fifty million most common search queries covering all subjects. These search query counts were then compared with the CDC flu data, and those with the highest correlation were used in the flu trends model. […] The historical data provided a baseline from which to assess current flu activity on the chosen search terms and by comparing the new real-time data against this, a classification on a scale from 1 to 5, where 5 signified the most severe, was established. Used in the 2011–12 and 2012–13 US flu seasons, Google’s big data algorithm famously failed to deliver. After the flu season ended, its predictions were checked against the CDC’s actual data. […] the Google Flu Trends algorithm over-predicted the number of flu cases by at least 50 per cent during the years it was used.” [For more details on why blind/mindless hypothesis testing/p-value hunting on big data sets is usually a terrible idea, see e.g. Burnham & Anderson, US]
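
To make the point about spurious correlations more concrete, here is a small illustration of my own (not from the book) in Python: generate a few thousand mutually independent random series and count how many pairs nonetheless look strongly 'related'. All the numbers are made up for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs = 50      # e.g. weekly observations (hypothetical)
n_vars = 2000   # number of unrelated candidate series (hypothetical)

# Independent random walks -- no true relationship between any pair.
data = rng.normal(size=(n_vars, n_obs)).cumsum(axis=1)

# Sample correlation of every pair of series.
corr = np.corrcoef(data)
upper = corr[np.triu_indices(n_vars, k=1)]

print(f"pairs examined: {upper.size}")
print(f"pairs with |r| > 0.9: {(np.abs(upper) > 0.9).sum()}")
```

Nothing links any of these series, yet a large number of pairs will typically clear that bar simply because so many comparisons are being made; this is essentially the trap the flu-trends model fell into when screening fifty million candidate search terms.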

“The data Google used [in the Google Flu Trends algorithm], collected selectively from search engine queries, produced results [with] obvious bias […] for example by eliminating everyone who does not use a computer and everyone using other search engines. Another issue that may have led to poor results was that customers searching Google on ‘flu symptoms’ would probably have explored a number of flu-related websites, resulting in their being counted several times and thus inflating the numbers. In addition, search behaviour changes over time, especially during an epidemic, and this should be taken into account by updating the model regularly. Once errors in prediction start to occur, they tend to cascade, which is what happened with the Google Flu Trends predictions: one week’s errors were passed along to the next week. […] [Similarly,] the Ebola prediction figures published by WHO [during the West African Ebola virus epidemic] were over 50 per cent higher than the cases actually recorded. The problems with both the Google Flu Trends and Ebola analyses were similar in that the prediction algorithms used were based only on initial data and did not take into account changing conditions. Essentially, each of these models assumed that the number of cases would continue to grow at the same rate in the future as they had before the medical intervention began. Clearly, medical and public health measures could be expected to have positive effects and these had not been integrated into the model.”

“Every time a patient visits a doctor’s office or hospital, electronic data is routinely collected. Electronic health records constitute legal documentation of a patient’s healthcare contacts: details such as patient history, medications prescribed, and test results are recorded. Electronic health records may also include sensor data such as Magnetic Resonance Imaging (MRI) scans. The data may be anonymized and pooled for research purposes. It is estimated that in 2015, an average hospital in the USA will store over 600 Tb of data, most of which is unstructured. […] Typically, the human genome contains about 20,000 genes and mapping such a genome requires about 100 Gb of data. […] The interdisciplinary field of bioinformatics has flourished as a consequence of the need to manage and analyze the big data generated by genomics. […] Cloud-based systems give authorized users access to data anywhere in the world. To take just one example, the NHS plans to make patient records available via smartphone by 2018. These developments will inevitably generate more attacks on the data they employ, and considerable effort will need to be expended in the development of effective security methods to ensure the safety of that data. […] There is no absolute certainty on the Web. Since e-documents can be modified and updated without the author’s knowledge, they can easily be manipulated. This situation could be extremely damaging in many different situations, such as the possibility of someone tampering with electronic medical records. […] [S]ome of the problems facing big data systems [include] ensuring they actually work as intended, [that they] can be fixed when they break down, and [that they] are tamper-proof and accessible only to those with the correct authorization.”

“With transactions being made through sales and auction bids, eBay generates approximately 50 Tb of data a day, collected from every search, sale, and bid made on their website by a claimed 160 million active users in 190 countries. […] Amazon collects vast amounts of data including addresses, payment information, and details of everything an individual has ever looked at or bought from them. Amazon uses its data in order to encourage the customer to spend more money with them by trying to do as much of the customer’s market research as possible. In the case of books, for example, Amazon needs to provide not only a huge selection but to focus recommendations on the individual customer. […] Many customers use smartphones with GPS capability, allowing Amazon to collect data showing time and location. This substantial amount of data is used to construct customer profiles allowing similar individuals and their recommendations to be matched. Since 2013, Amazon has been selling customer metadata to advertisers in order to promote their Web services operation […] Netflix collects and uses huge amounts of data to improve customer service, such as offering recommendations to individual customers while endeavouring to provide reliable streaming of its movies. Recommendation is at the heart of the Netflix business model and most of its business is driven by the data-based recommendations it is able to offer customers. Netflix now tracks what you watch, what you browse, what you search for, and the day and time you do all these things. It also records whether you are using an iPad, TV, or something else. […] As well as collecting search data and star ratings, Netflix can now keep records on how often users pause or fast forward, and whether or not they finish watching each programme they start. They also monitor how, when, and where they watched the programme, and a host of other variables too numerous to mention.”

“Data science is becoming a popular study option in universities but graduates so far have been unable to meet the demands of commerce and industry, where positions in data science offer high salaries to experienced applicants. Big data for commercial enterprises is concerned with profit, and disillusionment will set in quickly if an over-burdened data analyst with insufficient experience fails to deliver the expected positive results. All too often, firms are asking for a one-size-fits-all model of data scientist who is expected to be competent in everything from statistical analysis to data storage and data security.”

“In December 2016, Yahoo! announced that a data breach involving over one billion user accounts had occurred in August 2013. Dubbed the biggest ever cyber theft of personal data, or at least the biggest ever divulged by any company, thieves apparently used forged cookies, which allowed them access to accounts without the need for passwords. This followed the disclosure of an attack on Yahoo! in 2014, when 500 million accounts were compromised. […] The list of big data security breaches increases almost daily. Data theft, data ransom, and data sabotage are major concerns in a data-centric world. There have been many scares regarding the security and ownership of personal digital data. Before the digital age we used to keep photos in albums and negatives were our backup. After that, we stored our photos electronically on a hard-drive in our computer. This could possibly fail and we were wise to have back-ups but at least the files were not publicly accessible. Many of us now store data in the Cloud. […] If you store all your photos in the Cloud, it’s highly unlikely with today’s sophisticated systems that you would lose them. On the other hand, if you want to delete something, maybe a photo or video, it becomes difficult to ensure all copies have been deleted. Essentially you have to rely on your provider to do this. Another important issue is controlling who has access to the photos and other data you have uploaded to the Cloud. […] although the Internet and Cloud-based computing are generally thought of as wireless, they are anything but; data is transmitted through fibre-optic cables laid under the oceans. Nearly all digital communication between continents is transmitted in this way. My email will be sent via transatlantic fibre-optic cables, even if I am using a Cloud computing service. The Cloud, an attractive buzz word, conjures up images of satellites sending data across the world, but in reality Cloud services are firmly rooted in a distributed network of data centres providing Internet access, largely through cables. Fibre-optic cables provide the fastest means of data transmission and so are generally preferable to satellites.”

Links:

Health care informatics.
Electronic health records.
European influenza surveillance network.
Overfitting.
Public Health Emergency of International Concern.
Virtual Physiological Human project.
Watson (computer).
Natural language processing.
Anthem medical data breach.
Electronic delay storage automatic calculator (EDSAC). LEO (computer). ICL (International Computers Limited).
E-commerce. Online shopping.
Pay-per-click advertising model. Google AdWords. Click fraud. Targeted advertising.
Recommender system. Collaborative filtering.
Anticipatory shipping.
BlackPOS Malware.
Data Encryption Standard algorithm. EFF DES cracker.
Advanced Encryption Standard.
Tempora. PRISM (surveillance program). Edward Snowden. WikiLeaks. Tor (anonymity network). Silk Road (marketplace). Deep web. Internet of Things.
Songdo International Business District. Smart City.
United Nations Global Pulse.

July 19, 2018 Posted by | Books, Computer science, Cryptography, Data, Engineering, Epidemiology, Statistics

Frontiers in Statistical Quality Control (I)

“The XIth International Workshop on Intelligent Statistical Quality Control took place in Sydney, Australia from August 20 to August 23, 2013. […] The 23 papers in this volume were carefully selected by the scientific program committee, reviewed by its members, revised by the authors and, finally, adapted by the editors for this volume. The focus of the book lies on three major areas of statistical quality control: statistical process control (SPC), acceptance sampling and design of experiments. The majority of the papers deal with statistical process control while acceptance sampling and design of experiments are treated to a lesser extent.”

I’m currently reading this book. It’s quite technical and a bit longer than many of the other non-fiction books I’ve read this year (…but shorter than others; however, it is still ~400 pages of content exclusively devoted to statistical papers), so it may take me a while to finish it. I figured that the prospect of not finishing the book for a while was not a good argument against blogging relevant sections of it now, especially as it’s already been some time since I read the first few chapters.

When reading a book like this one I care a lot more about understanding the concepts than about understanding the proofs, so as usual the amount of math included in the post is limited; please don’t assume it’s because there are no equations in the book.

Below I have added some ideas and observations from the first 100 pages or so of the book’s coverage.

“A growing number of [statistical quality control] applications involve monitoring with rare event data. […] The most common approaches for monitoring such processes involve using an exponential distribution to model the time between the events or using a Bernoulli distribution to model whether or not each opportunity for the event results in its occurrence. The use of a sequence of independent Bernoulli random variables leads to a geometric distribution for the number of non-occurrences between the occurrences of the rare events. One surveillance method is to use a power transformation on the exponential or geometric observations to achieve approximate normality of the in control distribution and then use a standard individuals control chart. We add to the argument that use of this approach is very counterproductive and cover some alternative approaches. We discuss the choice of appropriate performance metrics. […] Most often the focus is on detecting process deterioration, i.e., an increase in the probability of the adverse event or a decrease in the average time between events. Szarka and Woodall (2011) reviewed the extensive number of methods that have been proposed for monitoring processes using Bernoulli data. Generally, it is difficult to better the performance of the Bernoulli cumulative sum (CUSUM) chart of Reynolds and Stoumbos (1999). The Bernoulli and geometric CUSUM charts can be designed to be equivalent […] Levinson (2011) argued that control charts should not be used with healthcare rare event data because in many situations there is an assignable cause for each error, e.g., each hospital-acquired infection or serious prescription error, and each incident should be investigated. We agree that serious adverse events should be investigated whether or not they result in a control chart signal. The investigation of rare adverse events, however, and the implementation of process improvements to prevent future such errors, does not preclude using a control chart to determine if the rate of such events has increased or decreased over time. In fact, a control chart can be used to evaluate the success of any process improvement initiative.”
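
For readers who want to see what a Bernoulli CUSUM actually computes, here is a minimal sketch of my own (not taken from the book, and not a reproduction of Reynolds and Stoumbos' design): an upper-sided CUSUM that accumulates the log-likelihood ratio of a specified out-of-control probability p1 against the in-control probability p0 and signals when the sum crosses a decision limit h. The values of p0, p1, and h below are arbitrary placeholders; in practice h would be chosen to give a desired in-control ANOS.

```python
import math

def bernoulli_cusum(trials, p0=0.005, p1=0.01, h=5.0):
    """Upper-sided CUSUM on Bernoulli data (1 = adverse event, 0 = none).

    Each trial adds the log-likelihood ratio of p1 vs. p0 to a cumulative
    sum that is truncated at zero from below; a signal is raised when the
    sum exceeds the decision limit h.  Returns the index of the first
    signal, or None if no signal occurs.  (Defaults are illustrative only.)
    """
    up = math.log(p1 / p0)                # increment when an event occurs
    down = math.log((1 - p1) / (1 - p0))  # (negative) increment otherwise

    c = 0.0
    for t, x in enumerate(trials):
        c = max(0.0, c + (up if x else down))
        if c > h:
            return t
    return None
```

Run on a stream of 0/1 outcomes, one per opportunity for the adverse event, this reacts to each observation as it arrives rather than waiting for aggregated counts, which connects to the points about data aggregation further down.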

“The choice of appropriate performance metrics for comparing surveillance schemes for monitoring Bernoulli and exponential data is quite important. The usual Average Run Length (ARL) metric refers to the average number of points plotted on the chart until a signal is given. This metric is most clearly appropriate when the time between the plotted points is constant. […] In some cases, such as in monitoring the number of near-miss accidents, it may be informative to use a metric that reflects the actual time required to obtain an out-of-control signal. Thus one can consider the number of Bernoulli trials until an out-of-control signal is given for Bernoulli data, leading to its average, the ANOS. The ANOS will be proportional to the average time before a signal if the rate at which the Bernoulli trials are observed is constant over time. For exponentially distributed data one could consider the average time to signal, the ATS. If the process is stable, then ANOS = ARL / p and ATS = ARL * θ, where p and θ are the Bernoulli probability and the exponential mean, respectively. […] To assess out-of-control performance we believe it is most realistic to consider steady-state performance where the shift in the parameter occurs at some time after monitoring has begun. […] Under this scenario one cannot easily convert the ARL metric to the ANOS and ATS metrics. Consideration of steady-state performance of competing methods is important because some methods have an implicit headstart feature that results in good zero-state performance, but poor steady-state performance.”
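
A quick numerical illustration of those conversions, with made-up numbers: if a chart that plots one point per observed event has an in-control ARL of 200 plotted points and the event probability is p = 0.002, the corresponding in-control ANOS is 200/0.002 = 100,000 Bernoulli trials; for exponential data with a mean time between events of θ = 2 days, the corresponding ATS is 200 × 2 = 400 days.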

“Data aggregation is frequently done when monitoring rare events and for count data generally. For example, one might monitor the number of accidents per month in a plant or the number of patient falls per week in a hospital. […] Schuh et al. (2013) showed […] that there can be significantly long expected delays in detecting process deterioration when data are aggregated over time even when there are few samples with zero events. One can always aggregate data over long enough time periods to avoid zero counts, but the consequence is slower detection of increases in the rate of the adverse event. […] aggregating event data over fixed time intervals, as frequently done in practice, can result in significant delays in detecting increases in the rate of adverse events. […] Another type of aggregation is to wait until one has observed a given number of events before updating a control chart based on a proportion or waiting time. […] This type of aggregation […] does not appear to delay the detection of process changes nearly as much as aggregating data over fixed time periods. […] We believe that the adverse effect of aggregating data over time has not been fully appreciated in practice and more research work is needed on this topic. Only a couple of the most basic scenarios for count data have been studied. […] Virtually all of the work on monitoring the rate of rare events is based on the assumption that there is a sustained shift in the rate. In some applications the rate change may be transient. In this scenario other performance metrics would be needed, such as the probability of detecting the process shift during the transient period. The effect of data aggregation over time might be larger if shifts in the parameter are not sustained.”

“Big data is a popular term that is used to describe the large, diverse, complex and/or longitudinal datasets generated from a variety of instruments, sensors and/or computer-based transactions. […] The acquisition of data does not automatically transfer to new knowledge about the system under study. […] To be able to gain knowledge from big data, it is imperative to understand both the scale and scope of big data. The challenges with processing and analyzing big data are not only limited to the size of the data. These challenges include the size, or volume, as well as the variety and velocity of the data (Zikopoulos et al. 2012). Known as the 3V’s, the volume, variety, and/or velocity of the data are the three main characteristics that distinguish big data from the data we have had in the past. […] Many have suggested that there are more V’s that are important to the big data problem such as veracity and value (IEEE BigData 2013). Veracity refers to the trustworthiness of the data, and value refers to the value that the data adds to creating knowledge about a topic or situation. While we agree that these are important data characteristics, we do not see these as key features that distinguish big data from regular data. It is important to evaluate the veracity and value of all data, both big and small. Both veracity and value are related to the concept of data quality, an important research area in the Information Systems (IS) literature for more than 50 years. The research literature discussing the aspects and measures of data quality is extensive in the IS field, but seems to have reached a general agreement that the multiple aspects of data quality can be grouped into several broad categories […]. Two of the categories relevant here are contextual and intrinsic dimensions of data quality. Contextual aspects of data quality are context specific measures that are subjective in nature, including concepts like value-added, believability, and relevance. […] Intrinsic aspects of data quality are more concrete in nature, and include four main dimensions: accuracy, timeliness, consistency, and completeness […] From our perspective, many of the contextual and intrinsic aspects of data quality are related to the veracity and value of the data. That said, big data presents new challenges in conceptualizing, evaluating, and monitoring data quality.”

“The application of SPC methods to big data is similar in many ways to the application of SPC methods to regular data. However, many of the challenges inherent to properly studying and framing a problem can be more difficult in the presence of massive amounts of data. […] it is important to note that building the model is not the end-game. The actual use of the analysis in practice is the goal. Thus, some consideration needs to be given to the actual implementation of the statistical surveillance applications. This brings us to another important challenge, that of the complexity of many big data applications. SPC applications have a tradition of back of the napkin methods. The custom within SPC practice is the use of simple methods that are easy to explain like the Shewhart control chart. These are often the best methods to use to gain credibility because they are easy to understand and easy to explain to a non-statistical audience. However, big data often does not lend itself to easy-to-compute or easy-to-explain methods. While a control chart based on a neural net may work well, it may be so difficult to understand and explain that it may be abandoned for inferior, yet simpler methods. Thus, it is important to consider the dissemination and deployment of advanced analytical methods in order for them to be effectively used in practice. […] Another challenge in monitoring high dimensional data sets is the fact that not all of the monitored variables are likely to shift at the same time; thus, some method is necessary to identify the process variables that have changed. In high dimensional data sets, the decomposition methods used with multivariate control charts can become very computationally expensive. Several authors have considered variable selection methods combined with control charts to quickly detect process changes in a variety of practical scenarios including fault detection, multistage processes, and profile monitoring. […] All of these methods based on variable selection techniques are based on the idea of monitoring subsets of potentially faulty variables. […] Some variable reduction methods are needed to better identify shifts. We believe that further work in the areas combining variable selection methods and surveillance are important for quickly and efficiently diagnosing changes in high-dimensional data.”

“A multiple stream process (MSP) is a process that generates several streams of output. From the statistical process control standpoint, the quality variable and its specifications are the same in all streams. A classical example is a filling process such as the ones found in beverage, cosmetics, pharmaceutical and chemical industries, where a filler machine may have many heads. […] Although multiple-stream processes are found very frequently in industry, the literature on schemes for the statistical control of such kind of processes is far from abundant. This paper presents a survey of the research on this topic. […] The first specific techniques for the statistical control of MSPs are the group control charts (GCCs) […] Clearly the chief motivation for these charts was to avoid the proliferation of control charts that would arise if every stream were controlled with a separate pair of charts (one for location and other for spread). Assuming the in-control distribution of the quality variable to be the same in all streams (an assumption which is sometimes too restrictive), the control limits should be the same for every stream. So, the basic idea is to build only one chart (or a pair of charts) with the information from all streams.”
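
As a rough sketch of the group-control-chart idea (my own illustration, not from the surveyed papers): at each sampling time a subgroup is taken from every stream, but only the largest and smallest stream means are plotted against a single, shared pair of limits. In the toy version below the limits are ordinary three-sigma limits; in a real GCC the limit width would be adjusted for the number of streams so that the overall false-alarm rate stays under control.

```python
import numpy as np

def group_chart_point(samples, mu0, sigma0, k=3.0):
    """One sampling time of a group control chart for stream means.

    samples : array of shape (n_streams, n) -- one subgroup of size n per stream
    mu0     : common in-control mean of the quality variable
    sigma0  : common in-control standard deviation
    k       : limit width in standard errors (3 used here purely for illustration)
    """
    n = samples.shape[1]
    stream_means = samples.mean(axis=1)
    se = sigma0 / np.sqrt(n)
    ucl, lcl = mu0 + k * se, mu0 - k * se
    return {
        "max_mean": stream_means.max(),   # only the extremes are plotted
        "min_mean": stream_means.min(),
        "signal": stream_means.max() > ucl or stream_means.min() < lcl,
    }
```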

“The GCC will work well if the values of the quality variable in the different streams are independent and identically distributed, that is, if there is no cross-correlation between streams. However, such an assumption is often unrealistic. In many real multiple-stream processes, the value of the observed quality variable is typically better described as the sum of two components: a common component (let’s refer to it as “mean level”), exhibiting variation that affects all streams in the same way, and the individual component of each stream, which corresponds to the difference between the stream observation and the common mean level. […] [T]he presence of the mean level component leads to reduced sensitivity of Boyd’s GCC to shifts in the individual component of a stream if the variance […] of the mean level is large with respect to the variance […] of the individual stream components. Moreover, the GCC is a Shewhart-type chart; if the data exhibit autocorrelation, the traditional form of estimating the process standard deviation (for establishing the control limits) based on the average range or average standard deviation of individual samples (even with the Bonferroni or Dunn-Sidak correction) will result in too frequent false alarms, due to the underestimation of the process total variance. […] [I]n the converse situation […] the GCC will have little sensitivity to causes that affect all streams — at least, less sensitivity than would have a chart on the average of the measurements across all streams, since this one would have tighter limits than the GCC. […] Therefore, to monitor MSPs with the two components described, Mortell and Runger (1995) proposed using two control charts: First, a chart for the grand average between streams, to monitor the mean level. […] For monitoring the individual stream components, they proposed using a special range chart (Rt chart), whose statistic is the range between streams, that is, the difference between the largest stream average and the smallest stream average […] the authors commented that both the chart on the average of all streams and the Rt chart can be used even when at each sampling time only a subset of the streams are sampled (provided that the number of streams sampled remains constant). The subset can be varied periodically or even chosen at random. […] it is common in practice to measure only a subset of streams at each sampling time, especially when the number of streams is large. […] Although almost the totality of Mortell and Runger’s paper is about the monitoring of the individual streams, the importance of the chart on the average of all streams for monitoring the mean level of the process cannot be overemphasized.”
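
The two statistics of Mortell and Runger's scheme are easy to write down; the sketch below is mine, not theirs, and the control-limit calculations are omitted. For one sampling time it computes the grand average across streams (for the mean-level chart) and the range between the largest and smallest stream averages (the Rt statistic).

```python
import numpy as np

def mortell_runger_point(samples):
    """Plotting statistics for Mortell and Runger's two-chart MSP scheme.

    samples : array of shape (n_streams, n) -- one subgroup per sampled stream.
    Returns (grand_average, rt): the grand average monitors the common mean
    level; the between-streams range Rt is sensitive to a shift in any
    individual stream relative to the others.
    """
    stream_means = samples.mean(axis=1)
    grand_average = stream_means.mean()
    rt = stream_means.max() - stream_means.min()
    return grand_average, rt
```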

“Epprecht and Barros (2013) studied a filling process application where the stream variances were similar, but the stream means differed, wandered, changed from day to day, were very difficult to adjust, and the production runs were too short to enable good estimation of the parameters of the individual streams. The solution adopted to control the process was to adjust the target above the nominal level to compensate for the variation between streams, as a function of the lower specification limit, of the desired false-alarm rate and of a point (shift, power) arbitrarily selected. This would be an MSP version of “acceptance control charts” (Montgomery 2012, Sect. 10.2) if taking samples with more than one observation per stream [is] feasible.”

“Most research works consider a small to moderate number of streams. Some processes may have hundreds of streams, and in this case the issue of how to control the false-alarm rate while keeping enough detection power […] becomes a real problem. […] Real multiple-stream processes can be very ill-behaved. The author of this paper has seen a plant with six 20-stream filling processes in which the stream levels had different means and variances and could not be adjusted separately (one single pump and 20 hoses). For many real cases with particular twists like this one, it happens that no previous solution in the literature is applicable. […] The appropriateness and efficiency of [different monitoring methods] depends on the dynamic behaviour of the process over time, on the degree of cross-correlation between streams, on the ratio between the variabilities of the individual streams and of the common component (note that these three factors are interrelated), on the type and size of shifts that are likely and/or relevant to detect, on the ease or difficulty to adjust all streams in the same target, on the process capability, on the number of streams, on the feasibility of taking samples of more than one observation per stream at each sampling time (or even the feasibility of taking one observation of every stream at each sampling time!), on the length of the production runs, and so on. So, the first problem in a practical application is to characterize the process and select the appropriate monitoring scheme (or to adapt one, or to develop a new one). This analysis may not be trivial for the average practitioner in industry. […] Jirasettapong and Rojanarowan (2011) is the only work I have found on the issue of selecting the most suitable monitoring scheme for an MSP. It considers only a limited number of alternative schemes and a few aspects of the problem. More comprehensive analyses are needed.”

June 27, 2018 Posted by | Books, Data, Engineering, Statistics

Oceans (II)

In this post I have added some more observations from the book and some more links related to the book‘s coverage.

“Almost all the surface waves we observe are generated by wind stress, acting either locally or far out to sea. Although the wave crests appear to move forwards with the wind, this does not occur. Mechanical energy, created by the original disturbance that caused the wave, travels through the ocean at the speed of the wave, whereas water does not. Individual molecules of water simply move back and forth, up and down, in a generally circular motion. […] The greater the wind force, the bigger the wave, the more energy stored within its bulk, and the more energy released when it eventually breaks. The amount of energy is enormous. Over long periods of time, whole coastlines retreat before the pounding waves – cliffs topple, rocks are worn to pebbles, pebbles to sand, and so on. Individual storm waves can exert instantaneous pressures of up to 30,000 kilograms […] per square metre. […] The rate at which energy is transferred across the ocean is the same as the velocity of the wave. […] waves typically travel at speeds of 30–40 kilometres per hour, and […] waves with a greater wavelength will travel faster than those with a shorter wavelength. […] With increasing wind speed and duration over which the wind blows, the wave height, period, and length all increase. The distance over which the wind blows is known as fetch, and is critical in influencing the growth of waves — the greater the area of ocean over which a storm blows, then the larger and more powerful the waves generated. The three stages in wave development are known as sea, swell, and surf. […] The ocean is highly efficient at transmitting energy. Water offers so little resistance to the small orbital motion of water particles in waves that individual wave trains may continue for thousands of kilometres. […] When the wave train encounters shallow water — say 50 metres for a 100-metre wavelength — the waves first feel the bottom and begin to slow down in response to frictional resistance. Wavelength decreases, the crests bunch closer together, and wave height increases until the wave becomes unstable and topples forwards as surf. […] Very often, waves approach obliquely to the coast and set up a significant transfer of water and sediment along the shoreline. The long-shore currents so developed can be very powerful, removing beach sand and building out spits and bars across the mouths of estuaries.” (People who’re interested in knowing more about these topics will probably enjoy Fredric Raichlen’s book – I did, US.)
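
The claim that longer waves travel faster holds for deep water and follows from the standard deep-water dispersion relation, c = √(gλ/2π); the book does not give the formula, so the quick check below is my own.

```python
import math

def deep_water_speed(wavelength_m, g=9.81):
    """Phase speed of a surface wave in deep water: c = sqrt(g * L / (2 * pi))."""
    return math.sqrt(g * wavelength_m / (2 * math.pi))

for L in (50, 100, 200):
    c = deep_water_speed(L)
    print(f"wavelength {L:3d} m -> {c:4.1f} m/s  (~{c * 3.6:3.0f} km/h)")
```

For typical swell wavelengths of 50–100 metres this gives roughly 30–45 km/h, consistent with the figures quoted above. Once the water becomes shallow relative to the wavelength, the speed is governed by depth instead (roughly √(g·d)), which is why waves slow down, bunch up, and steepen as they approach the shore.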

“Wind is the principal force that drives surface currents, but the pattern of circulation results from a more complex interaction of wind drag, pressure gradients, and Coriolis deflection. Wind drag is a very inefficient process by which the momentum of moving air molecules is transmitted to water molecules at the ocean surface setting them in motion. The speed of water molecules (the current), initially in the direction of the wind, is only about 3–4 per cent of the wind speed. This means that a wind blowing constantly over a period of time at 50 kilometres per hour will produce a water current of about 1 knot (2 kilometres per hour). […] Although the movement of wind may seem random, changing from one day to the next, surface winds actually blow in a very regular pattern on a planetary scale. The subtropics are known for the trade winds with their strong easterly component, and the mid-latitudes for persistent westerlies. Wind drag by such large-scale wind systems sets the ocean waters in motion. The trade winds produce a pair of equatorial currents moving to the west in each ocean, while the westerlies drive a belt of currents that flow to the east at mid-latitudes in both hemispheres. […] Deflection by the Coriolis force and ultimately by the position of the continents creates very large oval-shaped gyres in each ocean.”

“The control exerted by the oceans is an integral and essential part of the global climate system. […] The oceans are one of the principal long-term stores on Earth for carbon and carbon dioxide […] The oceans are like a gigantic sponge holding fifty times more carbon dioxide than the atmosphere […] the sea surface acts as a two-way control valve for gas transfer, which opens and closes in response to two key properties – gas concentration and ocean stirring. First, the difference in gas concentration between the air and sea controls the direction and rate of gas exchange. Gas concentration in water depends on temperature—cold water dissolves more carbon dioxide than warm water, and on biological processes—such as photosynthesis and respiration by microscopic plants, animals, and bacteria that make up the plankton. These transfer processes affect all gases […]. Second, the strength of the ocean-stirring process, caused by wind and foaming waves, affects the ease with which gases are absorbed at the surface. More gas is absorbed during stormy weather and, once dissolved, is quickly mixed downwards by water turbulence. […] The transfer of heat, moisture, and other gases between the ocean and atmosphere drives small-scale oscillations in climate. The El Niño Southern Oscillation (ENSO) is the best known, causing 3–7-year climate cycles driven by the interaction of sea-surface temperature and trade winds along the equatorial Pacific. The effects are worldwide in their impact through a process of atmospheric teleconnection — causing floods in Europe and North America, monsoon failure and severe drought in India, South East Asia, and Australia, as well as decimation of the anchovy fishing industry off Peru.”

“Earth’s climate has not always been as it is today […] About 100 million years ago, for example, palm trees and crocodiles lived as far north as 80°N – the equivalent of Arctic Canada or northern Greenland today. […] Most of the geological past has enjoyed warm conditions. These have been interrupted at irregular intervals by cold and glacial climates of altogether shorter duration […][,] the last [of them] beginning around 3 million years ago. We are still in the grip of this last icehouse state, although in one of its relatively brief interglacial phases. […] Sea level has varied in the past in close consort with climate change […]. Around twenty-five thousand years ago, at the height of the last Ice Age, the global sea level was 120 metres lower than today. Huge tracts of the continental shelves that rim today’s landmasses were exposed. […] Further back in time, 80 million years ago, the sea level was around 250–350 metres higher than today, so that 82 per cent of the planet was ocean and only 18 per cent remained as dry land. Such changes have been the norm throughout geological history and entirely the result of natural causes.”

“Most of the solar energy absorbed by seawater is converted directly to heat, and water temperature is vital for the distribution and activity of life in the oceans. Whereas mean temperature ranges from 0 to 40 degrees Celsius, 90 per cent of the oceans are permanently below 5°C. Most marine animals are ectotherms (cold-blooded), which means that they obtain their body heat from their surroundings. They generally have narrow tolerance limits and are restricted to particular latitudinal belts or water depths. Marine mammals and birds are endotherms (warm-blooded), which means that their metabolism generates heat internally thereby allowing the organism to maintain constant body temperature. They can tolerate a much wider range of external conditions. Coping with the extreme (hydrostatic) pressure exerted at depth within the ocean is a challenge. For every 30 metres of water, the pressure increases by 3 atmospheres – roughly equivalent to the weight of an elephant.”

“There are at least 6000 different species of diatom. […] An average litre of surface water from the ocean contains over half a million diatoms and other unicellular phytoplankton and many thousands of zooplankton.”

“Several different styles of movement are used by marine organisms. These include floating, swimming, jet propulsion, creeping, crawling, and burrowing. […] The particular physical properties of water that most affect movement are density, viscosity, and buoyancy. Seawater is about 800 times denser than air and nearly 100 times more viscous. Consequently there is much more resistance on movement than on land […] Most large marine animals, including all fishes and mammals, have adopted some form of active swimming […]. Swimming efficiency in fishes has been achieved by minimizing the three types of drag resistance created by friction, turbulence, and body form. To reduce surface friction, the body must be smooth and rounded like a sphere. The scales of most fish are also covered with slime as further lubrication. To reduce form drag, the cross-sectional area of the body should be minimal — a pencil shape is ideal. To reduce the turbulent drag as water flows around the moving body, a rounded front end and tapered rear is required. […] Fins play a versatile role in the movement of a fish. There are several types including dorsal fins along the back, caudal or tail fins, and anal fins on the belly just behind the anus. Operating together, the beating fins provide stability and steering, forwards and reverse propulsion, and braking. They also help determine whether the motion is up or down, forwards or backwards.”

Links:

Rip current.
Rogue wave. Agulhas Current. Kuroshio Current.
Tsunami.
Tide. Tidal range.
Geostrophic current.
Ekman Spiral. Ekman transport. Upwelling.
Global thermohaline circulation system. Antarctic bottom water. North Atlantic Deep Water.
Rio Grande Rise.
Denmark Strait. Denmark Strait cataract (/waterfall?).
Atmospheric circulation. Jet streams.
Monsoon.
Cyclone. Tropical cyclone.
Ozone layer. Ozone depletion.
Milankovitch cycles.
Little Ice Age.
Oxygen Isotope Stratigraphy of the Oceans.
Contourite.
Earliest known life forms. Cyanobacteria. Prokaryote. Eukaryote. Multicellular organism. Microbial mat. Ediacaran. Cambrian explosion. Pikaia. Vertebrate. Major extinction events. Permian–Triassic extinction event. (The author seems to disagree with the authors of this article about potential causes, in particular in so far as they relate to the formation of Pangaea – as I felt uncertain about the accuracy of the claims made in the book I decided against covering this topic in this post, even though I find it interesting).
Tethys Ocean.
Plesiosauria. Pliosauroidea. Ichthyosaur. Ammonoidea. Belemnites. Pachyaena. Cetacea.
Pelagic zone. Nekton. Benthic zone. Neritic zone. Oceanic zone. Bathyal zone. Hadal zone.
Phytoplankton. Silicoflagellates. Coccolithophore. Dinoflagellate. Zooplankton. Protozoa. Tintinnid. Radiolaria. Copepods. Krill. Bivalves.
Elasmobranchii.
Ampullae of Lorenzini. Lateral line.
Baleen whale. Humpback whale.
Coral reef.
Box jellyfish. Stonefish.
Horseshoe crab.
Greenland shark. Giant squid.
Hydrothermal vent. Pompeii worms.
Atlantis II Deep. Aragonite. Phosphorite. Deep sea mining. Oil platform. Methane clathrate.
Ocean thermal energy conversion. Tidal barrage.
Mariculture.
Exxon Valdez oil spill.
Bottom trawling.

June 24, 2018 Posted by | Biology, Books, Engineering, Geology, Paleontology, Physics

Robotics

“This book is not about the psychology or cultural anthropology of robotics, interesting as those are. I am an engineer and roboticist, so I confine myself firmly to the technology and application of real physical robots. […] robotics is the study of the design, application, and use of robots, and that is precisely what this Very Short Introduction is about: what robots do and what roboticists do.”

The above quote is from the book‘s preface; the book is quite decent and occasionally really quite fascinating. Below I have added some sample quotes and links to topics/stuff covered in the book.

“Some or all of […] five functions – sensing, signalling, moving, intelligence, and energy, integrated into a body – are present in all robots. The actual sensors, motors, and behaviours designed into a particular robot body shape depend on the job that robot is designed to do. […] A robot is: 1. an artificial device that can sense its environment and purposefully act on or in that environment; 2. an embodied artificial intelligence; or 3. a machine that can autonomously carry out useful work. […] Many real-world robots […] are not autonomous but remotely operated by humans. […] These are also known as tele-operated robots. […] From a robot design point of view, the huge advantage of tele-operated robots is that the human in the loop provides the robot’s ‘intelligence’. One of the most difficult problems in robotics — the design of the robot’s artificial intelligence — is therefore solved, so it’s not surprising that so many real-world robots are tele-operated. The fact that tele-operated robots alleviate the problem of AI design should not fool us into making the mistake of thinking that tele-operated robots are not sophisticated — they are. […] counter-intuitively, autonomous robots are often simpler than tele-operated robots […] When roboticists talk about autonomous robots they normally mean robots that decide what to do next entirely without human intervention or control. We need to be careful here because they are not talking about true autonomy, in the sense that you or I would regard ourselves as self-determining individuals, but what I would call ‘control autonomy’. By control autonomy I mean that the robot can undertake its task, or mission, without human intervention, but that mission is still programmed or commanded by a human. In fact, there are very few robots in use in the real world that are autonomous even in this limited sense. […] It is helpful to think about a spectrum of robot autonomy, from remotely operated at one end (no autonomy) to fully autonomous at the other. We can then place robots on this spectrum according to their degree of autonomy. […] On a scale of autonomy, a robot that can react on its own in response to its sensors is highly autonomous. A robot that cannot react, perhaps because it doesn’t have any sensors, is not.”

“It is […] important to note that autonomy and intelligence are not the same thing. A robot can be autonomous but not very smart, like a robot vacuum cleaner. […] A robot vacuum cleaner has a small number of preprogrammed (i.e. instinctive) behaviours and is not capable of any kind of learning […] These are characteristics we would associate with very simple animals. […] When roboticists describe a robot as intelligent, what they mean is ‘a robot that behaves, in some limited sense, as if it were intelligent’. The words as if are important here. […] There are basically two ways in which we can make a robot behave as if it is more intelligent: 1. preprogram a larger number of (instinctive) behaviours; and/or 2. design the robot so that it can learn and therefore develop and grow its own intelligence. The first of these approaches is fine, providing that we know everything there is to know about what the robot must do and all of the situations it will have to respond to while it is working. Typically we can only do this if we design both the robot and its operational environment. […] For unstructured environments, the first approach to robot intelligence above is infeasible simply because it’s impossible to anticipate every possible situation a robot might encounter, especially if it has to interact with humans. The only solution is to design a robot so that it can learn, either from its own experience or from humans or other robots, and therefore adapt and develop its own intelligence: in effect, grow its behavioural repertoire to be able to respond appropriately to more and more situations. This brings us to the subject of learning robots […] robot learning or, more generally, ‘machine learning’ — a branch of AI — has proven to be very much harder than was expected in the early days of Artificial Intelligence.”

“Robot arms on an assembly line are typically programmed to go through a fixed sequence of moves over and over again, for instance spot-welding car body panels, or spray-painting the complete car. These robots are therefore not intelligent. In fact, they often have no exteroceptive sensors at all. […] when we see an assembly line with multiple robot arms positioned on either side along a line, we need to understand that the robots are part of an integrated automated manufacturing system, in which each robot and the line itself have to be carefully programmed in order to coordinate and choreograph the whole operation. […] An important characteristic of assembly-line robots is that they require the working environment to be designed for and around them, i.e. a structured environment. They also need that working environment to be absolutely predictable and repeatable. […] Robot arms either need to be painstakingly programmed, so that the precise movement required of each joint is worked out and coded into a set of instructions for the robot arm or, more often (and rather more easily), ‘taught’ by a human using a control pad to move its end-effector (hand) to the required positions in the robot’s workspace. The robot then memorizes the set of joint movements so that they can be replayed (over and over again). The human operator teaching the robot controls the trajectory, i.e. the path the robot arm’s end-effector follows as it moves through its 3D workspace, and a set of mathematical equations called the ‘inverse kinematics’ converts the trajectory into a set of individual joint movements. Using this approach, it is relatively easy to teach a robot arm to pick up an object and move it smoothly to somewhere else in its workspace while keeping the object level […]. However […] most real-world robot arms are unable to sense the weight of the object and automatically adjust accordingly. They are simply designed with stiff enough joints and strong enough motors that, whatever the weight of the object (providing it’s within the robot’s design limits), it can be lifted, moved, and placed with equal precision. […] The robot arm and gripper are a foundational technology in robotics. Not only are they extremely important as […] industrial assembly-line robot[s], but they have become a ‘component’ in many areas of robotics.”
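
To give a flavour of what the inverse-kinematics step involves, here is a minimal sketch of my own (not from the book) for the simplest interesting case, a planar arm with two rotary joints; industrial six-axis arms use the same idea with considerably more involved equations and several solution branches. The link lengths and way-points are arbitrary illustrative values.

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Closed-form inverse kinematics for a planar arm with two rotary joints.

    Given a desired end-effector position (x, y) and link lengths l1, l2 (metres),
    return joint angles (theta1, theta2) in radians for one of the two possible
    arm configurations, or None if the point is outside the workspace.
    """
    cos_t2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_t2) > 1.0:
        return None  # target out of reach
    theta2 = math.atan2(-math.sqrt(1.0 - cos_t2 ** 2), cos_t2)   # one 'elbow' branch
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# A taught trajectory is then just a sequence of way-points, each converted to
# joint angles and replayed by the joint controllers.
waypoints = [(0.5, 0.1), (0.5, 0.2), (0.4, 0.3)]
joint_path = [two_link_ik(px, py) for px, py in waypoints]
```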

“Planetary rovers are tele-operated mobile robots that present the designer and operator with a number of very difficult challenges. One challenge is power: a planetary rover needs to be energetically self-sufficient for the lifetime of its mission, and must either be launched with a power source or — as in the case of the Mars rovers — fitted with solar panels capable of recharging the rover’s on-board batteries. Another challenge is dependability. Any mechanical fault is likely to mean the end of the rover’s mission, so it needs to be designed and built to exceptional standards of reliability and fail-safety, so that if parts of the rover should fail, the robot can still operate, albeit with reduced functionality. Extremes of temperature are also a problem […] But the greatest challenge is communication. With a round-trip signal delay time of twenty minutes to Mars and back, tele-operating the rover in real time is impossible. If the rover is moving and its human operator in the command centre on Earth reacts to an obstacle, it’s likely to be already too late; the robot will have hit the obstacle by the time the command signal to turn reaches the rover. An obvious answer to this problem would seem to be to give the rover a degree of autonomy so that it could, for instance, plan a path to a rock or feature of interest — while avoiding obstacles — then, when it arrives at the point of interest, call home and wait. Although path-planning algorithms capable of this level of autonomy have been well developed, the risk of a failure of the algorithm (and hence perhaps the whole mission) is deemed so high that in practice the rovers are manually tele-operated, at very low speed, with each manual manoeuvre carefully planned. When one also takes into account the fact that the Mars rovers are contactable only for a three-hour window per Martian day, a traverse of 100 metres will typically take up one day of operation at an average speed of 30 metres per hour.”

“The realization that the behaviour of an autonomous robot is an emergent property of its interactions with the world has important and far-reaching consequences for the way we design autonomous robots. […] when we design robots, and especially when we come to decide what behaviours to programme the robot’s AI with, we cannot think about the robot on its own. We must take into account every detail of the robot’s working environment. […] Like all machines, robots need power. For fixed robots, like the robot arms used for manufacture, power isn’t a problem because the robot is connected to the electrical mains supply. But for mobile robots power is a huge problem because mobile robots need to carry their energy supply around with them, with problems of both the size and weight of the batteries and, more seriously, how to recharge those batteries when they run out. For autonomous robots, the problem is acute because a robot cannot be said to be truly autonomous unless it has energy autonomy as well as computational autonomy; there seems little point in building a smart robot that ‘dies’ when its battery runs out. […] Localization is a[nother] major problem in mobile robotics; in other words, how does a robot know where it is, in 2D or 3D space. […] [One] type of robot learning is called reinforcement learning. […] it is a kind of conditioned learning. If a robot is able to try out several different behaviours, test the success or failure of each behaviour, then ‘reinforce’ the successful behaviours, it is said to have reinforcement learning. Although this sounds straightforward in principle, it is not. It assumes, first, that a robot has at least one successful behaviour in its list of behaviours to try out, and second, that it can test the benefit of each behaviour — in other words, that the behaviour has an immediate measurable reward. If a robot has to try every possible behaviour or if the rewards are delayed, then this kind of so-called ‘unsupervised’ individual robot learning is very slow.”
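
As an illustration of the kind of reinforcement learning described here, under the most favourable assumptions the passage mentions (a fixed repertoire of candidate behaviours and an immediate scalar reward after each trial), here is a toy sketch of my own, not something from the book:

```python
import random

class BehaviourLearner:
    """Toy reinforcement learner over a fixed repertoire of behaviours.

    The robot tries behaviours, receives an immediate reward for each trial,
    and 'reinforces' the ones that work by keeping a running estimate of each
    behaviour's value; epsilon controls how often it still explores.
    """
    def __init__(self, behaviours, epsilon=0.1):
        self.values = {b: 0.0 for b in behaviours}
        self.counts = {b: 0 for b in behaviours}
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:             # occasional exploration
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)   # otherwise exploit the best so far

    def update(self, behaviour, reward):
        self.counts[behaviour] += 1
        n = self.counts[behaviour]
        # Incremental average of the rewards observed for this behaviour.
        self.values[behaviour] += (reward - self.values[behaviour]) / n
```

Relaxing either assumption, i.e. having no initially rewarding behaviour in the repertoire, or rewards that arrive only long after the behaviour that earned them, is exactly what makes unsupervised robot learning so slow in practice.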

“A robot is described as humanoid if it has a shape or structure that to some degree mimics the human form. […] A small subset of humanoid robots […] attempt a greater degree of fidelity to the human form and appearance, and these are referred to as android. […] It is a recurring theme of this book that robot intelligence technology lags behind robot mechatronics – and nowhere is the mismatch between the two so starkly evident as it is in android robots. The problem is that if a robot looks convincingly human, then we (not unreasonably) expect it to behave like a human. For this reason whole-body android robots are, at the time of writing, disappointing. […] It is important not to overstate the case for humanoid robots. Without doubt, many potential applications of robots in human work- or living spaces would be better served by non-humanoid robots. The humanoid robot to use human tools argument doesn’t make sense if the job can be done autonomously. It would be absurd, for instance, to design a humanoid robot in order to operate a vacuum cleaner designed for humans. Similarly, if we want a driverless car, it doesn’t make sense to build a humanoid robot that sits in the driver’s seat. It seems that the case for humanoid robots is strongest when the robots are required to work alongside, learn from, and interact closely with humans. […] One of the most compelling reasons why robots should be humanoid is for those applications in which the robot has to interact with humans, work in human workspaces, and use tools or devices designed for humans.”

“…to put it bluntly, sex with a robot might not be safe. As soon as a robot has motors and moving parts, then assuring the safety of human-robot interaction becomes a difficult problem and if that interaction is intimate, the consequences of a mechanical or control systems failure could be serious.”

“All of the potential applications of humanoid robots […] have one thing in common: close interaction between human and robot. The nature of that interaction will be characterized by close proximity and communication via natural human interfaces – speech, gesture, and body language. Human and robot may or may not need to come into physical contact, but even when direct contact is not required they will still need to be within each other’s body space. It follows that robot safety, dependability, and trustworthiness are major issues for the robot designer. […] making a robot safe isn’t the same as making it trustworthy. One person trusts another if, generally speaking, that person is reliable and does what they say they will. So if I were to provide a robot that helps to look after your grandmother and I claim that it is perfectly safe — that it’s been designed to cover every risk or hazard — would you trust it? The answer is probably not. Trust in robots, just as in humans, has to be earned. […for more on these topics, see this post – US] […] trustworthiness cannot just be designed into the robot — it has to be earned by use and by experience. Consider a robot intended to fetch drinks for an elderly person. Imagine that the person calls for a glass of water. The robot then needs to fetch the drink, which may well require the robot to find a glass and fill it with water. Those tasks require sensing, dexterity, and physical manipulation, but they are problems that can be solved with current technology. The problem of trust arises when the robot brings the glass of water to the human. How does the robot give the glass to the human? If the robot has an arm so that it can hold out the glass in the same way a human would, how would the robot know when to let go? The robot clearly needs sensors in order to see and feel when the human has taken hold of the glass. The physical process of a robot handing something to a person is fraught with difficulty. Imagine, for instance, that the robot holds out its arm with the glass but the human can’t reach the glass. How does the robot decide where and how far it would be safe to bring its arm toward the person? What if the human takes hold of the glass but then the glass slips; does the robot let it fall or should it — as a human would — renew its grip on the glass? At what point would the robot decide the transaction has failed: it can’t give the glass of water to the person, or they won’t take it; perhaps they are asleep, or simply forgotten they wanted a glass of water, or confused. How does the robot sense that it should give up and perhaps call for assistance? These are difficult problems in robot cognition. Until they are solved, it’s doubtful we could trust a robot sufficiently well to do even a seemingly simple thing like handing over a glass of water.”

“The fundamental problem with Asimov’s laws of robotics, or any similar construction, is that they require the robot to make judgments. […] they assume that the robot is capable of some level of moral agency. […] No robot that we can currently build, or will build in the foreseeable future, is ‘intelligent’ enough to be able to even recognize, let alone make, these kinds of choices. […] Most roboticists agree that for the foreseeable future robots cannot be ethical, moral agents. […] precisely because, as we have seen, present-day ‘intelligent’ robots are not very intelligent, there is a danger of a gap between what robot users believe those robots to be capable of and what they are actually capable of. Given humans’ propensity to anthropomorphize and form emotional attachments to machines, there is clearly a danger that such vulnerabilities could be either unwittingly or deliberately exploited. Although robots cannot be ethical, roboticists should be.”

“In robotics research, the simulator has become an essential tool of the roboticist’s trade. The reason for this is that designing, building, and testing successive versions of real robots is both expensive and time-consuming, and if part of that work can be undertaken in the virtual rather than the real world, development times can be shortened, and the chances of a robot that works first time substantially improved. A robot simulator has three essential features. First, it must provide a virtual world. Second, it must offer a facility for creating a virtual model of the real robot. And third, it must allow the robot’s controller to be installed and ‘run’ on the virtual robot in the virtual world; the controller then determines how the robot behaves when running in the simulator. The simulator should also provide a visualization of the virtual world and simulated robots in it so that the designer can see what’s going on. […] These are difficult challenges for developers of robot simulators.”
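
The three ingredients listed here map quite naturally onto code. Below is a minimal Python sketch of my own of that structure – a toy virtual world, a virtual robot model, and a pluggable controller tied together by a simulation loop – purely illustrative, and not the API of any actual simulator (real tools such as Webots, linked below, are of course far more sophisticated):

```python
import random

class VirtualWorld:
    """Feature 1: the virtual world - here just a 1D corridor with an obstacle."""
    def __init__(self, obstacle_at=6.0):
        self.obstacle_at = obstacle_at

class VirtualRobot:
    """Feature 2: a virtual model of the robot - a position plus a noisy distance sensor."""
    def __init__(self, world, position=0.0):
        self.world = world
        self.position = position

    def sense(self):
        # distance to the obstacle, with a little sensor noise
        return self.world.obstacle_at - self.position + random.gauss(0, 0.05)

    def act(self, speed, dt):
        self.position += speed * dt

def controller(distance):
    """Feature 3: the controller 'run' on the virtual robot - drive forward, stop near the obstacle."""
    return 0.5 if distance > 1.0 else 0.0

world = VirtualWorld()
robot = VirtualRobot(world)
for _ in range(200):                    # the simulation loop (a visualization would sit on top of this)
    robot.act(controller(robot.sense()), dt=0.1)
print(f"robot ended up at x = {robot.position:.2f}")  # roughly 1 m short of the obstacle
```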

“The next big step in miniaturization […] requires the solution of hugely difficult problems and, in all likelihood, the use of exotic approaches to design and fabrication. […] It is impossible to shrink mechanical and electrical components, or MEMS devices, in order to reduce total robot size to a few micrometres. In any event, the physics of locomotion through a fluid changes at the microscale and simply shrinking mechanical components from macro to micro — even if it were possible — would fail to address this problem. A radical approach is to leave behind conventional materials and components and move to a bioengineered approach in which natural bacteria are modified by adding artificial components. The result is a hybrid of artificial and natural (biological) components. The bacterium has many desirable properties for a microbot. By selecting a bacterium with a flagellum, we have locomotion perfectly suited to the medium. […] Another hugely desirable characteristic is that the bacteria are able to naturally scavenge for energy, thus avoiding the otherwise serious problem of powering the microbots. […] Whatever technology is used to create the microbots, huge problems would have to be overcome before a swarm of medical microbots could become a practical reality. The first is technical: how do surgeons or medical technicians reliably control and monitor the swarm while it’s working inside the body? Or, assuming we can give the microbots sufficient intelligence and autonomy (also a very difficult challenge), do we forgo precise control and human intervention altogether by giving the robots the swarm intelligence to be able to do the job, i.e. find the problem, fix it, then exit? […] these questions bring us to what would undoubtedly represent the greatest challenge: validating the swarm of medical microbots as effective, dependable, and above all safe, then gaining approval and public acceptance for its use. […] Do we treat the validation of the medical microbot swarm as an engineering problem, and attempt to apply the same kinds of methods we would use to validate safety-critical systems such as air traffic control systems? Or do we instead regard the medical microbot swarm as a drug and validate it with conventional and (by and large) trusted processes, including clinical trials, leading to approval and licensing for use? My suspicion is that we will need a new combination of both approaches.”

Links:

E-puck mobile robot.
Jacques de Vaucanson’s Digesting Duck.
Cybernetics.
Alan Turing. W. Ross Ashby. Norbert Wiener. Warren McCulloch. William Grey Walter.
Turtle (robot).
Industrial robot. Mechanical arm. Robotic arm. Robot end effector.
Automated guided vehicle.
Remotely operated vehicle. Unmanned aerial vehicle. Remotely operated underwater vehicle. Wheelbarrow (robot).
Robot-assisted surgery.
Lego Mindstorms NXT. NXT Intelligent Brick.
Biomimetic robots.
Artificial life.
Braitenberg vehicle.
Shakey the robot. Sense-Plan-Act. Rodney Brooks. A robust layered control system for a mobile robot.
Toto the robot.
Slugbot. Ecobot. Microbial fuel cell.
Scratchbot.
Simultaneous localization and mapping (SLAM).
Programming by demonstration.
Evolutionary algorithm.
NASA Robonaut. BERT 2. Kismet (robot). Jules (robot). Frubber. Uncanny valley.
AIBO. Paro.
Cronos Robot. ECCEROBOT.
Swarm robotics. S-bot mobile robot. Swarmanoid project.
Artificial neural network.
Symbrion.
Webots.
Kilobot.
Microelectromechanical systems. I-SWARM project.
ALICE (Artificial Linguistic Internet Computer Entity). BINA 48 (Breakthrough Intelligence via Neural Architecture 48).

June 15, 2018 Posted by | Books, Computer science, Engineering, Medicine | Leave a comment

Computers, People and the Real World

“An exploration of some spectacular failures of modern day computer-aided systems, which fail to take into account the real-world […] Almost nobody wants an IT system. What they want is a better way of doing something, whether that is buying and selling shares on the Stock Exchange, flying an airliner or running a hospital. So the system they want will usually involve changes to the way people work, and interactions with physical objects and the environment. Drawing on examples including the new programme for IT in the NHS, this lecture explores what can go wrong when business change is mistakenly viewed as an IT project.” (Quote from the video description on youtube).

Some links related to the lecture coverage:
Computer-aided dispatch.
London Ambulance Service – computerization.
Report of the Inquiry Into The London Ambulance Service (February 1993).
Sociotechnical system.
Tay (bot).

A few observations/quotes from the lecture (-notes):

The bidder who least understands the complexity of a requirement is likely to put in the lowest bid.
“It is a mistake to use a computer system to impose new work processes on under-trained or reluctant staff. – Front line staff are often best placed to judge what is practical.” [A quote from later in the lecture [~36 mins] is even more explicit: “The experts in any work process are usually the people who have been carrying it out.”]
“It is important to understand that in any system implementation the people factor is as important, and arguably more important, than the technical infrastructure.” (This last one is a full quote from the report linked above; the lecture includes a shortened version – US) [Quotes and observations above from ~16 minute mark unless otherwise noted]

“There is no such thing as an IT project”
“(almost) every significant “IT Project” is actually a business change project that is enabled and supported by one or more IT systems. Business processes are expensive to change. The business changes take at least as long and cost as much as the new IT system, and need at least as much planning and management” [~29 mins]

“Software packages are packaged business processes
– Changing a package to fit the way you want to work can cost more than writing new software” [~31-32 mins]

“Most computer systems interact with people: the sociotechnical view is that the people and the IT are two components of a larger system. Designing that larger system is the real task.” [~36 mins]

May 31, 2018 Posted by | Computer science, Economics, Engineering, Lectures | Leave a comment

Molecular biology (II)

Below I have added some more quotes and links related to the book’s coverage:

“[P]roteins are the most abundant molecules in the body except for water. […] Proteins make up half the dry weight of a cell whereas DNA and RNA make up only 3 per cent and 20 per cent respectively. […] The approximately 20,000 protein-coding genes in the human genome can, by alternative splicing, multiple translation starts, and post-translational modifications, produce over 1,000,000 different proteins, collectively called ‘the proteome‘. It is the size of the proteome and not the genome that defines the complexity of an organism. […] For simple organisms, such as viruses, all the proteins coded by their genome can be deduced from its sequence and these comprise the viral proteome. However for higher organisms the complete proteome is far larger than the genome […] For these organisms not all the proteins coded by the genome are found in any one tissue at any one time and therefore a partial proteome is usually studied. What are of interest are those proteins that are expressed in specific cell types under defined conditions.”

“Enzymes are proteins that catalyze or alter the rate of chemical reactions […] Enzymes can speed up reactions […] but they can also slow some reactions down. Proteins play a number of other critical roles. They are involved in maintaining cell shape and providing structural support to connective tissues like cartilage and bone. Specialized proteins such as actin and myosin are required [for] muscular movement. Other proteins act as ‘messengers’ relaying signals to regulate and coordinate various cell processes, e.g. the hormone insulin. Yet another class of protein is the antibodies, produced in response to foreign agents such as bacteria, fungi, and viruses.”

“Proteins are composed of amino acids. Amino acids are organic compounds with […] an amino group […] and a carboxyl group […] In addition, amino acids carry various side chains that give them their individual functions. The twenty-two amino acids found in proteins are called proteinogenic […] but other amino acids exist that are non-protein functioning. […] A peptide bond is formed between two amino acids by the removal of a water molecule. […] each individual unit in a peptide or protein is known as an amino acid residue. […] Chains of less than 50-70 amino acid residues are known as peptides or polypeptides and >50-70 as proteins, although many proteins are composed of more than one polypeptide chain. […] Proteins are macromolecules consisting of one or more strings of amino acids folded into highly specific 3D-structures. Each amino acid has a different size and carries a different side group. It is the nature of the different side groups that facilitates the correct folding of a polypeptide chain into a functional tertiary protein structure.”

“Atoms scatter the waves of X-rays mainly through their electrons, thus forming secondary or reflected waves. The pattern of X-rays diffracted by the atoms in the protein can be captured on a photographic plate or an image sensor such as a charge coupled device placed behind the crystal. The pattern and relative intensity of the spots on the diffraction image are then used to calculate the arrangement of atoms in the original protein. Complex data processing is required to convert the series of 2D diffraction or scatter patterns into a 3D image of the protein. […] The continued success and significance of this technique for molecular biology is witnessed by the fact that almost 100,000 structures of biological molecules have been determined this way, of which most are proteins.”

“The number of proteins in higher organisms far exceeds the number of known coding genes. The fact that many proteins carry out multiple functions but in a regulated manner is one way a complex proteome arises without increasing the number of genes. Proteins that performed a single role in the ancestral organism have acquired extra and often disparate functions through evolution. […] The active site of an enzyme employed in catalysis is only a small part of the protein, leaving spare capacity for acquiring a second function. […] The glycolytic pathway is involved in the breakdown of sugars such as glucose to release energy. Many of the highly conserved and ancient enzymes from this pathway have developed secondary or ‘moonlighting’ functions. Proteins often change their location in the cell in order to perform a ‘second job’. […] The limited size of the genome may not be the only evolutionary pressure for proteins to moonlight. Combining two functions in one protein can have the advantage of coordinating multiple activities in a cell, enabling it to respond quickly to changes in the environment without the need for lengthy transcription and translational processes.”

“Post-translational modifications (PTMs) […] is [a] process that can modify the role of a protein by addition of chemical groups to amino acids in the peptide chain after translation. Addition of phosphate groups (phosphorylation), for example, is a common mechanism for activating or deactivating an enzyme. Other common PTMs include addition of acetyl groups (acetylation), glucose (glucosylation), or methyl groups (methylation). […] Some additions are reversible, facilitating the switching between active and inactive states, and others are irreversible such as marking a protein for destruction by ubiquitin. [The difference between reversible and irreversible modifications can be quite important in pharmacology, and if you’re curious to know more about these topics Coleman’s drug metabolism text provides great coverage of related topics – US.] Diseases caused by malfunction of these modifications highlight the importance of PTMs. […] in diabetes [h]igh blood glucose leads to unwanted glucosylation of proteins. At the high glucose concentrations associated with diabetes, an unwanted irreversible chemical reaction binds the glucose to amino acid residues such as lysines exposed on the protein surface. The glucosylated proteins then behave badly, cross-linking themselves to the extracellular matrix. This is particularly dangerous in the kidney where it decreases function and can lead to renal failure.”

“Twenty thousand protein-coding genes make up the human genome but for any given cell only about half of these are expressed. […] Many genes get switched off during differentiation and a major mechanism for this is epigenetics. […] an epigenetic trait […] is ‘a stably heritable phenotype resulting from changes in the chromosome without alterations in the DNA sequence’. Epigenetics involves the chemical alteration of DNA by methyl or other small molecular groups to affect the accessibility of a gene by the transcription machinery […] Epigenetics can […] act on gene expression without affecting the stability of the genetic code by modifying the DNA, the histones in chromatin, or a whole chromosome. […] Epigenetic signatures are not only passed on to somatic daughter cells but they can also be transferred through the germline to the offspring. […] At first the evidence appeared circumstantial but more recent studies have provided direct proof of epigenetic changes involving gene methylation being inherited. Rodent models have provided mechanistic evidence. […] the importance of epigenetics in development is highlighted by the fact that low dietary folate, a nutrient essential for methylation, has been linked to higher risk of birth defects in the offspring.” […on the other hand, well…]

“The cell cycle is divided into phases […] Transition from G1 into S phase commits the cell to division and is therefore a very tightly controlled restriction point. Withdrawal of growth factors, insufficient nucleotides, or energy to complete DNA replication, or even a damaged template DNA, would compromise the process. Problems are therefore detected and the cell cycle halted by cell cycle inhibitors before the cell has committed to DNA duplication. […] The cell cycle inhibitors inactivate the kinases that promote transition through the phases, thus halting the cell cycle. […] The cell cycle can also be paused in S phase to allow time for DNA repairs to be carried out before cell division. The consequences of uncontrolled cell division are so catastrophic that evolution has provided complex checks and balances to maintain fidelity. The price of failure is apoptosis […] 50 to 70 billion cells die every day in a human adult by the controlled molecular process of apoptosis.”

“There are many diseases that arise because a particular protein is either absent or a faulty protein is produced. Administering a correct version of that protein can treat these patients. The first commercially available recombinant protein to be produced for medical use was human insulin to treat diabetes mellitus. […] (FDA) approved the recombinant insulin for clinical use in 1982. Since then over 300 protein-based recombinant pharmaceuticals have been licensed by the FDA and the European Medicines Agency (EMA) […], and many more are undergoing clinical trials. Therapeutic proteins can be produced in bacterial cells but more often mammalian cells such as the Chinese hamster ovary cell line and human fibroblasts are used as these hosts are better able to produce fully functional human protein. However, using mammalian cells is extremely expensive and an alternative is to use live animals or plants. This is called molecular pharming and is an innovative way of producing large amounts of protein relatively cheaply. […] In plant pharming, tobacco, rice, maize, potato, carrots, and tomatoes have all been used to produce therapeutic proteins. […] [One] class of proteins that can be engineered using gene-cloning technology is therapeutic antibodies. […] Therapeutic antibodies are designed to be monoclonal, that is, they are engineered so that they are specific for a particular antigen to which they bind, to block the antigen’s harmful effects. […] Monoclonal antibodies are at the forefront of biological therapeutics as they are highly specific and tend not to induce major side effects.”

“In gene therapy the aim is to restore the function of a faulty gene by introducing a correct version of that gene. […] a cloned gene is transferred into the cells of a patient. Once inside the cell, the protein encoded by the gene is produced and the defect is corrected. […] there are major hurdles to be overcome for gene therapy to be effective. One is the gene construct has to be delivered to the diseased cells or tissues. This can often be difficult […] Mammalian cells […] have complex mechanisms that have evolved to prevent unwanted material such as foreign DNA getting in. Second, introduction of any genetic construct is likely to trigger the patient’s immune response, which can be fatal […] once delivered, expression of the gene product has to be sustained to be effective. One approach to delivering genes to the cells is to use genetically engineered viruses constructed so that most of the viral genome is deleted […] Once inside the cell, some viral vectors such as the retroviruses integrate into the host genome […]. This is an advantage as it provides long-lasting expression of the gene product. However, it also poses a safety risk, as there is little control over where the viral vector will insert into the patient’s genome. If the insertion occurs within a coding gene, this may inactivate gene function. If it integrates close to transcriptional start sites, where promoters and enhancer sequences are located, inappropriate gene expression can occur. This was observed in early gene therapy trials [where some patients who got this type of treatment developed cancer as a result of it. A few more details here – US] […] Adeno-associated viruses (AAVs) […] are often used in gene therapy applications as they are non-infectious, induce only a minimal immune response, and can be engineered to integrate into the host genome […] However, AAVs can only carry a small gene insert and so are limited to use with genes that are of a small size. […] An alternative delivery system to viruses is to package the DNA into liposomes that are then taken up by the cells. This is safer than using viruses as liposomes do not integrate into the host genome and are not very immunogenic. However, liposome uptake by the cells can be less efficient, resulting in lower expression of the gene.”

Links:

One gene–one enzyme hypothesis.
Molecular chaperone.
Protein turnover.
Isoelectric point.
Gel electrophoresis. Polyacrylamide.
Two-dimensional gel electrophoresis.
Mass spectrometry.
Proteomics.
Peptide mass fingerprinting.
Worldwide Protein Data Bank.
Nuclear magnetic resonance spectroscopy of proteins.
Immunoglobulins. Epitope.
Western blot.
Immunohistochemistry.
Crystallin. β-catenin.
Protein isoform.
Prion.
Gene expression. Transcriptional regulation. Chromatin. Transcription factor. Gene silencing. Histone. NF-κB. Chromatin immunoprecipitation.
The agouti mouse model.
X-inactive specific transcript (Xist).
Cell cycle. Cyclin. Cyclin-dependent kinase.
Retinoblastoma protein pRb.
Cytochrome c. Caspase. Bcl-2 family. Bcl-2-associated X protein.
Hybridoma technology. Muromonab-CD3.
Recombinant vaccines and the development of new vaccine strategies.
Knockout mouse.
Adenovirus Vectors for Gene Therapy, Vaccination and Cancer Gene Therapy.
Genetically modified food. Bacillus thuringiensis. Golden rice.

 

May 29, 2018 Posted by | Biology, Books, Chemistry, Diabetes, Engineering, Genetics, Immunology, Medicine, Molecular biology, Pharmacology | Leave a comment

Structural engineering

“The purpose of the book is three-fold. First, I aim to help the general reader appreciate the nature of structure, the role of the structural engineer in man-made structures, and understand better the relationship between architecture and engineering. Second, I provide an overview of how structures work: how they stand up to the various demands made of them. Third, I give students and prospective students in engineering, architecture, and science access to perspectives and qualitative understanding of advanced modern structures — going well beyond the simple statics of most introductory texts. […] Structural engineering is an important part of almost all undergraduate courses in engineering. This book is novel in the use of ‘thought-experiments’ as a straightforward way of explaining some of the important concepts that students often find the most difficult. These include virtual work, strain energy, and maximum and minimum energy principles, all of which are basic to modern computational techniques. The focus is on gaining understanding without the distraction of mathematical detail. The book is therefore particularly relevant for students of civil, mechanical, aeronautical, and aerospace engineering but, of course, it does not cover all of the theoretical detail necessary for completing such courses.”

The above quote is from the book’s preface. I gave the book 2 stars on goodreads, and I must say that I think David Muir Wood’s book in this series on a similar and closely overlapping topic, civil engineering, was just a significantly better book – if you’re planning on reading only one book on these topics, in my opinion you should pick Wood’s book. I have two main complaints about this book: there’s too much stuff about the aesthetic properties of structures and about the history and development of the differences between architecture and engineering; and the author seems to think it’s no problem covering quite complicated topics with just analogies and thought experiments, without showing you any of the equations. As for the first point, I don’t really have any interest in aesthetics or architectural history; as for the second, I can handle math reasonably well, but I usually have trouble when people insist on hiding the equations from me and talking only ‘in images’. The absence of equations doesn’t mean the coverage is much dumbed down; rather, the author is trying to cover the sort of material we usually use mathematics to talk about – because mathematics is the most efficient language for it – using other kinds of language, and things get lost in the translation. He got rid of the math, but not the complexity. The book does include many illustrations, including illustrations of some quite complicated topics and dynamics, but some of the things he talks about can’t be illustrated well with images because you ‘run out of dimensions’ before you’ve handled all the relevant aspects/dynamics – an admission he himself makes in the book.

Anyway, the book is not terrible and there’s some interesting stuff in there. I’ve added a few more quotes and some links related to the book’s coverage below.

“All structures span a gap or a space of some kind and their primary role is to transmit the imposed forces safely. A bridge spans an obstruction like a road or a river. The roof truss of a house spans the rooms of the house. The fuselage of a jumbo jet spans between wheels of its undercarriage on the tarmac of an airport terminal and the self-weight, lift and drag forces in flight. The hull of a ship spans between the variable buoyancy forces caused by the waves of the sea. To be fit for purpose every structure has to cope with specific local conditions and perform inside acceptable boundaries of behaviour—which engineers call ‘limit states’. […] Safety is paramount in two ways. First, the risk of a structure totally losing its structural integrity must be very low—for example a building must not collapse or a ship break up. This maximum level of performance is called an ultimate limit state. If a structure should reach that state for whatever reason then the structural engineer tries to ensure that the collapse or break up is not sudden—that there is some degree of warning—but this is not always possible […] Second, structures must be able to do what they were built for—this is called serviceability or performance limit state. So for example a skyscraper building must not sway so much that it causes discomfort to the occupants, even if the risk of total collapse is still very small.”

“At its simplest force is a pull (tension) or a push (compression). […] There are three ways in which materials are strong in different combinations—pulling (tension), pushing (compression), and sliding (shear). Each is very important […] all intact structures have internal forces that balance the external forces acting on them. These external forces come from simple self-weight, people standing, sitting, walking, travelling across them in cars, trucks, and trains, and from the environment such as wind, water, and earthquakes. In that state of equilibrium it turns out that structures are naturally lazy—the energy stored in them is a minimum for that shape or form of structure. Form-finding structures are a special group of buildings that are allowed to find their own shape—subject to certain constraints. There are two classes—in the first, the form-finding process occurs in a model (which may be physical or theoretical) and the structure is scaled up from the model. In the second, the structure is actually built and then allowed to settle into shape. In both cases the structures are self-adjusting in that they move to a position in which the internal forces are in equilibrium and contain minimum energy. […] there is a big problem in using self-adjusting structures in practice. The movements under changing loads can make the structures unfit for purpose. […] Triangles are important in structural engineering because they are the simplest stable form of structure and you see them in all kinds of structures—whether form-finding or not. […] Other forms of pin jointed structure, such as a rectangle, will deform in shear as a mechanism […] unless it has diagonal bracing—making it triangular. […] bending occurs in part of a structure when the forces acting on it tend to make it turn or rotate—but it is constrained or prevented from turning freely by the way it is connected to the rest of the structure or to its foundations. The turning forces may be internal or external.”

“Energy is the capacity of a force to do work. If you stretch an elastic band it has an internal tension force resisting your pull. If you let go of one end the band will recoil and could inflict a sharp sting on your other hand. The internal force has energy or the capacity to do work because you stretched it. Before you let go the energy was potential; after you let go the energy became kinetic. Potential energy is the capacity to do work because of the position of something—in this case because you pulled the two ends of the band apart. […] A car at the top of a hill has the potential energy to roll down the hill if the brakes are released. The potential energy in the elastic band and in a structure has a specific name—it is called ‘strain energy’. Kinetic energy is due to movement, so when you let go of the band […] the potential energy is converted into kinetic energy. Kinetic energy depends on mass and velocity—so a truck can develop more kinetic energy than a small car. When a structure is loaded by a force then the structure moves in whatever way it can to ‘get out of the way’. If it can move freely it will do—just as if you push a car with the handbrake off it will roll forward. However, if the handbrake is on the car will not move, and an internal force will be set up between the point at which you are pushing and the wheels as they grip the road.”

“[A] rope hanging freely as a catenary has minimum energy and […] it can only resist one kind of force—tension. Engineers say that it has one degree of freedom. […] In brief, degrees of freedom are the independent directions in which a structure or any part of a structure can move or deform […] Movements along degrees of freedom define the shape and location of any object at a given time. Each part, each piece of a physical structure whatever its size is a physical object embedded in and connected to other objects […] similar objects which I will call its neighbours. Whatever its size each has the potential to move unless something stops it. Where it may move freely […] then no internal resisting force is created. […] where it is prevented from moving in any direction a reaction force is created with consequential internal forces in the structure. For example at a support to a bridge, where the whole bridge is normally stopped from moving vertically, then an external vertical reaction force develops which must be resisted by a set of internal forces that will depend on the form of the bridge. So inside the bridge structure each piece, however small or large, will move—but not freely. The neighbouring objects will get in the way […]. When this happens internal forces are created as the objects bump up against each other and we represent or model those forces along the pathways which are the local degrees of freedom. The structure has to be strong enough to resist these internal forces along these pathways.”

“The next question is ‘How do we find out how big the forces and movements are?’ It turns out that there is a whole class of structures where this is reasonably straightforward and these are the structures covered in elementary textbooks. Engineers call them ‘statically determinate’ […] For these structures we can find the sizes of the forces just by balancing the internal and external forces to establish equilibrium. […] Unfortunately many real structures can’t be fully explained in this way—they are ‘statically indeterminate’. This is because whilst establishing equilibrium between internal and external forces is necessary it is not sufficient for finding all of the internal forces. […] The four-legged stool is statically indeterminate. You will begin to understand this if you have ever sat at a four-legged wobbly table […] which has one leg shorter than the other three legs. There can be no force in that leg because there is no reaction from the ground. What is more, the opposite leg will have no internal force either because otherwise there would be a net turning moment about the line joining the other two legs. Thus the table is balanced on two legs—which is why it wobbles back and forth. […] each leg has one degree of freedom but we have only three ways of balancing them in the (x,y,z) directions. In mathematical terms, we have four unknown variables (the internal forces) but only three equations (balancing equilibrium in three directions). It follows that there isn’t just one set of forces in equilibrium—indeed, there are many such sets.”
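
The stool example can be written out directly: three equilibrium equations (vertical force balance plus moments about two horizontal axes) in four unknown leg forces. A small numpy sketch of my own, with made-up numbers (a 100 N load at the centre of a square stool), shows that the system of equations is rank-deficient and so has infinitely many solutions:

```python
import numpy as np

# Legs at the corners (x, y) of a square, unknown vertical reactions F1..F4.
legs = np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]], dtype=float)
W = 100.0  # downward load applied at the centre (0, 0)

# Equilibrium: sum of vertical forces, moment about the x-axis, moment about the y-axis.
A = np.vstack([
    np.ones(4),      # F1 + F2 + F3 + F4 = W
    legs[:, 1],      # sum(Fi * yi) = W * y_load = 0
    legs[:, 0],      # sum(Fi * xi) = W * x_load = 0
])
b = np.array([W, 0.0, 0.0])

print("unknowns:", A.shape[1], "independent equations:", np.linalg.matrix_rank(A))
# -> 4 unknowns but rank 3: statically indeterminate, so no unique solution

# Two different sets of leg forces that both satisfy equilibrium:
particular = np.linalg.lstsq(A, b, rcond=None)[0]   # the 'even' solution: 25 N per leg
null = np.array([1.0, -1.0, 1.0, -1.0])             # a self-balancing pattern (in A's null space)
print(particular, particular + 10 * null)           # both are valid equilibrium states
```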

“[W]hen a structure is in equilibrium it has minimum strain energy. […] Strictly speaking, minimum strain energy as a criterion for equilibrium is [however] true only in specific circumstances. To understand this we need to look at the constitutive relations between forces and deformations or displacements. Strain energy is stored potential energy and that energy is the capacity to do work. The strain energy in a body is there because work has been done on it—a force moved through a distance. Hence in order to know the energy we must know how much displacement is caused by a given force. This is called a ‘constitutive relation’ and has the form ‘force equals a constitutive factor times a displacement’. The most common of these relationships is called ‘linear elastic’ where the force equals a simple numerical factor—called the stiffness—times the displacement […] The inverse of the stiffness is called flexibility”.
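
For the linear elastic case the arithmetic is simple enough to spell out. With illustrative numbers of my own choosing – a spring-like component of stiffness k = 2000 N/m stretched by 5 mm – the force is F = kx, the stored strain energy is U = ½kx² (the work done, i.e. the area under the force–displacement line), and the flexibility is just 1/k:

```python
k = 2000.0          # stiffness, N/m (illustrative value)
x = 0.005           # displacement, m (5 mm)

F = k * x           # linear elastic constitutive relation: force = stiffness * displacement
U = 0.5 * k * x**2  # strain energy = work done = average force * distance = 1/2 * F * x
flexibility = 1 / k # displacement per unit force, m/N

print(F, U, flexibility)   # 10.0 N, 0.025 J, 0.0005 m/N
```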

“Aeroplanes take off or ascend because the lift forces due to the forward motion of the plane exceed the weight […] In level flight or cruise the plane is neutrally buoyant and flies at a steady altitude. […] The structure of an aircraft consists of four sets of tubes: the fuselage, the wings, the tail, and the fin. For obvious reasons their weight needs to be as small as possible. […] Modern aircraft structures are semi-monocoque—meaning stressed skin but with a supporting frame. In other words the skin covering, which may be only a few millimetres thick, becomes part of the structure. […] In an overall sense, the lift and drag forces effectively act on the wings through centres of pressure. The wings also carry the weight of engines and fuel. During a typical flight, the positions of these centres of force vary along the wing—for example as fuel is used. The wings are balanced cantilevers fixed to the fuselage. Longer wings (compared to their width) produce greater lift but are also necessarily heavier—so a compromise is required.”

“When structures move quickly, in particular if they accelerate or decelerate, we have to consider […] the inertia force and the damping force. They occur, for example, as an aeroplane takes off and picks up speed. They occur in bridges and buildings that oscillate in the wind. As these structures move the various bits of the structure remain attached—perhaps vibrating in very complex patterns, but they remain joined together in a state of dynamic equilibrium. An inertia force results from an acceleration or deceleration of an object and is directly proportional to the weight of that object. […] Newton’s 2nd Law tells us that the magnitudes of these [inertial] forces are proportional to the rates of change of momentum. […] Damping arises from friction or ‘looseness’ between components. As a consequence, energy is dissipated into other forms such as heat and sound, and the vibrations get smaller. […] The kinetic energy of a structure in static equilibrium is zero, but as the structure moves its potential energy is converted into kinetic energy. This is because the total energy remains constant by the principle of the conservation of energy (the first law of thermodynamics). The changing forces and displacements along the degree of freedom pathways travel as a wave […]. The amplitude of the wave depends on the nature of the material and the connections between components.”
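
A tiny numerical sketch of my own (not from the book) of these ideas: a mass on a spring with a little damping, released from a displaced position. At each time step the inertia force m·a balances the spring and damping forces, and the damping steadily dissipates the mechanical energy, so the oscillations die away:

```python
m, k, c = 2.0, 800.0, 4.0      # mass (kg), stiffness (N/m), damping (N*s/m) - illustrative values
x, v, dt = 0.05, 0.0, 0.001    # initial displacement 5 cm, starting at rest

for step in range(5000):
    a = (-k * x - c * v) / m   # dynamic equilibrium: m*a = -(spring force) - (damping force)
    v += a * dt                # semi-implicit Euler integration
    x += v * dt
    if step % 1000 == 0:
        energy = 0.5 * k * x**2 + 0.5 * m * v**2   # strain + kinetic energy
        print(f"t={step*dt:.1f}s  x={x*100:+.2f}cm  energy={energy:.4f}J")
# both the displacements and the total mechanical energy shrink as damping dissipates energy
```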

“For [a] structure to be safe the materials must be strong enough to resist the tension, the compression, and the shear. The strength of materials in tension is reasonably straightforward. We just need to know the limiting forces the material can resist. This is usually specified as a set of stresses. A stress is a force divided by a cross sectional area and represents a localized force over a small area of the material. Typical limiting tensile stresses are called the yield stress […] and the rupture stress—so we just need to know their numerical values from tests. Yield occurs when the material cannot regain its original state, and permanent displacements or strains occur. Rupture is when the material breaks or fractures. […] Limiting average shear stresses and maximum allowable stress are known for various materials. […] Strength in compression is much more difficult […] Modern practice using the finite element method enables us to make theoretical estimates […] but it is still approximate because of the simplifications necessary to do the computer analysis […]. One of the challenges to engineers who rely on finite element analysis is to make sure they understand the implications of the simplifications used.”
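
Since stress is just force divided by cross-sectional area, a quick worked check is easy – the numbers below are my own and purely illustrative (a tie of 20 mm × 20 mm cross-section carrying 60 kN, compared against a yield stress of 250 MPa, a typical order of magnitude for mild steel):

```python
force = 60e3               # N (60 kN tension)
area = 0.020 * 0.020       # m^2 (20 mm x 20 mm cross-section)
yield_stress = 250e6       # Pa (typical order of magnitude for mild steel)

stress = force / area      # stress = force / cross-sectional area
print(f"stress = {stress/1e6:.0f} MPa, utilization = {stress/yield_stress:.0%}")
# -> 150 MPa, 60% of yield: below the yield stress, so no permanent strain expected
```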

“Dynamic loads cause vibrations. One particularly dangerous form of vibration is called resonance […]. All structures have a natural frequency of free vibration. […] Resonance occurs if the frequency of an external vibrating force coincides with the natural frequency of the structure. The consequence is a rapid build up of vibrations that can become seriously damaging. […] Wind is a major source of vibrations. As it flows around a bluff body the air breaks away from the surface and moves in a circular motion like a whirlpool or whirlwind as eddies or vortices. Under certain conditions these vortices may break away on alternate sides, and as they are shed from the body they create pressure differences that cause the body to oscillate. […] a structure is in stable equilibrium when a small perturbation does not result in large displacements. A structure in dynamic equilibrium may oscillate about a stable equilibrium position. […] Flutter is dynamic and a form of wind-excited self-reinforcing oscillation. It occurs, as in the P-delta effect, because of changes in geometry. Forces that are no longer in line because of large displacements tend to modify those displacements of the structure, and these, in turn, modify the forces, and so on. In this process the energy input during a cycle of vibration may be greater than that lost by damping and so the amplitude increases in each cycle until destruction. It is a positive feed-back mechanism that amplifies the initial deformations, causes non-linearity, material plasticity and decreased stiffness, and reduced natural frequency. […] Regular pulsating loads, even very small ones, can cause other problems too through a phenomenon known as fatigue. The word is descriptive—under certain conditions the materials just get tired and crack. A normally ductile material like steel becomes brittle. Fatigue occurs under very small loads repeated many millions of times. All materials in all types of structures have a fatigue limit. […] Fatigue damage occurs deep in the material as microscopic bonds are broken. The problem is particularly acute in the heat affected zones of welded structures.”
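
To put numbers on ‘natural frequency’ and resonance: for the same illustrative mass–spring–damper values as in the damped-oscillator sketch above, the undamped natural frequency is √(k/m), and the steady-state amplitude of a sinusoidally forced oscillator – using the standard textbook formula, not anything specific to this book – peaks sharply when the forcing frequency approaches it:

```python
import math

m, k, c, F0 = 2.0, 800.0, 4.0, 10.0          # illustrative values; F0 is the forcing amplitude (N)
omega_n = math.sqrt(k / m)                   # natural frequency, rad/s (here 20 rad/s, about 3.2 Hz)

def amplitude(omega):
    # steady-state amplitude of a forced, damped oscillator at forcing frequency omega
    return F0 / math.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

for omega in [0.5 * omega_n, 0.9 * omega_n, omega_n, 1.1 * omega_n, 2.0 * omega_n]:
    print(f"forcing at {omega/omega_n:.1f} x natural frequency -> amplitude {amplitude(omega)*1000:.1f} mm")
# the response is largest near omega = omega_n: that is resonance
```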

“Resilience is the ability of a system to recover quickly from difficult conditions. […] One way of delivering a degree of resilience is to make a structure fail-safe—to mitigate failure if it happens. A household electrical fuse is an everyday example. The fuse does not prevent failure, but it does prevent extreme consequences such as an electrical fire. Damage-tolerance is a similar concept. Damage is any physical harm that reduces the value of something. A damage-tolerant structure is one in which any damage can be accommodated at least for a short time until it can be dealt with. […] human factors in failure are not just a matter of individuals’ slips, lapses, or mistakes but are also the result of organizational and cultural situations which are not easy to identify in advance or even at the time. Indeed, they may only become apparent in hindsight. It follows that another major part of safety is to design a structure so that it can be inspected, repaired, and maintained. Indeed all of the processes of creating a structure, whether conceiving, designing, making, or monitoring performance, have to be designed with sufficient resilience to accommodate unexpected events. In other words, safety is not something a system has (a property), rather it is something a system does (a performance). Providing resilience is a form of control—a way of managing uncertainties and risks.”

Links:

Stiffness.
Antoni Gaudí. Heinz Isler. Frei Otto.
Eden Project.
Tensegrity.
Bending moment.
Shear and moment diagram.
Stonehenge.
Pyramid at Meidum.
Vitruvius.
Master builder.
John Smeaton.
Puddling (metallurgy).
Cast iron.
Isambard Kingdom Brunel.
Henry Bessemer. Bessemer process.
Institution of Structural Engineers.
Graphic statics (wiki doesn’t have an article on this topic under this name and there isn’t much here, but it looks like google has a lot if you’re interested).
Constitutive equation.
Deformation (mechanics).
Compatibility (mechanics).
Principle of Minimum Complementary Energy.
Direct stiffness method. Finite element method.
Hogging and sagging.
Centre of buoyancy. Metacentre (fluid mechanics). Angle of attack.
Box girder bridge.
D’Alembert’s principle.
Longeron.
Buckling.
S-n diagram.

April 11, 2018 Posted by | Books, Engineering, Physics | Leave a comment

Networks

I actually think this was a really nice book, considering the format – I gave it four stars on goodreads. One of the things I noticed people didn’t like about it in the reviews is that it ‘jumps’ a bit in terms of topic coverage; it covers a wide variety of applications and analytical settings. I mostly don’t consider this a weakness of the book – even if occasionally it does get a bit excessive – and I can definitely understand the authors’ choice of approach; it’s sort of hard to illustrate the potential the analytical techniques described within this book have if you’re not allowed to talk about all the areas in which they have been – or could be gainfully – applied. A related point is that many people who read the book might be familiar with the application of these tools in specific contexts but have perhaps not thought about the fact that similar methods are applied in many other areas (and they might all of them be a bit annoyed the authors don’t talk more about computer science applications, or foodweb analyses, or infectious disease applications, or perhaps sociometry…). Most of the book is about graph-theory-related stuff, but a very decent amount of the coverage deals with applications, in a broad sense of the word at least, not theory. The discussion of theoretical constructs in the book always felt to me driven to a large degree by their usefulness in specific contexts.

I have covered related topics before here on the blog, also quite recently – e.g. there’s at least some overlap between this book and Holland’s book about complexity theory in the same series (I incidentally think these books probably go well together). As I found the book slightly difficult to blog as it was, I decided against covering it in as much detail as I sometimes do with these texts, which means I have left out the links I usually include in posts like these.

Below some quotes from the book.

“The network approach focuses all the attention on the global structure of the interactions within a system. The detailed properties of each element on its own are simply ignored. Consequently, systems as different as a computer network, an ecosystem, or a social group are all described by the same tool: a graph, that is, a bare architecture of nodes bounded by connections. […] Representing widely different systems with the same tool can only be done by a high level of abstraction. What is lost in the specific description of the details is gained in the form of universality – that is, thinking about very different systems as if they were different realizations of the same theoretical structure. […] This line of reasoning provides many insights. […] The network approach also sheds light on another important feature: the fact that certain systems that grow without external control are still capable of spontaneously developing an internal order. […] Network models are able to describe in a clear and natural way how self-organization arises in many systems. […] In the study of complex, emergent, and self-organized systems (the modern science of complexity), networks are becoming increasingly important as a universal mathematical framework, especially when massive amounts of data are involved. […] networks are crucial instruments to sort out and organize these data, connecting individuals, products, news, etc. to each other. […] While the network approach eliminates many of the individual features of the phenomenon considered, it still maintains some of its specific features. Namely, it does not alter the size of the system — i.e. the number of its elements — or the pattern of interaction — i.e. the specific set of connections between elements. Such a simplified model is nevertheless enough to capture the properties of the system. […] The network approach [lies] somewhere between the description by individual elements and the description by big groups, bridging the two of them. In a certain sense, networks try to explain how a set of isolated elements are transformed, through a pattern of interactions, into groups and communities.”

“[T]he random graph model is very important because it quantifies the properties of a totally random network. Random graphs can be used as a benchmark, or null case, for any real network. This means that a random graph can be used in comparison to a real-world network, to understand how much chance has shaped the latter, and to what extent other criteria have played a role. The simplest recipe for building a random graph is the following. We take all the possible pair of vertices. For each pair, we toss a coin: if the result is heads, we draw a link; otherwise we pass to the next pair, until all the pairs are finished (this means drawing the link with a probability p = ½, but we may use whatever value of p). […] Nowadays [the random graph model] is a benchmark of comparison for all networks, since any deviations from this model suggests the presence of some kind of structure, order, regularity, and non-randomness in many real-world networks.”
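
The coin-tossing recipe translates almost literally into code. Here is a minimal pure-Python version of it (my own illustration, not the authors’): for every possible pair of nodes, draw the edge with probability p:

```python
import random
from itertools import combinations

def random_graph(n, p, seed=None):
    """Erdos-Renyi style random graph: each possible pair gets an edge with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < p]

edges = random_graph(n=100, p=0.5, seed=1)      # p = 1/2 is the 'coin toss' version
print(len(edges), "edges out of", 100 * 99 // 2, "possible pairs")  # roughly half of 4,950
```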

“…in networks, topology is more important than metrics. […] In the network representation, the connections between the elements of a system are much more important than their specific positions in space and their relative distances. The focus on topology is one of the biggest strengths of the network approach, useful whenever topology is more relevant than metrics. […] In social networks, the relevance of topology means that social structure matters. […] Sociology has classified a broad range of possible links between individuals […]. The tendency to have several kinds of relationships in social networks is called multiplexity. But this phenomenon appears in many other networks: for example, two species can be connected by different strategies of predation, two computers by different cables or wireless connections, etc. We can modify a basic graph to take into account this multiplexity, e.g. by attaching specific tags to edges. […] Graph theory [also] allows us to encode in edges more complicated relationships, as when connections are not reciprocal. […] If a direction is attached to the edges, the resulting structure is a directed graph […] In these networks we have both in-degree and out-degree, measuring the number of inbound and outbound links of a node, respectively. […] in most cases, relations display a broad variation or intensity [i.e. they are not binary/dichotomous]. […] Weighted networks may arise, for example, as a result of different frequencies of interactions between individuals or entities.”
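
Direction, weights, and multiple kinds of ties are all easy to express with a graph library; the toy example below (mine, not the book’s) uses networkx to build a small directed graph whose edges carry a weight and a tag standing in for multiplexity, and reads off in-degree and out-degree:

```python
import networkx as nx

G = nx.DiGraph()   # directed: links need not be reciprocal
# each edge carries a weight (intensity) and a 'kind' tag (a crude stand-in for multiplexity)
G.add_edge("Alice", "Bob", weight=5, kind="friendship")
G.add_edge("Bob", "Alice", weight=1, kind="work")
G.add_edge("Alice", "Carol", weight=2, kind="work")
G.add_edge("Carol", "Bob", weight=4, kind="friendship")

for node in G:
    print(node, "in-degree:", G.in_degree(node), "out-degree:", G.out_degree(node),
          "weighted out-degree:", G.out_degree(node, weight="weight"))
```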

“An organism is […] the outcome of several layered networks and not only the deterministic result of the simple sequence of genes. Genomics has been joined by epigenomics, transcriptomics, proteomics, metabolomics, etc., the disciplines that study these layers, in what is commonly called the omics revolution. Networks are at the heart of this revolution. […] The brain is full of networks where various web-like structures provide the integration between specialized areas. In the cerebellum, neurons form modules that are repeated again and again: the interaction between modules is restricted to neighbours, similarly to what happens in a lattice. In other areas of the brain, we find random connections, with a more or less equal probability of connecting local, intermediate, or distant neurons. Finally, the neocortex — the region involved in many of the higher functions of mammals — combines local structures with more random, long-range connections. […] typically, food chains are not isolated, but interwoven in intricate patterns, where a species belongs to several chains at the same time. For example, a specialized species may predate on only one prey […]. If the prey becomes extinct, the population of the specialized species collapses, giving rise to a set of co-extinctions. An even more complicated case is where an omnivore species predates a certain herbivore, and both eat a certain plant. A decrease in the omnivore’s population does not imply that the plant thrives, because the herbivore would benefit from the decrease and consume even more plants. As more species are taken into account, the population dynamics can become more and more complicated. This is why a more appropriate description than ‘foodchains’ for ecosystems is the term foodwebs […]. These are networks in which nodes are species and links represent relations of predation. Links are usually directed (big fishes eat smaller ones, not the other way round). These networks provide the interchange of food, energy, and matter between species, and thus constitute the circulatory system of the biosphere.”

“In the cell, some groups of chemicals interact only with each other and with nothing else. In ecosystems, certain groups of species establish small foodwebs, without any connection to external species. In social systems, certain human groups may be totally separated from others. However, such disconnected groups, or components, are a strikingly small minority. In all networks, almost all the elements of the systems take part in one large connected structure, called a giant connected component. […] In general, the giant connected component includes not less than 90 to 95 per cent of the system in almost all networks. […] In a directed network, the existence of a path from one node to another does not guarantee that the journey can be made in the opposite direction. Wolves eat sheep, and sheep eat grass, but grass does not eat sheep, nor do sheep eat wolves. This restriction creates a complicated architecture within the giant connected component […] according to an estimate made in 1999, more than 90 per cent of the WWW is composed of pages connected to each other, if the direction of edges is ignored. However, if we take direction into account, the proportion of nodes mutually reachable is only 24 per cent, the giant strongly connected component. […] most networks are sparse, i.e. they tend to be quite frugal in connections. Take, for example, the airport network: the personal experience of every frequent traveller shows that direct flights are not that common, and intermediate stops are necessary to reach several destinations; thousands of airports are active, but each city is connected to less than 20 other cities, on average. The same happens in most networks. A measure of this is given by the mean number of connection of their nodes, that is, their average degree.”
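
Both observations – that almost everything sits in one giant connected component, and that networks are nevertheless sparse – are easy to check on a random graph; a quick networkx illustration of my own (the figures quoted for real networks are the book’s):

```python
import networkx as nx

G = nx.erdos_renyi_graph(n=10_000, p=0.0005, seed=42)   # sparse: about 5 links per node on average

avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
giant = max(nx.connected_components(G), key=len)        # the giant connected component

print(f"average degree: {avg_degree:.1f}")                        # ~5, despite 10,000 nodes
print(f"giant component: {len(giant) / G.number_of_nodes():.1%}") # typically above 99% of the nodes
```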

“[A] puzzling contradiction — a sparse network can still be very well connected — […] attracted the attention of the Hungarian mathematicians […] Paul Erdős and Alfréd Rényi. They tackled it by producing different realizations of their random graph. In each of them, they changed the density of edges. They started with a very low density: less than one edge per node. It is natural to expect that, as the density increases, more and more nodes will be connected to each other. But what Erdős and Rényi found instead was a quite abrupt transition: several disconnected components coalesced suddenly into a large one, encompassing almost all the nodes. The sudden change happened at one specific critical density: when the average number of links per node (i.e. the average degree) was greater than one, then the giant connected component suddenly appeared. This result implies that networks display a very special kind of economy, intrinsic to their disordered structure: a small number of edges, even randomly distributed between nodes, is enough to generate a large structure that absorbs almost all the elements. […] Social systems seem to be very tightly connected: in a large enough group of strangers, it is not unlikely to find pairs of people with quite short chains of relations connecting them. […] The small-world property consists of the fact that the average distance between any two nodes (measured as the shortest path that connects them) is very small. Given a node in a network […], few nodes are very close to it […] and few are far from it […]: the majority are at the average — and very short — distance. This holds for all networks: starting from one specific node, almost all the nodes are at very few steps from it; the number of nodes within a certain distance increases exponentially fast with the distance. Another way of explaining the same phenomenon […] is the following: even if we add many nodes to a network, the average distance will not increase much; one has to increase the size of a network by several orders of magnitude to notice that the paths to new nodes are (just a little) longer. The small-world property is crucial to many network phenomena. […] The small-world property is something intrinsic to networks. Even the completely random Erdős-Renyi graphs show this feature. By contrast, regular grids do not display it. If the Internet was a chessboard-like lattice, the average distance between two routers would be of the order of 1,000 jumps, and the Net would be much slower [the authors note elsewhere that “The Internet is composed of hundreds of thousands of routers, but just about ten ‘jumps’ are enough to bring an information packet from one of them to any other.”] […] The key ingredient that transforms a structure of connections into a small world is the presence of a little disorder. No real network is an ordered array of elements. On the contrary, there are always connections ‘out of place’. It is precisely thanks to these connections that networks are small worlds. […] Shortcuts are responsible for the small-world property in many […] situations.”
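
The Erdős–Rényi transition and the small-world property can both be seen in a few lines. In the sketch below (my own, using networkx) the number of nodes stays fixed while the average degree is varied around 1, and the largest component suddenly swallows most of the graph; the average distance inside a denser giant component then turns out to be only a handful of steps:

```python
import networkx as nx

n = 2000
for avg_degree in [0.5, 1.0, 1.5, 3.0]:
    G = nx.erdos_renyi_graph(n, avg_degree / n, seed=7)
    giant = max(nx.connected_components(G), key=len)
    print(f"<k> = {avg_degree}: largest component holds {len(giant) / n:.1%} of the nodes")
# below <k> = 1 the largest component is a tiny fraction; above it, a giant component appears

G = nx.erdos_renyi_graph(n, 6 / n, seed=7)
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("average distance in the giant component:",
      round(nx.average_shortest_path_length(giant), 1))   # only about 4-5 steps for 2,000 nodes
```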

“Body size, IQ, road speed, and other magnitudes have a characteristic scale: that is, an average value that in the large majority of cases is a rough predictor of the actual value that one will find. […] While height is a homogeneous magnitude, the number of social connection[s] is a heterogeneous one. […] A system with this feature is said to be scale-free or scale-invariant, in the sense that it does not have a characteristic scale. This can be rephrased by saying that the individual fluctuations with respect to the average are too large for us to make a correct prediction. […] In general, a network with heterogeneous connectivity has a set of clear hubs. When a graph is small, it is easy to find whether its connectivity is homogeneous or heterogeneous […]. In the first case, all the nodes have more or less the same connectivity, while in the latter it is easy to spot a few hubs. But when the network to be studied is very big […] things are not so easy. […] the distribution of the connectivity of the nodes of the […] network […] is the degree distribution of the graph. […] In homogeneous networks, the degree distribution is a bell curve […] while in heterogeneous networks, it is a power law […]. The power law implies that there are many more hubs (and much more connected) in heterogeneous networks than in homogeneous ones. Moreover, hubs are not isolated exceptions: there is a full hierarchy of nodes, each of them being a hub compared with the less connected ones.”
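
The practical test described here – look at the degree distribution – is easy to run. My own comparison below sets a random (homogeneous) graph against a preferential-attachment (heterogeneous) one with the same number of nodes and roughly the same average degree; in the first the maximum degree stays close to the mean, in the second a few hubs tower far above it:

```python
import networkx as nx
from collections import Counter

n = 10_000
homogeneous = nx.erdos_renyi_graph(n, 8 / n, seed=3)        # random graph, ~8 links per node
heterogeneous = nx.barabasi_albert_graph(n, 4, seed=3)      # preferential attachment, ~8 links per node

for name, G in [("homogeneous", homogeneous), ("heterogeneous", heterogeneous)]:
    degrees = [d for _, d in G.degree()]
    distribution = Counter(degrees)                          # the degree distribution itself (could be plotted)
    print(f"{name}: mean degree {sum(degrees) / n:.1f}, maximum degree {max(degrees)}")
# the bell-shaped case tops out a little above the mean; the fat-tailed one has hubs with hundreds of links
```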

“Looking at the degree distribution is the best way to check if a network is heterogeneous or not: if the distribution is fat tailed, then the network will have hubs and heterogeneity. A mathematically perfect power law is never found, because this would imply the existence of hubs with an infinite number of connections. […] Nonetheless, a strongly skewed, fat-tailed distribution is a clear signal of heterogeneity, even if it is never a perfect power law. […] While the small-world property is something intrinsic to networked structures, hubs are not present in all kind of networks. For example, power grids usually have very few of them. […] hubs are not present in random networks. A consequence of this is that, while random networks are small worlds, heterogeneous ones are ultra-small worlds. That is, the distance between their vertices is relatively smaller than in their random counterparts. […] Heterogeneity is not equivalent to randomness. On the contrary, it can be the signature of a hidden order, not imposed by a top-down project, but generated by the elements of the system. The presence of this feature in widely different networks suggests that some common underlying mechanism may be at work in many of them. […] the Barabási–Albert model gives an important take-home message. A simple, local behaviour, iterated through many interactions, can give rise to complex structures. This arises without any overall blueprint”.
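
The ‘simple, local behaviour’ behind the Barabási–Albert model is preferential attachment: each new node links to an existing node with probability proportional to that node’s current degree. A compact way to sketch the idea in plain Python (no blueprint anywhere; the sizes and seed are purely illustrative) is to keep a list in which every node appears once per link it has:

import random
from collections import Counter

rng = random.Random(42)
stubs = [0, 1]                      # start from two nodes joined by a single link
for new_node in range(2, 10000):
    target = rng.choice(stubs)      # picking a random entry favours already well-connected nodes
    stubs += [new_node, target]     # record one 'link end' for each endpoint of the new edge

degrees = Counter(stubs)
print("five most connected nodes:", degrees.most_common(5))
print("share of nodes with only one link:", sum(1 for d in degrees.values() if d == 1) / len(degrees))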

Homogamy, the tendency of like to marry like, is very strong […] Homogamy is a specific instance of homophily: this consists of a general trend of like to link to like, and is a powerful force in shaping social networks […] assortative mixing [is] a special form of homophily, in which nodes tend to connect with others that are similar to them in the number of connections. By contrast [when] high- and low-degree nodes are more connected to each other [it] is called disassortative mixing. Both cases display a form of correlation in the degrees of neighbouring nodes. When the degrees of neighbours are positively correlated, then the mixing is assortative; when negatively, it is disassortative. […] In random graphs, the neighbours of a given node are chosen completely at random: as a result, there is no clear correlation between the degrees of neighbouring nodes […]. On the contrary, correlations are present in most real-world networks. Although there is no general rule, most natural and technological networks tend to be disassortative, while social networks tend to be assortative. […] Degree assortativity and disassortativity are just an example of the broad range of possible correlations that bias how nodes tie to each other.”
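
The degree correlations described here can be computed directly with networkx. In the sketch below (parameters illustrative), the random graph should come out close to zero and the scale-free model mildly negative; real social networks typically give positive values, and many technological and biological networks negative ones:

import networkx as nx

random_graph = nx.fast_gnp_random_graph(3000, 6 / 3000, seed=1)   # no degree correlations expected
scale_free = nx.barabasi_albert_graph(3000, 3, seed=1)            # hub-dominated graph

for name, G in (("random graph", random_graph), ("scale-free graph", scale_free)):
    r = nx.degree_assortativity_coefficient(G)
    print(f"{name:16s}: degree assortativity r = {r:+.2f}")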

“[N]etworks (neither ordered lattices nor random graphs), can have both large clustering and small average distance at the same time. […] in almost all networks, the clustering of a node depends on the degree of that node. Often, the larger the degree, the smaller the clustering coefficient. Small-degree nodes tend to belong to well-interconnected local communities. Similarly, hubs connect with many nodes that are not directly interconnected. […] Central nodes usually act as bridges or bottlenecks […]. For this reason, centrality is an estimate of the load handled by a node of a network, assuming that most of the traffic passes through the shortest paths (this is not always the case, but it is a good approximation). For the same reason, damaging central nodes […] can impair radically the flow of a network. Depending on the process one wants to study, other definitions of centrality can be introduced. For example, closeness centrality computes the distance of a node to all others, and reach centrality factors in the portion of all nodes that can be reached in one step, two steps, three steps, and so on.”
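
These quantities are also straightforward to compute. The sketch below (networkx assumed; the generator and its parameters are just one convenient way to get a graph with hubs and local communities) checks that high-degree nodes tend to have lower clustering, and estimates betweenness centrality, which counts the shortest paths passing through each node — the ‘load’ interpretation mentioned above:

import networkx as nx

G = nx.powerlaw_cluster_graph(2000, 4, 0.3, seed=1)   # heterogeneous graph with local clustering

clustering = nx.clustering(G)
degree = dict(G.degree())
hubs = sorted(G, key=degree.get, reverse=True)[:10]
small = sorted(G, key=degree.get)[:10]
print("mean clustering of the ten biggest hubs :", round(sum(clustering[v] for v in hubs) / 10, 3))
print("mean clustering of ten low-degree nodes :", round(sum(clustering[v] for v in small) / 10, 3))

betweenness = nx.betweenness_centrality(G, k=200, seed=1)   # estimated from a sample of 200 nodes
top = max(betweenness, key=betweenness.get)
print("node carrying the most shortest-path traffic:", top, "with degree", degree[top])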

“Domino effects are not uncommon in foodwebs. Networks in general provide the backdrop for large-scale, sudden, and surprising dynamics. […] most of the real-world networks show a double-edged kind of robustness. They are able to function normally even when a large fraction of the network is damaged, but suddenly certain small failures, or targeted attacks, bring them down completely. […] networks are very different from engineered systems. In an airplane, damaging one element is enough to stop the whole machine. In order to make it more resilient, we have to use strategies such as duplicating certain pieces of the plane: this makes it almost 100 per cent safe. In contrast, networks, which are mostly not blueprinted, display a natural resilience to a broad range of errors, but when certain elements fail, they collapse. […] A random graph of the size of most real-world networks is destroyed after the removal of half of the nodes. On the other hand, when the same procedure is performed on a heterogeneous network (either a map of a real network or a scale-free model of a similar size), the giant connected component resists even after removing more than 80 per cent of the nodes, and the distance within it is practically the same as at the beginning. The scene is different when researchers simulate a targeted attack […] In this situation the collapse happens much faster […]. However, now the most vulnerable is the second [i.e. the heterogeneous network]: while in the homogeneous network it is necessary to remove about one-fifth of its more connected nodes to destroy it, in the heterogeneous one this happens after removing the first few hubs. Highly connected nodes seem to play a crucial role, in both errors and attacks. […] hubs are mainly responsible for the overall cohesion of the graph, and removing a few of them is enough to destroy it.”
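
The error-versus-attack experiment is easy to re-run on a model network. The sketch below (networkx assumed; the size, seed, and removal fractions are illustrative, so the exact numbers will differ from the book’s) removes a growing fraction of nodes either at random or hubs-first and reports how much of the original network the giant component still covers:

import random
import networkx as nx

def giant_after_removal(G, fraction, targeted, seed=1):
    H = G.copy()
    n = G.number_of_nodes()
    k = int(fraction * n)
    if targeted:
        victims = sorted(H, key=H.degree, reverse=True)[:k]   # hit the most connected nodes first
    else:
        victims = random.Random(seed).sample(list(H), k)      # random failures
    H.remove_nodes_from(victims)
    if H.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(H), key=len)) / n

G = nx.barabasi_albert_graph(5000, 2, seed=1)   # heterogeneous, hub-dominated network
for fraction in (0.05, 0.2, 0.5, 0.8):
    random_failure = giant_after_removal(G, fraction, targeted=False)
    targeted_attack = giant_after_removal(G, fraction, targeted=True)
    print(f"remove {fraction:.0%}: random failure -> {random_failure:.2f}, targeted attack -> {targeted_attack:.2f}")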

“Studies of errors and attacks have shown that hubs keep different parts of a network connected. This implies that they also act as bridges for spreading diseases. Their numerous ties put them in contact with both infected and healthy individuals: so hubs become easily infected, and they infect other nodes easily. […] The vulnerability of heterogeneous networks to epidemics is bad news, but understanding it can provide good ideas for containing diseases. […] if we can immunize just a fraction, it is not a good idea to choose people at random. Most of the times, choosing at random implies selecting individuals with a relatively low number of connections. Even if they block the disease from spreading in their surroundings, hubs will always be there to put it back into circulation. A much better strategy would be to target hubs. Immunizing hubs is like deleting them from the network, and the studies on targeted attacks show that eliminating a small fraction of hubs fragments the network: thus, the disease will be confined to a few isolated components. […] in the epidemic spread of sexually transmitted diseases the timing of the links is crucial. Establishing an unprotected link with a person before they establish an unprotected link with another person who is infected is not the same as doing so afterwards.”
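
A toy epidemic model makes the point about immunization strategies concrete. The sketch below (networkx assumed; the network, transmission probability, and immunization budget are all invented for illustration, and exact outbreak sizes vary with them) runs a simple susceptible–infected–recovered outbreak twice on the same scale-free network, once with a random 5 per cent of nodes immunized and once with the top 5 per cent of hubs immunized; the difference in outbreak size should usually be substantial:

import random
import networkx as nx

def outbreak_size(G, immune, beta=0.2, seed=3):
    rng = random.Random(seed)
    susceptible = set(G) - set(immune)
    patient_zero = rng.choice(sorted(susceptible))
    susceptible.discard(patient_zero)
    infected, recovered = {patient_zero}, set()
    while infected:
        newly_infected = set()
        for node in infected:
            for neighbour in G[node]:
                if neighbour in susceptible and rng.random() < beta:
                    newly_infected.add(neighbour)
        susceptible -= newly_infected
        recovered |= infected
        infected = newly_infected
    return len(recovered)

G = nx.barabasi_albert_graph(5000, 3, seed=1)
budget = 250                                                  # immunize 5 per cent of the nodes
random_choice = random.Random(0).sample(list(G), budget)
hubs = sorted(G, key=G.degree, reverse=True)[:budget]
print("outbreak size with random immunization:", outbreak_size(G, random_choice))
print("outbreak size with hub immunization   :", outbreak_size(G, hubs))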

April 3, 2018 Posted by | Biology, Books, Ecology, Engineering, Epidemiology, Genetics, Mathematics, Statistics | Leave a comment

The Internet of Things


Some links to stuff he talks about in the lecture:

The Internet of Things: making the most of the Second Digital Revolution – A report by the UK Government Chief Scientific Adviser.
South–North Water Transfer Project.
FDA approves first smart pill that tracks drug regimen compliance from the inside.
The Internet of Things (IoT)* units installed base by category from 2014 to 2020.
Share of the IoT market by sub-sector worldwide in 2017.
San Diego to Cover Half the City with Intelligent Streetlights.
IPv4 and IPv6 (specifically, he talks a little about the IPv4 address space problem).
General Data Protection Regulation (GDPR).
Shodan (website).
Mirai botnet.
Gait analysis.
Website reveals 73,000 unprotected security cameras with default passwords. (This was just an example link – it’s unclear if the site he used to illustrate his point in that part of the lecture was actually Insecam, but he does talk about the widespread use of default passwords and related security implications during the lecture).
Strava’s fitness heatmaps are a ‘potential catastrophe’.
‘Secure by Design’ (a very recently published proposed UK IoT code of practice).

March 26, 2018 Posted by | Computer science, Engineering, Lectures | Leave a comment

The Computer

Below some quotes and links related to the book’s coverage:

“At the heart of every computer is one or more hardware units known as processors. A processor controls what the computer does. For example, it will process what you type in on your computer’s keyboard, display results on its screen, fetch web pages from the Internet, and carry out calculations such as adding two numbers together. It does this by ‘executing’ a computer program that details what the computer should do […] Data and programs are stored in two storage areas. The first is known as main memory and has the property that whatever is stored there can be retrieved very quickly. Main memory is used for transient data – for example, the result of a calculation which is an intermediate result in a much bigger calculation – and is also used to store computer programs while they are being executed. Data in main memory is transient – it will disappear when the computer is switched off. Hard disk memory, also known as file storage or backing storage, contains data that are required over a period of time. Typical entities that are stored in this memory include files of numerical data, word-processed documents, and spreadsheet tables. Computer programs are also stored here while they are not being executed. […] There are a number of differences between main memory and hard disk memory. The first is the retrieval time. With main memory, an item of data can be retrieved by the processor in fractions of microseconds. With file-based memory, the retrieval time is much greater: of the order of milliseconds. The reason for this is that main memory is silicon-based […] hard disk memory is usually mechanical and is stored on the metallic surface of a disk, with a mechanical arm retrieving the data. […] main memory is more expensive than file-based memory”.

The Internet is a network of computers – strictly, it is a network that joins up a number of networks. It carries out a number of functions. First, it transfers data from one computer to another computer […] The second function of the Internet is to enforce reliability. That is, to ensure that when errors occur then some form of recovery process happens; for example, if an intermediate computer fails then the software of the Internet will discover this and resend any malfunctioning data via other computers. A major component of the Internet is the World Wide Web […] The web […] uses the data-transmission facilities of the Internet in a specific way: to store and distribute web pages. The web consists of a number of computers known as web servers and a very large number of computers known as clients (your home PC is a client). Web servers are usually computers that are more powerful than the PCs that are normally found in homes or those used as office computers. They will be maintained by some enterprise and will contain individual web pages relevant to that enterprise; for example, an online book store such as Amazon will maintain web pages for each item it sells. The program that allows users to access the web is known as a browser. […] A part of the Internet known as the Domain Name System (usually referred to as DNS) will figure out where the page is held and route the request to the web server holding the page. The web server will then send the page back to your browser which will then display it on your computer. Whenever you want another page you would normally click on a link displayed on that page and the process is repeated. Conceptually, what happens is simple. However, it hides a huge amount of detail involving the web discovering where pages are stored, the pages being located, their being sent, the browser reading the pages and interpreting how they should be displayed, and eventually the browser displaying the pages. […] without one particular hardware advance the Internet would be a shadow of itself: this is broadband. This technology has provided communication speeds that we could not have dreamed of 15 years ago. […] Typical broadband speeds range from one megabit per second to 24 megabits per second, the lower rate being about 20 times faster than dial-up rates.”
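
The ‘conceptually simple’ request cycle can be traced with a few lines of Python using only the standard library (this needs a working network connection; example.com is just a placeholder host standing in for any web server):

import socket
from urllib.request import urlopen

host = "example.com"                      # stand-in for any web server
ip_address = socket.gethostbyname(host)   # the DNS step: translate the name into an address
print(f"{host} resolves to {ip_address}")

with urlopen(f"http://{host}/") as response:   # ask the web server for a page
    page = response.read()                     # the 'browser' receives the HTML
    print(f"received {len(page)} bytes of HTML, status code {response.status}")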

“A major idea I hope to convey […] is that regarding the computer as just the box that sits on your desk, or as a chunk of silicon that is embedded within some device such as a microwave, is only a partial view. The Internet – or rather broadband access to the Internet – has created a gigantic computer that has unlimited access to both computer power and storage to the point where even applications that we all thought would never migrate from the personal computer are doing just that. […] the Internet functions as a series of computers – or more accurately computer processors – carrying out some task […]. Conceptually, there is little difference between these computers and [a] supercomputer, the only difference is in the details: for a supercomputer the communication between processors is via some internal electronic circuit, while for a collection of computers working together on the Internet the communication is via external circuits used for that network.”

“A computer will consist of a number of electronic circuits. The most important is the processor: this carries out the instructions that are contained in a computer program. […] There are a number of individual circuit elements that make up the computer. Thousands of these elements are combined together to construct the computer processor and other circuits. One basic element is known as an And gate […]. This is an electrical circuit that has two binary inputs A and B and a single binary output X. The output will be one if both the inputs are one and zero otherwise. […] the And gate is only one example – when some action is required, for example adding two numbers together, [the different circuits] interact with each other to carry out that action. In the case of addition, the two binary numbers are processed bit by bit to carry out the addition. […] Whatever actions are taken by a program […] the cycle is the same; an instruction is read into the processor, the processor decodes the instruction, acts on it, and then brings in the next instruction. So, at the heart of a computer is a series of circuits and storage elements that fetch and execute instructions and store data and programs.”
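
The way simple gates combine into something useful, such as addition, can be mimicked directly in software. The sketch below builds a one-bit full adder out of nothing but And, Or, and Xor functions and then adds two numbers bit by bit, much as a processor’s adder circuit does (the eight-bit width is an arbitrary choice for illustration):

def AND(a, b): return a & b
def OR(a, b): return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    # one-bit addition built from the basic gates above
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def add(x, y, width=8):
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(23, 42))   # prints 65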

“In essence, a hard disk unit consists of one or more circular metallic disks which can be magnetized. Each disk has a very large number of magnetizable areas which can either represent zero or one depending on the magnetization. The disks are rotated at speed. The unit also contains an arm or a number of arms that can move laterally and which can sense the magnetic patterns on the disk. […] When a processor requires some data that is stored on a hard disk […] then it issues an instruction to find the file. The operating system – the software that controls the computer – will know where the file starts and ends and will send a message to the hard disk to read the data. The arm will move laterally until it is over the start position of the file and when the revolving disk passes under the arm the magnetic pattern that represents the data held in the file is read by it. Accessing data on a hard disk is a mechanical process and usually takes a small number of milliseconds to carry out. Compared with the electronic speeds of the computer itself – normally measured in fractions of a microsecond – this is incredibly slow. Because disk access is slow, systems designers try to minimize the amount of access required to files. One technique that has been particularly effective is known as caching. It is, for example, used in web servers. Such servers store pages that are sent to browsers for display. […] Caching involves placing the frequently accessed pages in some fast storage medium such as flash memory and keeping the remainder on a hard disk.”
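
Caching is easy to demonstrate in miniature. In the sketch below the slow disk (or database) read is simulated with a short delay, and Python’s built-in functools.lru_cache plays the role of the fast storage holding frequently accessed pages (the URL and the timings are made up purely for illustration):

import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_page(url):
    time.sleep(0.005)                      # pretend this is a ~5 ms mechanical disk access
    return f"<html>contents of {url}</html>"

start = time.perf_counter()
for _ in range(1000):
    fetch_page("/popular-page")            # only the first call pays the disk penalty
print(f"1,000 requests took {time.perf_counter() - start:.3f} seconds")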

“The first computers had a single hardware processor that executed individual instructions. It was not too long before researchers started thinking about computers that had more than one processor. The simple theory here was that if a computer had n processors then it would be n times faster. […] it is worth debunking this notion. If you look at many classes of problems […], you see that a strictly linear increase in performance is not achieved. If a problem that is solved by a single computer is solved in 20 minutes, then you will find a dual processor computer solving it in perhaps 11 minutes. A 3-processor computer may solve it in 9 minutes, and a 4-processor computer in 8 minutes. There is a law of diminishing returns; often, there comes a point when adding a processor slows down the computation. What happens is that each processor needs to communicate with the others, for example passing on the result of a computation; this communicational overhead becomes bigger and bigger as you add processors to the point when it dominates the amount of useful work that is done. The sort of problems where they are effective is where a problem can be split up into sub-problems that can be solved almost independently by each processor with little communication.”
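
The diminishing returns described here are captured, in idealized form, by Amdahl’s law (which ignores the communication overhead that can eventually make adding processors counterproductive). The book’s illustrative 20/11/9/8-minute figures roughly correspond to a job that is about 85 per cent parallelizable:

def speedup(processors, parallel_fraction):
    # Amdahl's law: the serial part of the job cannot be spread across processors
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

single_processor_minutes = 20
for p in (1, 2, 3, 4, 8, 64):
    minutes = single_processor_minutes / speedup(p, parallel_fraction=0.85)
    print(f"{p:2d} processor(s): about {minutes:.1f} minutes")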

Symmetric encryption methods are very efficient and can be used to scramble large files or long messages being sent from one computer to another. Unfortunately, symmetric techniques suffer from a major problem: if there are a number of individuals involved in a data transfer or in reading a file, each has to know the same key. This makes it a security nightmare. […] public key cryptography removed a major problem associated with symmetric cryptography: that of a large number of keys in existence some of which may be stored in an insecure way. However, a major problem with asymmetric cryptography is the fact that it is very inefficient (about 10,000 times slower than symmetric cryptography): while it can be used for short messages such as email texts, it is far too inefficient for sending gigabytes of data. However, […] when it is combined with symmetric cryptography, asymmetric cryptography provides very strong security. […] One very popular security scheme is known as the Secure Sockets Layer – normally shortened to SSL. It is based on the concept of a one-time pad. […] SSL uses public key cryptography to communicate the randomly generated key between the sender and receiver of a message. This key is only used once for the data interchange that occurs and, hence, is an electronic analogue of a one-time pad. When each of the parties to the interchange has received the key, they encrypt and decrypt the data employing symmetric cryptography, with the generated key carrying out these processes. […] There is an impression amongst the public that the main threats to security and to privacy arise from technological attack. However, the threat from more mundane sources is equally high. Data thefts, damage to software and hardware, and unauthorized access to computer systems can occur in a variety of non-technical ways: by someone finding computer printouts in a waste bin; by a window cleaner using a mobile phone camera to take a picture of a display containing sensitive information; by an office cleaner stealing documents from a desk; by a visitor to a company noting down a password written on a white board; by a disgruntled employee putting a hammer through the main server and the backup server of a company; or by someone dropping an unencrypted memory stick in the street.”
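
The combination of the two approaches can be sketched in a few lines with the third-party cryptography package (pip install cryptography). This shows only the general idea behind schemes such as SSL/TLS, not their actual protocol: a fresh symmetric key encrypts the bulk data, and the slow asymmetric step protects only that small key:

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver's long-lived asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: generate a fresh symmetric session key and encrypt the bulk data with it.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"a large message goes here " * 100)

# Sender: protect the small session key with the receiver's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: unwrap the session key with the private key, then decrypt the data symmetrically.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext.startswith(b"a large message")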

“The basic architecture of the computer has remained unchanged for six decades since IBM developed the first mainframe computers. It consists of a processor that reads software instructions one by one and executes them. Each instruction will result in data being processed, for example by being added together; and data being stored in the main memory of the computer or being stored on some file-storage medium; or being sent to the Internet or to another computer. This is what is known as the von Neumann architecture; it was named after John von Neumann […]. His key idea, which still holds sway today, is that in a computer the data and the program are both stored in the computer’s memory in the same address space. There have been few challenges to the von Neumann architecture.”
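
A toy stored-program machine shows the essence of the von Neumann idea: instructions and data live side by side in one memory, and the processor simply repeats the fetch–decode–execute cycle. Everything below (the instruction set, the addresses, the values) is invented purely for illustration:

memory = {
    0: ("LOAD", 100),     # copy the value at address 100 into the accumulator
    1: ("ADD", 101),      # add the value at address 101 to it
    2: ("STORE", 102),    # write the accumulator back to address 102
    3: ("HALT", None),
    100: 23, 101: 42, 102: 0,     # the data shares the same address space as the program
}

def run(memory):
    accumulator, program_counter = 0, 0
    while True:
        opcode, operand = memory[program_counter]    # fetch
        program_counter += 1
        if opcode == "LOAD":                         # decode and execute
            accumulator = memory[operand]
        elif opcode == "ADD":
            accumulator += memory[operand]
        elif opcode == "STORE":
            memory[operand] = accumulator
        elif opcode == "HALT":
            return memory

print(run(memory)[102])    # prints 65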

“[A] ‘neural network’ […] consists of an input layer that can sense various signals from some environment […]. In the middle (hidden layer), there are a large number of processing elements (neurones) which are arranged into sub-layers. Finally, there is an output layer which provides a result […]. It is in the middle layer that the work is done in a neural computer. What happens is that the network is trained by giving it examples of the trend or item that is to be recognized. What the training does is to strengthen or weaken the connections between the processing elements in the middle layer until, when combined, they produce a strong signal when a new case is presented to them that matches the previously trained examples and a weak signal when an item that does not match the examples is encountered. Neural networks have been implemented in hardware, but most of the implementations have been via software where the middle layer has been implemented in chunks of code that carry out the learning process. […] although the initial impetus was to use ideas in neurobiology to develop neural architectures based on a consideration of processes in the brain, there is little resemblance between the internal data and software now used in commercial implementations and the human brain.”
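
A minimal version of such a network can be written with numpy: an input layer with two signals, a hidden middle layer of eight ‘neurones’, and a single output, trained on the classic XOR example. All parameters (layer size, learning rate, number of iterations) are arbitrary choices for illustration, and this toy bears only a family resemblance to the large networks used commercially:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input layer: two signals
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired output: XOR

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # connections into the hidden (middle) layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # connections into the output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    hidden = sigmoid(X @ W1 + b1)                # the middle layer does the work
    output = sigmoid(hidden @ W2 + b2)
    error = output - y
    # 'Training' strengthens or weakens the connections so as to shrink the error.
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_output
    b2 -= 0.5 * grad_output.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(np.round(output, 2))    # should end up close to [[0], [1], [1], [0]]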

Links:

Computer.
Byte. Bit.
Moore’s law.
Computer program.
Programming language. High-level programming language. Low-level programming language.
Zombie (computer science).
Therac-25.
Cloud computing.
Instructions per second.
ASCII.
Fetch-execute cycle.
Grace Hopper. Software Bug.
Transistor. Integrated circuit. Very-large-scale integration. Wafer (electronics). Photomask.
Read-only memory (ROM). Read-write memory (RWM). Bus (computing). Address bus. Programmable read-only memory (PROM). Erasable programmable read-only memory (EPROM). Electrically erasable programmable read-only memory (EEPROM). Flash memory. Dynamic random-access memory (DRAM). Static random-access memory (static RAM/SRAM).
Hard disc.
Miniaturization.
Wireless communication.
Radio-frequency identification (RFID).
Metadata.
NP-hardness. Set partition problem. Bin packing problem.
Routing.
Cray X-MP. Beowulf cluster.
Vector processor.
Folding@home.
Denial-of-service attack. Melissa (computer virus). Malware. Firewall (computing). Logic bomb. Fork bomb/rabbit virus. Cryptography. Caesar cipher. Social engineering (information security).
Application programming interface.
Data mining. Machine translation. Machine learning.
Functional programming.
Quantum computing.

March 19, 2018 Posted by | Books, Computer science, Cryptography, Engineering | Leave a comment

Safety-Critical Systems

Some related links to topics covered in the lecture:

Safety-critical system.
Safety engineering.
Fault tree analysis.
Failure mode and effects analysis.
Fail-safe.
Value of a statistical life.
ALARP principle.
Hazards and Risk (HSA).
Software system safety.
Aleatoric and epistemic uncertainty.
N-version programming.
An experimental evaluation of the assumption of independence in multiversion programming (Knight & Leveson).
Safety integrity level.
Software for Dependable Systems – Sufficient Evidence? (consensus study report).

March 15, 2018 Posted by | Computer science, Economics, Engineering, Lectures, Statistics | Leave a comment

The Ice Age (II)

I really liked the book, recommended if you’re at all interested in this kind of stuff. Below some observations from the book’s second half, and some related links:

“Charles MacLaren, writing in 1842, […] argued that the formation of large ice sheets would result in a fall in sea level as water was taken from the oceans and stored frozen on the land. This insight triggered a new branch of ice age research – sea level change. This topic can get rather complicated because as ice sheets grow, global sea level falls. This is known as eustatic sea level change. As ice sheets increase in size, their weight depresses the crust and relative sea level will rise. This is known as isostatic sea level change. […] It is often quite tricky to differentiate between regional-scale isostatic factors and the global-scale eustatic sea level control.”

“By the late 1870s […] glacial geology had become a serious scholarly pursuit with a rapidly growing literature. […] [In the late 1880s] Carvill Lewis […] put forward the radical suggestion that the [sea] shells at Moel Tryfan and other elevated localities (which provided the most important evidence for the great marine submergence of Britain) were not in situ. Building on the earlier suggestions of Thomas Belt (1832–78) and James Croll, he argued that these materials had been dredged from the sea bed by glacial ice and pushed upslope so that ‘they afford no testimony to the former subsidence of the land’. Together, his recognition of terminal moraines and the reworking of marine shells undermined the key pillars of Lyell’s great marine submergence. This was a crucial step in establishing the primacy of glacial ice over icebergs in the deposition of the drift in Britain. […] By the end of the 1880s, it was the glacial dissenters who formed the eccentric minority. […] In the period leading up to World War One, there was [instead] much debate about whether the ice age involved a single phase of ice sheet growth and freezing climate (the monoglacial theory) or several phases of ice sheet build up and decay separated by warm interglacials (the polyglacial theory).”

“As the Earth rotates about its axis travelling through space in its orbit around the Sun, there are three components that change over time in elegant cycles that are entirely predictable. These are known as eccentricity, precession, and obliquity or ‘stretch, wobble, and roll’ […]. These orbital perturbations are caused by the gravitational pull of the other planets in our Solar System, especially Jupiter. Milankovitch calculated how each of these orbital cycles influenced the amount of solar radiation received at different latitudes over time. These are known as Milankovitch Cycles or Croll–Milankovitch Cycles to reflect the important contribution made by both men. […] The shape of the Earth’s orbit around the Sun is not constant. It changes from an almost circular orbit to one that is mildly elliptical (a slightly stretched circle) […]. This orbital eccentricity operates over a 400,000- and 100,000-year cycle. […] Changes in eccentricity have a relatively minor influence on the total amount of solar radiation reaching the Earth, but they are important for the climate system because they modulate the influence of the precession cycle […]. When eccentricity is high, for example, axial precession has a greater impact on seasonality. […] The Earth is currently tilted at an angle of 23.4° to the plane of its orbit around the Sun. Astronomers refer to this axial tilt as obliquity. This angle is not fixed. It rolls back and forth over a 41,000-year cycle from a tilt of 22.1° to 24.5° and back again […]. Even small changes in tilt can modify the strength of the seasons. With a greater angle of tilt, for example, we can have hotter summers and colder winters. […] Cooler, reduced insolation summers are thought to be a key factor in the initiation of ice sheet growth in the middle and high latitudes because they allow more snow to survive the summer melt season. Slightly warmer winters may also favour ice sheet build-up as greater evaporation from a warmer ocean will increase snowfall over the centres of ice sheet growth. […] The Earth’s axis of rotation is not fixed. It wobbles like a spinning top slowing down. This wobble traces a circle on the celestial sphere […]. At present the Earth’s rotational axis points toward Polaris (the current northern pole star) but in 11,000 years it will point towards another star, Vega. This slow circling motion is known as axial precession and it has important impacts on the Earth’s climate by causing the solstices and equinoxes to move around the Earth’s orbit. In other words, the seasons shift over time. Precession operates over a 19,000- and 23,000-year cycle. This cycle is often referred to as the Precession of the Equinoxes.”
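
The interplay of the three cycles can be visualized with a purely schematic sketch: sinusoids with the periods quoted above, combined so that eccentricity modulates the precession signal. The amplitudes are arbitrary and this is not a real insolation calculation (that requires the full orbital solution), but plotting the result gives a feel for how the cycles beat against each other:

import numpy as np

years = np.arange(0, 800_001, 1_000)                 # the last 800,000 years in 1,000-year steps
eccentricity = 0.5 * np.sin(2 * np.pi * years / 100_000) + 0.3 * np.sin(2 * np.pi * years / 400_000)
obliquity = np.sin(2 * np.pi * years / 41_000)
precession = 0.5 * (np.sin(2 * np.pi * years / 19_000) + np.sin(2 * np.pi * years / 23_000))

# Schematic 'forcing': obliquity plus a precession signal whose strength is modulated by eccentricity.
forcing = obliquity + (1 + eccentricity) * precession
print("schematic forcing at 0, 200k, 400k, 600k, 800k years:", np.round(forcing[::200], 2))
# plotting 'forcing' against 'years' (e.g. with matplotlib) shows the beat patterns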

The albedo of a surface is a measure of its ability to reflect solar energy. Darker surfaces tend to absorb most of the incoming solar energy and have low albedos. The albedo of the ocean surface in high latitudes is commonly about 10 per cent — in other words, it absorbs 90 per cent of the incoming solar radiation. In contrast, snow, glacial ice, and sea ice have much higher albedos and can reflect between 50 and 90 per cent of incoming solar energy back into the atmosphere. The elevated albedos of bright frozen surfaces are a key feature of the polar radiation budget. Albedo feedback loops are important over a range of spatial and temporal scales. A cooling climate will increase snow cover on land and the extent of sea ice in the oceans. These high albedo surfaces will then reflect more solar radiation to intensify and sustain the cooling trend, resulting in even more snow and sea ice. This positive feedback can play a major role in the expansion of snow and ice cover and in the initiation of a glacial phase. Such positive feedbacks can also work in reverse when a warming phase melts ice and snow to reveal dark and low albedo surfaces such as peaty soil or bedrock.”
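
The size of the effect is easy to put numbers on using the albedo values quoted above (the 340 W/m2 figure is roughly the global-mean incoming solar radiation at the top of the atmosphere and is used here only for illustration; high-latitude values are lower):

incoming_solar = 340.0                           # W per square metre, illustrative global-mean value
albedo_open_ocean, albedo_sea_ice = 0.10, 0.70   # within the ranges quoted above

absorbed_ocean = (1 - albedo_open_ocean) * incoming_solar
absorbed_ice = (1 - albedo_sea_ice) * incoming_solar
print(f"open ocean absorbs about {absorbed_ocean:.0f} W/m2, sea ice only about {absorbed_ice:.0f} W/m2")
# The feedback loop: cooling -> more ice -> less energy absorbed -> further cooling (and the reverse on warming).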

“At the end of the Cretaceous, around 65 million years ago (Ma), lush forests thrived in the Polar Regions and ocean temperatures were much warmer than today. This warm phase continued for the next 10 million years, peaking during the Eocene thermal maximum […]. From that time onwards, however, Earth’s climate began a steady cooling that saw the initiation of widespread glacial conditions, first in Antarctica between 40 and 30 Ma, in Greenland between 20 and 15 Ma, and then in the middle latitudes of the northern hemisphere around 2.5 Ma. […] Over the past 55 million years, a succession of processes driven by tectonics combined to cool our planet. It is difficult to isolate their individual contributions or to be sure about the details of cause and effect over this long period, especially when there are uncertainties in dating and when one considers the complexity of the climate system with its web of internal feedbacks.” [Potential causes which have been highlighted include: The uplift of the Himalayas (leading to increased weathering, leading over geological time to an increased amount of CO2 being sequestered in calcium carbonate deposited on the ocean floor, lowering atmospheric CO2 levels), the isolation of Antarctica which created the Antarctic Circumpolar Current (leading to a cooling of Antarctica), the dry-out of the Mediterranean Sea ~5mya (which significantly lowered salt concentrations in the World Ocean, meaning that sea water froze at a higher temperature), and the formation of the Isthmus of Panama. – US].

“[F]or most of the last 1 million years, large ice sheets were present in the middle latitudes of the northern hemisphere and sea levels were lower than today. Indeed, ‘average conditions’ for the Quaternary Period involve much more ice than present. The interglacial peaks — such as the present Holocene interglacial, with its ice volume minima and high sea level — are the exception rather than the norm. The sea level maximum of the Last Interglacial (MIS 5) is higher than today. It also shows that cold glacial stages (c.80,000 years duration) are much longer than interglacials (c.15,000 years). […] Arctic willow […], the northernmost woody plant on Earth, is found in central European pollen records from the last glacial stage. […] For most of the Quaternary deciduous forests have been absent from most of Europe. […] the interglacial forests of temperate Europe that are so familiar to us today are, in fact, rather atypical when we consider the long view of Quaternary time. Furthermore, if the last glacial period is representative of earlier ones, for much of the Quaternary terrestrial ecosystems were continuously adjusting to a shifting climate.”

“Greenland ice cores typically have very clear banding […] that corresponds to individual years of snow accumulation. This is because the snow that falls in summer under the permanent Arctic sun differs in texture to the snow that falls in winter. The distinctive paired layers can be counted like tree rings to produce a finely resolved chronology with annual and even seasonal resolution. […] Ice accumulation is generally much slower in Antarctica, so the ice core record takes us much further back in time. […] As layers of snow become compacted into ice, air bubbles recording the composition of the atmosphere are sealed in discrete layers. This fossil air can be recovered to establish the changing concentration of greenhouse gases such as carbon dioxide (CO2) and methane (CH4). The ice core record therefore allows climate scientists to explore the processes involved in climate variability over very long timescales. […] By sampling each layer of ice and measuring its oxygen isotope composition, Dansgaard produced an annual record of air temperature for the last 100,000 years. […] Perhaps the most startling outcome of this work was the demonstration that global climate could change extremely rapidly. Dansgaard showed that dramatic shifts in mean air temperature (>10°C) had taken place in less than a decade. These findings were greeted with scepticism and there was much debate about the integrity of the Greenland record, but subsequent work from other drilling sites vindicated all of Dansgaard’s findings. […] The ice core records from Greenland reveal a remarkable sequence of abrupt warming and cooling cycles within the last glacial stage. These are known as Dansgaard–Oeschger (D–O) cycles. […] [A] series of D–O cycles between 65,000 and 10,000 years ago [caused] mean annual air temperatures on the Greenland ice sheet [to be] shifted by as much as 10°C. Twenty-five of these rapid warming events have been identified during the last glacial period. This discovery dispelled the long held notion that glacials were lengthy periods of stable and unremitting cold climate. The ice core record shows very clearly that even the glacial climate flipped back and forth. […] D–O cycles commence with a very rapid warming (between 5 and 10°C) over Greenland followed by a steady cooling […] Deglaciations are rapid because positive feedbacks speed up both the warming trend and ice sheet decay. […] The ice core records heralded a new era in climate science: the study of abrupt climate change. Most sedimentary records of ice age climate change yield relatively low resolution information — a thousand years may be packed into a few centimetres of marine or lake sediment. In contrast, ice cores cover every year. They also retain a greater variety of information about the ice age past than any other archive. We can even detect layers of volcanic ash in the ice and pinpoint the date of ancient eruptions.”

“There are strong thermal gradients in both hemispheres because the low latitudes receive the most solar energy and the poles the least. To redress these imbalances the atmosphere and oceans move heat polewards — this is the basis of the climate system. In the North Atlantic a powerful surface current takes warmth from the tropics to higher latitudes: this is the famous Gulf Stream and its northeastern extension the North Atlantic Drift. Two main forces drive this current: the strong southwesterly winds and the return flow of colder, saltier water known as North Atlantic Deep Water (NADW). The surface current loses much of its heat to air masses that give maritime Europe a moist, temperate climate. Evaporative cooling also increases its salinity so that it begins to sink. As the dense and cold water sinks to the deep ocean to form NADW, it exerts a strong pull on the surface currents to maintain the cycle. It returns south at depths >2,000 m. […] The thermohaline circulation in the North Atlantic was periodically interrupted during Heinrich Events when vast discharges of melting icebergs cooled the ocean surface and reduced its salinity. This shut down the formation of NADW and suppressed the Gulf Stream.”

Links:

Archibald Geikie.
Andrew Ramsay (geologist).
Albrecht Penck. Eduard Brückner. Günz glaciation. Mindel glaciation. Riss glaciation. Würm.
Insolation.
Perihelion and aphelion.
Deep Sea Drilling Project.
Foraminifera.
δ18O. Isotope fractionation.
Marine isotope stage.
Cesare Emiliani.
Nicholas Shackleton.
Brunhes–Matuyama reversal. Geomagnetic reversal. Magnetostratigraphy.
Climate: Long range Investigation, Mapping, and Prediction (CLIMAP).
Uranium–thorium dating. Luminescence dating. Optically stimulated luminescence. Cosmogenic isotope dating.
The role of orbital forcing in the Early-Middle Pleistocene Transition (paper).
European Project for Ice Coring in Antarctica (EPICA).
Younger Dryas.
Lake Agassiz.
Greenland ice core project (GRIP).
J Harlen Bretz. Missoula Floods.
Pleistocene megafauna.

February 25, 2018 Posted by | Astronomy, Engineering, Geology, History, Paleontology, Physics | Leave a comment

Lakes (II)

(I have had some computer issues over the last couple of weeks, which was the explanation for my brief blogging hiatus, but they should be resolved by now and as I’m already starting to fall quite a bit behind in terms of my intended coverage of the books I’ve read this year I hope to get rid of some of the backlog in the days to come.)

I have added some more observations from the second half of the book, as well as some related links, below.

“[R]ecycling of old plant material is especially important in lakes, and one way to appreciate its significance is to measure the concentration of CO2, an end product of decomposition, in the surface waters. This value is often above, sometimes well above, the value to be expected from equilibration of this gas with the overlying air, meaning that many lakes are net producers of CO2 and that they emit this greenhouse gas to the atmosphere. How can that be? […] Lakes are not sealed microcosms that function as stand-alone entities; on the contrary, they are embedded in a landscape and are intimately coupled to their terrestrial surroundings. Organic materials are produced within the lake by the phytoplankton, photosynthetic cells that are suspended in the water and that fix CO2, release oxygen (O2), and produce biomass at the base of the aquatic food web. Photosynthesis also takes place by attached algae (the periphyton) and submerged water plants (aquatic macrophytes) that occur at the edge of the lake where enough sunlight reaches the bottom to allow their growth. But additionally, lakes are the downstream recipients of terrestrial runoff from their catchments […]. These continuous inputs include not only water, but also subsidies of plant and soil organic carbon that are washed into the lake via streams, rivers, groundwater, and overland flows. […] The organic carbon entering lakes from the catchment is referred to as ‘allochthonous’, meaning coming from the outside, and it tends to be relatively old […] In contrast, much younger organic carbon is available […] as a result of recent photosynthesis by the phytoplankton and littoral communities; this carbon is called ‘autochthonous’, meaning that it is produced within the lake.”

“It used to be thought that most of the dissolved organic matter (DOM) entering lakes, especially the coloured fraction, was unreactive and that it would transit the lake to ultimately leave unchanged at the outflow. However, many experiments and field observations have shown that this coloured material can be partially broken down by sunlight. These photochemical reactions result in the production of CO2, and also the degradation of some of the organic polymers into smaller organic molecules; these in turn are used by bacteria and decomposed to CO2. […] Most of the bacterial species in lakes are decomposers that convert organic matter into mineral end products […] This sunlight-driven chemistry begins in the rivers, and continues in the surface waters of the lake. Additional chemical and microbial reactions in the soil also break down organic materials and release CO2 into the runoff and ground waters, further contributing to the high concentrations in lake water and its emission to the atmosphere. In algal-rich ‘eutrophic’ lakes there may be sufficient photosynthesis to cause the drawdown of CO2 to concentrations below equilibrium with the air, resulting in the reverse flux of this gas, from the atmosphere into the surface waters.”

“There is a precarious balance in lakes between oxygen gains and losses, despite the seemingly limitless quantities in the overlying atmosphere. This balance can sometimes tip to deficits that send a lake into oxygen bankruptcy, with the O2 mostly or even completely consumed. Waters that have O2 concentrations below 2mg/L are referred to as ‘hypoxic’, and will be avoided by most fish species, while waters in which there is a complete absence of oxygen are called ‘anoxic’ and are mostly the domain for specialized, hardy microbes. […] In many temperate lakes, mixing in spring and again in autumn are the critical periods of re-oxygenation from the overlying atmosphere. In summer, however, the thermocline greatly slows down that oxygen transfer from air to deep water, and in cooler climates, winter ice-cover acts as another barrier to oxygenation. In both of these seasons, the oxygen absorbed into the water during earlier periods of mixing may be rapidly consumed, leading to anoxic conditions. Part of the reason that lakes are continuously on the brink of anoxia is that only limited quantities of oxygen can be stored in water because of its low solubility. The concentration of oxygen in the air is 209 millilitres per litre […], but cold water in equilibrium with the atmosphere contains only 9ml/L […]. This scarcity of oxygen worsens with increasing temperature (from 4°C to 30°C the solubility of oxygen falls by 43 per cent), and it is compounded by faster rates of bacterial decomposition in warmer waters and thus a higher respiratory demand for oxygen.”
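
A rough back-of-the-envelope sketch, using only the figures quoted above plus the density of oxygen gas (about 1.43 g per litre at standard conditions), converts the numbers into the mg/L units used for the hypoxia threshold; real saturation values follow a non-linear empirical formula rather than this crude linear interpolation:

ml_per_litre_cold = 9.0                      # quoted equilibrium content of cold water, in ml/L
mg_per_ml_oxygen = 1.43                      # approximate mass of one ml of O2 gas
saturation_4C = ml_per_litre_cold * mg_per_ml_oxygen      # roughly 12.9 mg/L
saturation_30C = saturation_4C * (1 - 0.43)               # the quoted 43 per cent drop by 30°C

def rough_saturation(temp_c):
    # crude linear interpolation between the two quoted endpoints
    return saturation_4C + (saturation_30C - saturation_4C) * (temp_c - 4) / (30 - 4)

for t in (4, 10, 20, 30):
    print(f"{t:2d} °C: about {rough_saturation(t):.1f} mg/L at saturation (hypoxic below 2 mg/L)")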

“Lake microbiomes play multiple roles in food webs as producers, parasites, and consumers, and as steps into the animal food chain […]. These diverse communities of microbes additionally hold centre stage in the vital recycling of elements within the lake ecosystem […]. These biogeochemical processes are not simply of academic interest; they totally alter the nutritional value, mobility, and even toxicity of elements. For example, sulfate is the most oxidized and also most abundant form of sulfur in natural waters, and it is the ion taken up by phytoplankton and aquatic plants to meet their biochemical needs for this element. These photosynthetic organisms reduce the sulfate to organic sulfur compounds, and once they die and decompose, bacteria convert these compounds to the rotten-egg smelling gas, H2S, which is toxic to most aquatic life. In anoxic waters and sediments, this effect is amplified by bacterial sulfate reducers that directly convert sulfate to H2S. Fortunately another group of bacteria, sulfur oxidizers, can use H2S as a chemical energy source, and in oxygenated waters they convert this reduced sulfur back to its benign, oxidized, sulfate form. […] [The] acid neutralizing capacity (or ‘alkalinity’) varies greatly among lakes. Many lakes in Europe, North America, and Asia have been dangerously shifted towards a low pH because they lacked sufficient carbonate to buffer the continuous input of acid rain that resulted from industrial pollution of the atmosphere. The acid conditions have negative effects on aquatic animals, including by causing a shift in aluminium to its more soluble and toxic form Al3+. Fortunately, these industrial emissions have been regulated and reduced in most of the developed world, although there are still legacy effects of acid rain that have resulted in a long-term depletion of carbonates and associated calcium in certain watersheds.”

“Rotifers, cladocerans, and copepods are all planktonic, that is their distribution is strongly affected by currents and mixing processes in the lake. However, they are also swimmers, and can regulate their depth in the water. For the smallest such as rotifers and copepods, this swimming ability is limited, but the larger zooplankton are able to swim over an impressive depth range during the twenty-four-hour ‘diel’ (i.e. light–dark) cycle. […] the cladocerans in Lake Geneva reside in the thermocline region and deep epilimnion during the day, and swim upwards by about 10m during the night, while cyclopoid copepods swim up by 60m, returning to the deep, dark, cold waters of the profundal zone during the day. Even greater distances up and down the water column are achieved by larger animals. The opossum shrimp, Mysis (up to 25mm in length) lives on the bottom of lakes during the day and in Lake Tahoe it swims hundreds of metres up into the surface waters, although not on moon-lit nights. In Lake Baikal, one of the main zooplankton species is the endemic amphipod, Macrohectopus branickii, which grows up to 38mm in size. It can form dense swarms at 100–200m depth during the day, but the populations then disperse and rise to the upper waters during the night. These nocturnal migrations connect the pelagic surface waters with the profundal zone in lake ecosystems, and are thought to be an adaptation towards avoiding visual predators, especially pelagic fish, during the day, while accessing food in the surface waters under the cover of nightfall. […] Although certain fish species remain within specific zones of the lake, there are others that swim among zones and access multiple habitats. […] This type of fish migration means that the different parts of the lake ecosystem are ecologically connected. For many fish species, moving between habitats extends all the way to the ocean. Anadromous fish migrate out of the lake and swim to the sea each year, and although this movement comes at considerable energetic cost, it has the advantage of access to rich marine food sources, while allowing the young to be raised in the freshwater environment with less exposure to predators. […] With the converse migration pattern, catadromous fish live in freshwater and spawn in the sea.”

“Invasive species that are the most successful and do the most damage once they enter a lake have a number of features in common: fast growth rates, broad tolerances, the capacity to thrive under high population densities, and an ability to disperse and colonize that is enhanced by human activities. Zebra mussels (Dreissena polymorpha) get top marks in each of these categories, and they have proven to be a troublesome invader in many parts of the world. […] A single Zebra mussel can produce up to one million eggs over the course of a spawning season, and these hatch into readily dispersed larvae (‘veligers’), that are free-swimming for up to a month. The adults can achieve densities up to hundreds of thousands per square metre, and their prolific growth within water pipes has been a serious problem for the cooling systems of nuclear and thermal power stations, and for the intake pipes of drinking water plants. A single Zebra mussel can filter a litre a day, and they have the capacity to completely strip the water of bacteria and protists. In Lake Erie, the water clarity doubled and diatoms declined by 80–90 per cent soon after the invasion of Zebra mussels, with a concomitant decline in zooplankton, and potential impacts on planktivorous fish. The invasion of this species can shift a lake from dominance of the pelagic to the benthic food web, but at the expense of native unionid clams on the bottom that can become smothered in Zebra mussels. Their efficient filtering capacity may also cause a regime shift in primary producers, from turbid waters with high concentrations of phytoplankton to a clearer lake ecosystem state in which benthic water plants dominate.”

“One of the many distinguishing features of H2O is its unusually high dielectric constant, meaning that it is a strongly polar solvent with positive and negative charges that can stabilize ions brought into solution. This dielectric property results from the asymmetrical electron cloud over the molecule […] and it gives liquid water the ability to leach minerals from rocks and soils as it passes through the ground, and to maintain these salts in solution, even at high concentrations. Collectively, these dissolved minerals produce the salinity of the water […] Sea water is around 35ppt, and its salinity is mainly due to the positively charged ions sodium (Na+), potassium (K+), magnesium (Mg2+), and calcium (Ca2+), and the negatively charged ions chloride (Cl-), sulfate (SO42-), and carbonate (CO32-). These solutes, collectively called the ‘major ions’, conduct electrons, and therefore a simple way to track salinity is to measure the electrical conductance of the water between two electrodes set a known distance apart. Lake and ocean scientists now routinely take profiles of salinity and temperature with a CTD: a submersible instrument that records conductance, temperature, and depth many times per second as it is lowered on a rope or wire down the water column. Conductance is measured in Siemens (or microSiemens (µS), given the low salt concentrations in freshwater lakes), and adjusted to a standard temperature of 25°C to give specific conductivity in µS/cm. All freshwater lakes contain dissolved minerals, with specific conductivities in the range 50–500µS/cm, while salt water lakes have values that can exceed sea water (about 50,000µS/cm), and are the habitats for extreme microbes”.
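
The temperature adjustment mentioned here is often done with a simple linear compensation of roughly 2 per cent per °C; actual instruments may apply more elaborate corrections, so the coefficient below is only a commonly used default and the example reading is invented:

def specific_conductivity(measured_uS, temperature_c, alpha=0.02):
    # convert a raw conductance reading to its equivalent at the 25°C reference temperature
    return measured_uS / (1 + alpha * (temperature_c - 25.0))

# e.g. a CTD reading of 180 µS/cm taken at 8°C in a temperate freshwater lake
print(round(specific_conductivity(180.0, 8.0), 1), "µS/cm at 25°C")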

“The World Register of Dams currently lists 58,519 ‘large dams’, defined as those with a dam wall of 15m or higher; these collectively store 16,120km3 of water, equivalent to 213 years of flow of Niagara Falls on the USA–Canada border. […] Around a hundred large dam projects are in advanced planning or construction in Africa […]. More than 300 dams are planned or under construction in the Amazon Basin of South America […]. Reservoirs have a number of distinguishing features relative to natural lakes. First, the shape (‘morphometry’) of their basins is rarely circular or oval, but instead is often dendritic, with a tree-like main stem and branches ramifying out into the submerged river valleys. Second, reservoirs typically have a high catchment area to lake area ratio, again reflecting their riverine origins. For natural lakes, this ratio is relatively low […] These proportionately large catchments mean that reservoirs have short water residence times, and water quality is much better than might be the case in the absence of this rapid flushing. Nonetheless, noxious algal blooms can develop and accumulate in isolated bays and side-arms, and downstream next to the dam itself. Reservoirs typically experience water level fluctuations that are much larger and more rapid than in natural lakes, and this limits the development of littoral plants and animals. Another distinguishing feature of reservoirs is that they often show a longitudinal gradient of conditions. Upstream, the river section contains water that is flowing, turbulent, and well mixed; this then passes through a transition zone into the lake section up to the dam, which is often the deepest part of the lake and may be stratified and clearer due to decantation of land-derived particles. In some reservoirs, the water outflow is situated near the base of the dam within the hypolimnion, and this reduces the extent of oxygen depletion and nutrient build-up, while also providing cool water for fish and other animal communities below the dam. There is increasing attention being given to careful regulation of the timing and magnitude of dam outflows to maintain these downstream ecosystems. […] The downstream effects of dams continue out into the sea, with the retention of sediments and nutrients in the reservoir leaving less available for export to marine food webs. This reduction can also lead to changes in shorelines, with a retreat of the coastal delta and intrusion of seawater because natural erosion processes can no longer be offset by resupply of sediments from upstream.”
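
The ‘213 years of Niagara Falls’ comparison can be checked with a couple of lines, taking roughly 2,400 cubic metres per second as an often-quoted average flow over the Falls (the figure is approximate):

seconds_per_year = 365.25 * 24 * 3600
niagara_flow_m3_per_s = 2400.0                                    # approximate average flow over the Falls
annual_flow_km3 = niagara_flow_m3_per_s * seconds_per_year / 1e9  # about 76 km3 per year
print(f"Niagara Falls carries roughly {annual_flow_km3:.0f} km3 per year")
print(f"16,120 km3 of reservoir storage is about {16120 / annual_flow_km3:.0f} years of that flow")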

“One of the most serious threats facing lakes throughout the world is the proliferation of algae and water plants caused by eutrophication, the overfertilization of waters with nutrients from human activities. […] Nutrient enrichment occurs both from ‘point sources’ of effluent discharged via pipes into the receiving waters, and ‘nonpoint sources’ such as the runoff from roads and parking areas, agricultural lands, septic tank drainage fields, and terrain cleared of its nutrient- and water-absorbing vegetation. By the 1970s, even many of the world’s larger lakes had begun to show worrying signs of deterioration from these sources of increasing enrichment. […] A sharp drop in water clarity is often among the first signs of eutrophication, although in forested areas this effect may be masked for many years by the greater absorption of light by the coloured organic materials that are dissolved within the lake water. A drop in oxygen levels in the bottom waters during stratification is another telltale indicator of eutrophication, with the eventual fall to oxygen-free (anoxic) conditions in these lower strata of the lake. However, the most striking impact with greatest effect on ecosystem services is the production of harmful algal blooms (HABs), specifically by cyanobacteria. In eutrophic, temperate latitude waters, four genera of bloom-forming cyanobacteria are the usual offenders […]. These may occur alone or in combination, and although each has its own idiosyncratic size, shape, and lifestyle, they have a number of impressive biological features in common. First and foremost, their cells are typically full of hydrophobic protein cases that exclude water and trap gases. These honeycombs of gas-filled chambers, called ‘gas vesicles’, reduce the density of the cells, allowing them to float up to the surface where there is light available for growth. Put a drop of water from an algal bloom under a microscope and it will be immediately apparent that the individual cells are extremely small, and that the bloom itself is composed of billions of cells per litre of lake water.”

“During the day, the [algal] cells capture sunlight and produce sugars by photosynthesis; this increases their density, eventually to the point where they are heavier than the surrounding water and sink to more nutrient-rich conditions at depth in the water column or at the sediment surface. These sugars are depleted by cellular respiration, and this loss of ballast eventually results in cells becoming less dense than water and floating again towards the surface. This alternation of sinking and floating can result in large fluctuations in surface blooms over the twenty-four-hour cycle. The accumulation of bloom-forming cyanobacteria at the surface gives rise to surface scums that then can be blown into bays and washed up onto beaches. These dense populations of colonies in the water column, and especially at the surface, can shade out bottom-dwelling water plants, as well as greatly reduce the amount of light for other phytoplankton species. The resultant ‘cyanobacterial dominance’ and loss of algal species diversity has negative implications for the aquatic food web […] This negative impact on the food web may be compounded by the final collapse of the bloom and its decomposition, resulting in a major drawdown of oxygen. […] Bloom-forming cyanobacteria are especially troublesome for the management of drinking water supplies. First, there is the overproduction of biomass, which results in a massive load of algal particles that can exceed the filtration capacity of a water treatment plant […]. Second, there is an impact on the taste of the water. […] The third and most serious impact of cyanobacteria is that some of their secondary compounds are highly toxic. […] phosphorus is the key nutrient limiting bloom development, and efforts to preserve and rehabilitate freshwaters should pay specific attention to controlling the input of phosphorus via point and nonpoint discharges to lakes.”

Ultramicrobacteria.
The viral shunt in marine foodwebs.
Proteobacteria. Alphaproteobacteria. Betaproteobacteria. Gammaproteobacteria.
Mixotroph.
Carbon cycle. Nitrogen cycle. Ammonification. Anammox. Comammox.
Methanotroph.
Phosphorus cycle.
Littoral zone. Limnetic zone. Profundal zone. Benthic zone. Benthos.
Phytoplankton. Diatom. Picoeukaryote. Flagellates. Cyanobacteria.
Trophic state (-index).
Amphipoda. Rotifer. Cladocera. Copepod. Daphnia.
Redfield ratio.
δ15N.
Thermistor.
Extremophile. Halophile. Psychrophile. Acidophile.
Caspian Sea. Endorheic basin. Mono Lake.
Alpine lake.
Meromictic lake.
Subglacial lake. Lake Vostok.
Thermus aquaticus. Taq polymerase.
Lake Monoun.
Microcystin. Anatoxin-a.

February 2, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Engineering, Zoology | Leave a comment

Rivers (II)

Some more observations from the book and related links below.

“By almost every measure, the Amazon is the greatest of all the large rivers. Encompassing more than 7 million square kilometres, its drainage basin is the largest in the world and makes up 5% of the global land surface. The river accounts for nearly one-fifth of all the river water discharged into the oceans. The flow is so great that water from the Amazon can still be identified 125 miles out in the Atlantic […] The Amazon has some 1,100 tributaries, and 7 of these are more than 1,600 kilometres long. […] In the lowlands, most Amazonian rivers have extensive floodplains studded with thousands of shallow lakes. Up to one-quarter of the entire Amazon Basin is periodically flooded, and these lakes become progressively connected with each other as the water levels rise.”

“To hydrologists, the term ‘flood’ refers to a river’s annual peak discharge period, whether the water inundates the surrounding landscape or not. In more common parlance, however, a flood is synonymous with the river overflowing its banks […] Rivers flood in the natural course of events. This often occurs on the floodplain, as the name implies, but flooding can affect almost all of the length of the river. Extreme weather, particularly heavy or protracted rainfall, is the most frequent cause of flooding. The melting of snow and ice is another common cause. […] River floods are one of the most common natural hazards affecting human society, frequently causing social disruption, material damage, and loss of life. […] Most floods have a seasonal element in their occurrence […] It is a general rule that the magnitude of a flood is inversely related to its frequency […] Many of the less predictable causes of flooding occur after a valley has been blocked by a natural dam as a result of a landslide, glacier, or lava flow. Natural dams may cause upstream flooding as the blocked river forms a lake and downstream flooding as a result of failure of the dam.”
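
The inverse relation between flood magnitude and frequency that the author mentions is usually made concrete via return periods. Here is a minimal sketch using the standard Weibull plotting position T = (n + 1)/m on a made-up series of annual peak discharges (none of these numbers are from the book):

```python
# Illustrative sketch (not from the book): estimating flood return periods
# from a series of annual peak discharges using the Weibull plotting
# position T = (n + 1) / m, where m is the rank of the flood (largest = 1).
# The discharge values below are made-up numbers for illustration only.

annual_peaks = [310, 455, 290, 620, 380, 510, 270, 845, 400, 350]  # m^3/s, hypothetical

n = len(annual_peaks)
ranked = sorted(annual_peaks, reverse=True)

for m, q in enumerate(ranked, start=1):
    return_period = (n + 1) / m          # average years between floods >= q
    exceedance_prob = 1 / return_period  # chance of being equalled or exceeded in any year
    print(f"Q = {q:4d} m^3/s  ->  T ~ {return_period:4.1f} yr, annual exceedance ~ {exceedance_prob:.2f}")
```

The largest discharges end up with the longest estimated return periods, which is the magnitude–frequency rule in quantitative form.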

“The Tigris-Euphrates, Nile, and Indus are all large, exotic river systems, but in other respects they are quite different. The Nile has a relatively gentle gradient in Egypt and a channel that has experienced only small changes over the last few thousand years, by meander cut-off and a minor shift eastwards. The river usually flooded in a regular and predictable way. The stability and long continuity of the Egyptian civilization may be a reflection of its river’s relative stability. The steeper channel of the Indus, by contrast, has experienced major avulsions over great distances on the lower Indus Plain and some very large floods caused by the failure of glacier ice dams in the Himalayan mountains. Likely explanations for the abandonment of many Harappan cities […] take account of damage caused by major floods and/or the disruption caused by channel avulsion leading to a loss of water supply. Channel avulsion was also a problem for the Sumerian civilization on the alluvial plain called Mesopotamia […] known for the rise and fall of its numerous city states. Most of these cities were situated along the Euphrates River, probably because it was more easily controlled for irrigation purposes than the Tigris, which flowed faster and carried much more water. However, the Euphrates was an anastomosing river with multiple channels that diverge and rejoin. Over time, individual branch channels ceased to flow as others formed, and settlements located on these channels inevitably declined and were abandoned as their water supply ran dry, while others expanded as their channels carried greater amounts of water.”

“During the colonization of the Americas in the mid-18th century and the imperial expansion into Africa and Asia in the late 19th century, rivers were commonly used as boundaries because they were the first, and frequently the only, features mapped by European explorers. The diplomats in Europe who negotiated the allocation of colonial territories claimed by rival powers knew little of the places they were carving up. Often, their limited knowledge was based solely on maps that showed few details, rivers being the only distinct physical features marked. Today, many international river boundaries remain as legacies of those historical decisions based on poor geographical knowledge because states have been reluctant to alter their territorial boundaries from original delimitation agreements. […] no less than three-quarters of the world’s international boundaries follow rivers for at least part of their course. […] approximately 60% of the world’s fresh water is drawn from rivers shared by more than one country.”

“The sediments carried in rivers, laid down over many years, represent a record of the changes that have occurred in the drainage basin through the ages. Analysis of these sediments is one way in which physical geographers can interpret the historical development of landscapes. They can study the physical and chemical characteristics of the sediments themselves and/or the biological remains they contain, such as pollen or spores. […] The simple rate at which material is deposited by a river can be a good reflection of how conditions have changed in the drainage basin. […] Pollen from surrounding plants is often found in abundance in fluvial sediments, and the analysis of pollen can yield a great deal of information about past conditions in an area. […] Very long sediment cores taken from lakes and swamps enable us to reconstruct changes in vegetation over very long time periods, in some cases over a million years […] Because climate is a strong determinant of vegetation, pollen analysis has also proved to be an important method for tracing changes in past climates.”

“The energy in flowing and falling water has been harnessed to perform work by turning water-wheels for more than 2,000 years. The moving water turns a large wheel and a shaft connected to the wheel axle transmits the power from the water through a system of gears and cogs to work machinery, such as a millstone to grind corn. […] The early medieval watermill was able to do the work of between 30 and 60 people, and by the end of the 10th century in Europe, waterwheels were commonly used in a wide range of industries, including powering forge hammers, oil and silk mills, sugar-cane crushers, ore-crushing mills, breaking up bark in tanning mills, pounding leather, and grinding stones. Nonetheless, most were still used for grinding grains for preparation into various types of food and drink. The Domesday Book, a survey prepared in England in AD 1086, lists 6,082 watermills, although this is probably a conservative estimate because many mills were not recorded in the far north of the country. By 1300, this number had risen to exceed 10,000. […] Medieval watermills typically powered their wheels by using a dam or weir to concentrate the falling water and pond a reserve supply. These modifications to rivers became increasingly common all over Europe, and by the end of the Middle Ages, in the mid-15th century, watermills were in use on a huge number of rivers and streams. The importance of water power continued into the Industrial Revolution […]. The early textile factories were built to produce cloth using machines driven by waterwheels, so they were often called mills. […] [Today,] about one-third of all countries rely on hydropower for more than half their electricity. Globally, hydropower provides about 20% of the world’s total electricity supply.”
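
As an aside, the power available from falling water follows the standard relation P = ρgQHη. A small sketch with hypothetical flow, head, and efficiency values (not taken from the book):

```python
# Minimal sketch of the standard hydropower formula P = rho * g * Q * H * eta
# (not taken from the book). The flow rate, head, and efficiency below are
# hypothetical values chosen purely for illustration.

rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
Q = 250.0      # flow rate through the turbines, m^3/s (hypothetical)
H = 80.0       # head (height the water falls), m (hypothetical)
eta = 0.90     # overall turbine/generator efficiency (hypothetical)

P_watts = rho * g * Q * H * eta
print(f"Power output: {P_watts / 1e6:.0f} MW")   # ~177 MW for these numbers
```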

“Deliberate manipulation of river channels through engineering works, including dam construction, diversion, channelization, and culverting, […] has a long history. […] In Europe today, almost 80% of the total discharge of the continent’s major rivers is affected by measures designed to regulate flow, whether for drinking water supply, hydroelectric power generation, flood control, or any other reason. The proportion in individual countries is higher still. About 90% of rivers in the UK are regulated as a result of these activities, while in the Netherlands this percentage is close to 100. By contrast, some of the largest rivers on other continents, including the Amazon and the Congo, are hardly manipulated at all. […] Direct and intentional modifications to rivers are complemented by the impacts of land use and land use changes which frequently result in the alteration of rivers as an unintended side effect. Deforestation, afforestation, land drainage, agriculture, and the use of fire have all had significant impacts, with perhaps the most extreme effects produced by construction activity and urbanization. […] The major methods employed in river regulation are the construction of large dams […], the building of run-of-river impoundments such as weirs and locks, and by channelization, a term that covers a range of river engineering works including widening, deepening, straightening, and the stabilization of banks. […] Many aspects of a dynamic river channel and its associated ecosystems are mutually adjusting, so a human activity in a landscape that affects the supply of water or sediment is likely to set off a complex cascade of other alterations.”

“The methods of storage (in reservoirs) and distribution (by canal) have not changed fundamentally since the earliest river irrigation schemes, with the exception of some contemporary projects’ use of pumps to distribute water over greater distances. Nevertheless, many irrigation canals still harness the force of gravity. Half the world’s large dams (defined as being 15 metres or higher) were built exclusively or primarily for irrigation, and about one-third of the world’s irrigated cropland relies on reservoir water. In several countries, including such populous nations as India and China, more than 50% of arable land is irrigated by river water supplied from dams. […] Sadly, many irrigation schemes are not well managed and a number of environmental problems are frequently experienced as a result, both on-site and off-site. In many large networks of irrigation canals, less than half of the water diverted from a river or reservoir actually benefits crops. A lot of water seeps away through unlined canals or evaporates before reaching the fields. Some also runs off the fields or infiltrates through the soil, unused by plants, because farmers apply too much water or at the wrong time. Much of this water seeps back into nearby streams or joins underground aquifers, so can be used again, but the quality of water may deteriorate if it picks up salts, fertilizers, or pesticides. Excessive applications of irrigation water often result in rising water tables beneath fields, causing salinization and waterlogging. These processes reduce crop yields on irrigation schemes all over the world.”

“[Deforestation can contribute] to the degradation of aquatic habitats in numerous ways. The loss of trees along river banks can result in changes in the species found in the river because fewer trees means a decline in plant matter and insects falling from them, items eaten by some fish. Fewer trees on river banks also results in less shade. More sunlight reaching the river results in warmer water and the enhanced growth of algae. A change in species can occur as fish that feed on falling food are edged out by those able to feed on algae. Deforestation also typically results in more runoff and more soil erosion. This sediment may cover spawning grounds, leading to lower reproduction rates. […] Grazing and trampling by livestock reduces vegetation cover and causes the compaction of soil, which reduces its infiltration capacity. As rainwater passes over or through the soil in areas of intensive agriculture, it picks up residues from pesticides and fertilizers and transports them to rivers. In this way, agriculture has become a leading source of river pollution in certain parts of the world. Concentrations of nitrates and phosphates, derived from fertilizers, have risen notably in many rivers in Europe and North America since the 1950s and have led to a range of […] problems encompassed under the term ‘eutrophication’ – the raising of biological productivity caused by nutrient enrichment. […] In slow-moving rivers […] the growth of algae reduces light penetration and depletes the oxygen in the water, sometimes causing fish kills.”

“One of the most profound ways in which people alter rivers is by damming them. Obstructing a river and controlling its flow in this way brings about a raft of changes. A dam traps sediments and nutrients, alters the river’s temperature and chemistry, and affects the processes of erosion and deposition by which the river sculpts the landscape. Dams create more uniform flow in rivers, usually by reducing peak flows and increasing minimum flows. Since the natural variation in flow is important for river ecosystems and their biodiversity, when dams even out flows the result is commonly fewer fish of fewer species. […] the past 50 years or so has seen a marked escalation in the rate and scale of construction of dams all over the world […]. At the beginning of the 21st century, there were about 800,000 dams worldwide […] In some large river systems, the capacity of dams is sufficient to hold more than the entire annual discharge of the river. […] Globally, the world’s major reservoirs are thought to control about 15% of the runoff from the land. The volume of water trapped worldwide in reservoirs of all sizes is no less than five times the total global annual river flow […] Downstream of a reservoir, the hydrological regime of a river is modified. Discharge, velocity, water quality, and thermal characteristics are all affected, leading to changes in the channel and its landscape, plants, and animals, both on the river itself and in deltas, estuaries, and offshore. By slowing the flow of river water, a dam acts as a trap for sediment and hence reduces loads in the river downstream. As a result, the flow downstream of the dam is highly erosive. A relative lack of silt arriving at a river’s delta can result in more coastal erosion and the intrusion of seawater that brings salt into delta ecosystems. […] The dam-barrier effect on migratory fish and their access to spawning grounds has been recognized in Europe since medieval times.”

“One of the most important effects cities have on rivers is the way in which urbanization affects flood runoff. Large areas of cities are typically impermeable, being covered by concrete, stone, tarmac, and bitumen. This tends to increase the amount of runoff produced in urban areas, an effect exacerbated by networks of storm drains and sewers. This water carries relatively little sediment (again, because soil surfaces have been covered by impermeable materials), so when it reaches a river channel it typically causes erosion and widening. Larger and more frequent floods are another outcome of the increase in runoff generated by urban areas. […] It […] seems very likely that efforts to manage the flood hazard on the Mississippi have contributed to an increased risk of damage from tropical storms on the Gulf of Mexico coast. The levées built along the river have contributed to the loss of coastal wetlands, starving them of sediment and fresh water, thereby reducing their dampening effect on storm surge levels. This probably enhanced the damage from Hurricane Katrina which struck the city of New Orleans in 2005.”
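
A common first-pass way of quantifying what paving a catchment does to peak flow is the so-called rational method, Q = C·i·A. The method and all the numbers below are my own illustration, not something taken from the book:

```python
# Illustrative sketch of the 'rational method' Q = C * i * A, a standard
# first-pass estimate of peak storm runoff (not from the book). It shows why
# paving a catchment (raising the runoff coefficient C) increases peak flow.
# All numbers are hypothetical.

def peak_runoff(C, i_mm_per_hr, area_km2):
    """Peak discharge in m^3/s for runoff coefficient C, rainfall intensity
    i (mm/h), and catchment area (km^2)."""
    i_m_per_s = i_mm_per_hr / 1000 / 3600     # mm/h -> m/s
    area_m2 = area_km2 * 1e6                  # km^2 -> m^2
    return C * i_m_per_s * area_m2

rain = 20.0   # mm/h design storm (hypothetical)
area = 5.0    # km^2 catchment (hypothetical)

print(f"Rural catchment (C=0.2)    : {peak_runoff(0.2, rain, area):.1f} m^3/s")
print(f"Urbanized catchment (C=0.8): {peak_runoff(0.8, rain, area):.1f} m^3/s")
```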

Links:

Onyx River.
Yangtze. Yangtze floods.
Missoula floods.
Murray River.
Ganges.
Thalweg.
Southeastern Anatolia Project.
Water conflict.
Hydropower.
Fulling mill.
Maritime transport.
Danube.
Lock (water navigation).
Hydrometry.
Yellow River.
Aswan High Dam. Warragamba Dam. Three Gorges Dam.
Onchocerciasis.
River restoration.

January 16, 2018 Posted by | Biology, Books, Ecology, Engineering, Geography, Geology, History | Leave a comment

Civil engineering (II)

Some more quotes and links:

“Major earthquakes occur every year in different parts of the world. The various continents that make up the surface of the Earth are moving slowly relative to each other. The rough boundaries between the tectonic plates try to resist this relative motion but eventually the energy stored in the interface (or geological fault) becomes too big to resist and slip occurs, releasing the energy. The energy travels as a wave through the crust of the Earth, shaking the ground as it passes. The speed at which the wave travels depends on the stiffness and density of the material through which it is passing. Topographic effects may concentrate the energy of the shaking. Mexico City sits on the bed of a former lake, surrounded by hills. Once the energy reaches this bowl-like location it becomes trapped and causes much more damage than would be experienced if the city were sitting on a flat plain without the surrounding mountains. Designing a building to withstand earthquake shaking is possible, provided we have some idea about the nature and magnitude and geological origin of the loadings. […] Heavy mud or tile roofs on flimsy timber walls are a disaster – the mass of the roof sways from side to side as it picks up energy from the shaking ground and, in collapsing, flattens the occupants. Provision of some diagonal bracing to prevent the structure from deforming when it is shaken can be straightforward. Shops like to have open spaces for ground floor display areas. There are often post-earthquake pictures of buildings which have lost a storey as this unbraced ground floor structure collapsed. […] Earthquakes in developing countries tend to attract particular coverage. The extent of the damage caused is high because the enforcement of design codes (if they exist) is poor. […] The majority of the damage in Haiti was the result of poor construction and the total lack of any building code requirements.”
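
The dependence of wave speed on stiffness and density that the author mentions is roughly v = √(stiffness/density). A rough sketch with order-of-magnitude material properties (my values, not the book’s) shows why stiff rock transmits shaking much faster than soft sediment:

```python
# Quick sketch (not from the book) of the relation the author alludes to:
# an elastic wave travels at roughly v = sqrt(stiffness / density). The
# moduli and densities below are rough order-of-magnitude values, used only
# to show why stiff rock transmits shaking faster than soft soil.

import math

materials = {
    # name: (shear modulus in Pa, density in kg/m^3) -- approximate values
    "granite (stiff rock)": (30e9, 2700),
    "stiff clay":           (100e6, 1900),
    "soft lake sediment":   (10e6, 1600),
}

for name, (G, rho) in materials.items():
    v_s = math.sqrt(G / rho)   # shear-wave speed
    print(f"{name:22s}: shear-wave speed ~ {v_s:5.0f} m/s")
```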

“[A]n aircraft is a large structure, and the structural design is subject to the same laws of equilibrium and material behaviour as any structure which is destined never to leave the ground. […] The A380 is an enormous structure, some 25 m high, 73 m long and with a wingspan of about 80 m […]. For comparison, St Paul’s Cathedral in London is 73 m wide at the transept; and the top of the inner dome, visible from inside the cathedral, is about 65 m above the floor of the nave. […] The rules of structural mechanics that govern the design of aircraft structures are no different from those that govern the design of structures that are intended to remain on the ground. In the mid 20th century many aircraft and civil structural engineers would not have recognized any serious intellectual boundary between their activities. The aerodynamic design of an aircraft ensures smooth flow of air over the structure to reduce resistance and provide lift. Bridges in exposed places are not in need of lift but can benefit from reduced resistance to air flow resulting from the use of continuous hollow sections (box girders) rather than trusses to form the deck. The stresses can also flow more smoothly within the box, and the steel be used more efficiently. Testing of potential box girder shapes in wind tunnels helps to check the influence of the presence of the ground or water not far below the deck on the character of the wind flow.”

“Engineering is concerned with finding solutions to problems. The initial problems faced by the engineer relate to the identification of the set of functional criteria which truly govern the design and which will be generated by the client or the promoter of the project. […] The more forcefully the criteria are stated the less freedom the design engineer will have in the search for an appropriate solution. Design is the translation of ideas into achievement. […] The designer starts with (or has access to) a mental store of solutions previously adopted for related problems and then seeks to compromise as necessary in order to find the optimum solution satisfying multiple criteria. The design process will often involve iteration of concept and technology and the investigation of radically different solutions and may also require consultation with the client concerning the possibility of modification of some of the imposed functional criteria if the problem has been too tightly defined. […] The term technology is being used here to represent that knowledge and those techniques which will be necessary in order to realize the concept; recognizing that a concept which has no appreciation of the technologies available for construction may require the development of new technologies in order that it may be realized. Civil engineering design continues through the realization of the project by the constructor or contractor. […] The process of design extends to the eventual assessment of the performance of the completed project as perceived by the client or user (who may not have been party to the original problem definition).”

“An arch or vault curved only in one direction transmits loads by means of forces developed within the thickness of the structure which then push outwards at the boundaries. A shell structure is a generalization of such a vault which is curved in more than one direction. An intact eggshell is very stiff under any loading applied orthogonally (at right angles) to the shell. If the eggshell is broken it becomes very flexible and to stiffen it again restraint is required along the free edge to replace the missing shell. The techniques of prestressing concrete permit the creation of very exciting and daring shell structures with extraordinarily small thickness but the curvatures of the shells and the shapes of the edges dictate the support requirements.”

“In the 19th century it was quicker to travel from Rome to Ancona by sea round the southern tip of the boot of Italy (a distance of at least 2000 km) than to travel overland, a distance of some 200 km as the crow flies. Land-based means of transport require infrastructure that must be planned and constructed and then maintained. Even today water transport is used on a large scale for bulky or heavy items for which speed is not necessary.”

“High speed rail works well (economically) in areas such as Europe and Japan where there is adequate infrastructure in the destination cities for access to and from the railway stations. In parts of the world – such as much of the USA – where the distances are much greater, population densities lower, railway networks much less developed, and local transport in cities much less coordinated (and the motor car has dominated for far longer) the economic case for high speed rail is harder to make. The most successful schemes for high speed rail have involved construction of new routes with dedicated track for the high speed trains with straighter alignments, smoother curves, and gentler gradients than conventional railways – and consequent reduced scope for delays resulting from mixing of high speed and low speed trains on the same track”.

“The Millennium Bridge is a suspension bridge with a very low sag-to-span ratio which lends itself very readily to sideways oscillation. There are plenty of rather bouncy suspension footbridges around the world but the modes of vibration are predominantly those in the plane of the bridge, involving vertical movements. Modes which involve lateral movement and twisting of the deck are always there but being out-of-plane may be overlooked. The more flexible the bridge in any mode of deformation, the more movement there is when people walk across. There is a tendency for people to vary their pace to match the movements of the bridge. Such an involuntary feedback mechanism is guaranteed to lead to resonance of the structure and continued build-up of movements. There will usually be some structural limitation on the magnitude of the oscillations – as the geometry of the bridge changes so the natural frequency will change subtly – but it can still be a bit alarming for the user. […] The Millennium Bridge was stabilized (retrofitted) by the addition of restraining members and additional damping mechanisms to prevent growth of oscillation and to move the natural frequency of this mode of vibration away from the likely frequencies of human footfall. The revised design […] ensured that dynamic response would be acceptable for crowd loading up to two people per square metre. At this density walking becomes difficult so it is seen as a conservative criterion.”
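
For a simple mass–spring idealization the natural frequency is f = √(k/m)/2π, which is the quantity a retrofit tries to shift (alongside adding damping). The sketch below is hypothetical; the modal mass and stiffness are made-up numbers, not properties of the real bridge:

```python
# Illustrative sketch (not from the book): for a single-degree-of-freedom
# idealization, the natural frequency is f = sqrt(k/m) / (2*pi). Lateral
# footfall forcing is roughly 1 Hz (about half the ~2 Hz pacing rate), so a
# lightly damped lateral mode near 1 Hz is vulnerable to the feedback the
# author describes. Modal mass and stiffness below are hypothetical.

import math

def natural_frequency(k, m):
    """Natural frequency (Hz) of a mass-spring system."""
    return math.sqrt(k / m) / (2 * math.pi)

m = 1.3e5       # effective modal mass, kg (hypothetical)
k = 5.0e6       # effective lateral stiffness, N/m (hypothetical)
f = natural_frequency(k, m)
print(f"lateral mode ~ {f:.2f} Hz vs footfall forcing ~ 1 Hz")
# Extra damping and restraint both limit the build-up of resonant motion and
# nudge the natural frequency away from the footfall band.
```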

“The development of appropriately safe systems requires that […] parallel control systems should be truly independent so that they are not likely to fail simultaneously. Robustness is thus about ensuring that safety can be maintained even when some elements of the system cease to operate. […] There is a human element in all systems, providing some overall control and an ability to react in critical circumstances. The human intervention is particularly important where all electronic or computer control systems are eliminated and the clock is ticking inexorably towards disaster. Although ultimately whenever a structural failure occurs there is some purely mechanical explanation – some element of the structure was overloaded because some mode of response had been overlooked – there is often a significant human factor which must be considered. We may think that we fully understand the mechanical operation, but may neglect to ensure that the human elements are properly controlled. A requirement for robustness implies both that the damage consequent on the removal of a single element of the structure or system should not be disproportionate (mechanical or structural robustness) but also that the project should not be jeopardized by human failure (organizational robustness). […] A successful civil engineering project is likely to have evident robustness in concept, technology, and realization. A concept which is unclear, a technology in its infancy, and components of realization which lack coherence will all contribute to potential disaster.”

“Tunnelling inevitably requires removal of ground from the face with a tendency for the ground above and ahead of the tunnel to fall into the gap. The success of the tunnelling operation can be expressed in terms of the volume loss: the proportion of the volume of the tunnel which is unintentionally excavated causing settlement at the ground surface – the smaller this figure the better. […] How can failure of the tunnel be avoided? One route to assurance will be to perform numerical analysis of the tunnel construction process with close simulations of all the stages of excavation and loading of the new structure. Computer analyses are popular because they appear simple to perform, even in three dimensions. However, such analyses can be no more reliable than the models of soil behaviour on which they are based and on the way in which the rugged detail of construction is translated into numerical instructions. […] Whatever one’s confidence in the numerical analysis it will obviously not be a bad idea to observe the tunnel while it is being constructed. Obvious things to observe include tunnel convergence – the change in the cross-section of the tunnel in different directions – and movements at the ground surface and existing buildings over the tunnel. […] observation is not of itself sufficient unless there is some structured strategy for dealing with the observations. At Heathrow […] the data were not interpreted until after the failure had occurred. It was then clear that significant and undesirable movements had been occurring and could have been detected at least two months before the failure.”
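
Volume loss is often turned into a surface-settlement estimate via an empirical Gaussian settlement trough. That is a standard idealization rather than necessarily the approach taken in the book, and all the input values below are hypothetical:

```python
# Sketch of a common empirical idealization (not necessarily the book's
# approach): surface settlement above a tunnel is often modelled as a
# Gaussian trough whose area equals the 'volume loss' fraction of the
# excavated tunnel area. All input values below are hypothetical.

import math

D = 6.0               # tunnel diameter, m (hypothetical)
z0 = 25.0             # tunnel axis depth, m (hypothetical)
volume_loss = 0.015   # 1.5% of excavated volume lost as ground movement
K = 0.5               # trough-width parameter, a typical value for clays (assumption)

tunnel_area = math.pi * D**2 / 4                 # m^2 per metre of tunnel
settlement_volume = volume_loss * tunnel_area    # m^3 per metre run
i = K * z0                                       # trough half-width parameter, m
s_max = settlement_volume / (math.sqrt(2 * math.pi) * i)   # maximum settlement, m

print(f"trough width parameter i ~ {i:.1f} m")
print(f"maximum surface settlement ~ {s_max * 1000:.0f} mm")
for x in (0, 5, 10, 20):   # horizontal distance from tunnel centre-line, m
    s = s_max * math.exp(-x**2 / (2 * i**2))
    print(f"  settlement at x = {x:2d} m: {s * 1000:4.1f} mm")
```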

“Fatigue is a term used to describe a failure which develops as a result of repeated loading – possibly over many thousands or millions of cycles. […] Fatigue cannot be avoided, and the rate of development of damage may not be easy to predict. It often requires careful techniques of inspection to identify the presence of incipient cracks which may eventually prove structurally devastating.”

“Some projects would clearly be regarded as failures – a dam bursts, a flood protection dyke is overtopped, a building or bridge collapses. In each case there is the possibility of a technical description of the processes leading to the failure – in the end the strength of the material in some location has been exceeded by the demands of the applied loads or the load carrying paths have been disrupted. But failure can also be financial or economic. Such failures are less evident: a project that costs considerably more than the original estimate has in some way failed to meet its expectations. A project that, once built, is quite unable to generate the revenue that was expected in order to justify the original capital outlay has also failed.”

1999 Jiji earthquake.
Taipei 101. Tuned mass damper.
Tacoma Narrows Bridge (1940). Brooklyn Bridge. Golden Gate Bridge.
Sydney Opera House. Jørn Utzon. Ove Arup. Christiani & Nielsen.
Bell Rock Lighthouse. Northern Lighthouse Board. Richard Henry Brunton.
Panama Canal. Culebra Cut. Gatun Lake. Panamax.
Great Western Railway.
Shinkansen. TGV.
Ronan Point.
New Austrian tunnelling method.
Crossrail.
Fukushima Daiichi nuclear disaster.
Turnkey project. Unit price contract.
Colin Buchanan.
Dongtan.

December 21, 2017 Posted by | Books, Economics, Engineering, Geology | Leave a comment

Civil engineering (I)

I have included some quotes from the first half of the book below, and some links related to the book’s coverage:

“Today, the term ‘civil engineering’ distinguishes the engineering of the provision of infrastructure from […] many other branches of engineering that have come into existence. It thus has a somewhat narrower scope now than it had in the 18th and early 19th centuries. There is a tendency to define it by exclusion: civil engineering is not mechanical engineering, not electrical engineering, not aeronautical engineering, not chemical engineering… […] Civil engineering today is seen as encompassing much of the infrastructure of modern society provided it does not move – roads, buildings, dams, tunnels, drains, airports (but not aeroplanes or air traffic control), railways (but not railway engines or signalling), power stations (but not turbines). The fuzzy definition of civil engineering as the engineering of infrastructure […] should make us recognize that there are no precise boundaries and that any practising engineer is likely to have to communicate across whatever boundaries appear to have been created. […] The boundary with science is also fuzzy. Engineering is concerned with the solution of problems now, and cannot necessarily wait for the underlying science to catch up. […] All engineering is concerned with finding solutions to problems for which there is rarely a single answer. Presented with an appropriate ‘solution-neutral problem definition’ the engineer needs to find ways of applying existing or emergent technologies to the solution of the problem.”

“[T]he behaviour of the soil or other materials that make up the ground in its natural state is rather important to engineers. However, although it can be guessed from exploratory probings and from knowledge of the local geological history, the exact nature of the ground can never be discovered before construction begins. By contrast, road embankments are formed of carefully prepared soils; and water-retaining dams may also be constructed from selected soils and rocks – these can be seen as ‘designer soils’. […] Soils are formed of mineral particles packed together with surrounding voids – the particles can never pack perfectly. […] The voids around the soil particles are filled with either air or water or a mixture of the two. In northern climes the ground is saturated with water for much of the time. For deformation of the soil to occur, any change in volume must be accompanied by movement of water through and out of the voids. Clay particles are small, the surrounding voids are small, and movement of water through these voids is slow – the permeability is said to be low. If a new load, such as a bridge deck or a tall building, is to be constructed, the ground will want to react to the new loads. A clayey soil will be unable to react instantly because of the low permeability and, as a result, there will be delayed deformations as the water is squeezed out of the clay ground and the clay slowly consolidates. The consolidation of a thick clay layer may take centuries to approach completion.”
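
The ‘centuries’ claim follows from Terzaghi’s classical consolidation theory, in which the time to reach a given degree of consolidation scales with the square of the drainage path length. A rough sketch, using a typical order-of-magnitude coefficient of consolidation (my assumption, not a figure from the book):

```python
# Rough sketch (not from the book) of why thick clay layers consolidate so
# slowly. In Terzaghi's classical theory the time to reach a given degree of
# consolidation scales as t = T_v * H_dr^2 / c_v, i.e. with the *square* of
# the drainage path length. The coefficient of consolidation below is a
# typical order-of-magnitude value, used for illustration only.

c_v = 1.0          # coefficient of consolidation, m^2/year (assumed typical clay)
T_v90 = 0.848      # time factor for ~90% consolidation (standard value)

for layer_thickness in (2.0, 10.0, 30.0):     # metres, drained top and bottom
    H_dr = layer_thickness / 2                # drainage path = half the thickness
    t90 = T_v90 * H_dr**2 / c_v
    print(f"{layer_thickness:4.0f} m clay layer: ~{t90:6.0f} years to 90% consolidation")
```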

“Rock (or stone) is a good construction material. Evidently there are different types of rock with different strengths and different abilities to resist the decay that is encouraged by sunshine, moisture, and frost, but rocks are generally strong, dimensionally stable materials: they do not shrink or twist with time. We might measure the strength of a type of rock in terms of the height of a column of that rock that will just cause the lowest layer of the rock to crush: on such a scale sandstone would have a strength of about 2 kilometres, good limestone about 4 kilometres. A solid pyramid 150 m high uses quite a small proportion of this available strength. […] Iron has been used for several millennia for elements such as bars and chain links which might be used in conjunction with other structural materials, particularly stone. Stone is very strong when compressed, or pushed, but not so strong in tension: when it is pulled cracks may open up. The provision of iron links between adjacent stone blocks can help to provide some tensile strength. […] Cast iron can be formed into many different shapes and is resistant to rust but is brittle – when it breaks it loses all its strength very suddenly. Wrought iron, a mixture of iron with a low proportion of carbon, is more ductile – it can be stretched without losing all its strength – and can be beaten or rolled (wrought) into simple shapes. Steel is a mixture of iron with a higher proportion of carbon than wrought iron and with other elements […] which provide particular mechanical benefits. Mild steel has a remarkable ductility – a tolerance of being stretched – which results from its chemical composition and which allows it to be rolled into sheets or extruded into chosen shapes without losing its strength and stiffness. There are limits on the ratio of the quantities of carbon and other elements to that of the iron itself in order to maintain these desirable properties for the mixture. […] Steel is very strong and stiff in tension or pulling: steel wire and steel cables are obviously very well suited for hanging loads.”
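
The ‘height of column’ measure of strength can be converted into a more familiar crushing stress via σ = ρgh. A back-of-envelope check using rough typical rock densities (my values, not the book’s):

```python
# Back-of-envelope check (not from the book) converting the author's
# 'height of column that just crushes its base' measure into a crushing
# stress via sigma = rho * g * h. The densities are rough typical values,
# so the stresses are order-of-magnitude only.

g = 9.81   # m/s^2

rocks = {
    # name: (column height in m from the text, assumed density in kg/m^3)
    "sandstone": (2000.0, 2300.0),
    "good limestone": (4000.0, 2500.0),
}

for name, (h, rho) in rocks.items():
    sigma = rho * g * h          # crushing stress, Pa
    print(f"{name:14s}: ~{sigma / 1e6:5.0f} MPa")

# A 150 m solid pyramid of limestone loads its base to only about
# rho * g * 150 ~ 3.7 MPa, a small fraction of the available strength,
# which is the author's point.
```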

“As concrete sets, the chemical reactions that turn a sloppy mixture of cement and water and stones into a rock-like solid produce a lot of heat. If a large volume of concrete is poured without any special precautions then, as it cools down, having solidified, it will shrink and crack. The Hoover Dam was built as a series of separate concrete columns of limited dimension through which pipes carrying cooling water were passed in order to control the temperature rise. […] Concrete is mixed as a heavy fluid with no strength until it starts to set. Embedding bars of a material such as steel, which is strong in tension, in the fluid concrete gives some tensile strength. Reinforced concrete is used today for huge amounts of construction throughout the world. When the amount of steel present in the concrete is substantial, additives are used to encourage the fresh concrete to flow through intricate spaces and form a good bond with the steel. For the steel to start to resist tensile loads it has to stretch a little; if the concrete around the steel also stretches it may crack. The concrete has little reliable tensile strength and is intended to protect the steel. The concrete can be used more efficiently if the steel reinforcement, in the form of cables or rods, is tensioned, either before the concrete has set or after the concrete has set but before it starts to carry its eventual live loads. The concrete is forced into compression by the stretched steel. […] Such prestressed concrete gives amazing possibilities for very slender and daring structures […] the concrete must be able to withstand the tension in the steel, whether or not the full working loads are being applied. For an arch bridge made from prestressed concrete, the prestress from the steel cables tries to lift up the concrete and reduce the span whereas the traffic loads on the bridge are trying to push it down and increase the span. The location and amount of the prestress has to be chosen to provide the optimum use of the available strength under all possible load combinations. The pressure vessels used to contain the central reactor of a nuclear power station provide a typical example of the application of prestressed concrete.”

“There are many civil engineering contributions required in the several elements of [a] power station […]. The electricity generation side of a nuclear power station is subject to exactly the same design constraints as any other power station. Pipework leading the steam and water through the plant has to be able to cope with severe temperature variations, rotating machinery requires foundations which not only have to be precisely aligned but also have to be able to tolerate the high frequency vibrations arising from the rotations. Residual small out-of-balance forces, transmitted to the foundation continuously over long periods, could degrade the stiffness of the ground. Every system has its resonant frequency at which applied cyclic loads will tend to be amplified, possibly uncontrollably, unless prevented by the damping properties of the foundation materials. Even if the rotating machinery is being operated well away from any resonant frequency under normal conditions, there will be start-up periods in which the frequency sweeps up from stationary, zero frequency, and so an undesirable resonance may be triggered on the way”.

“The material which we see so often on modern road surfaces, […] asphalt […], was introduced in the early 20th century. Binding together the surface layers of stones with bitumen or tar gave the running surface a better strength. Tar is a viscous material which deforms with time under load; ruts may form, particularly in hot weather. Special treatments can be used for the asphalt to reduce the surface noise made by tyres; porous asphalt can encourage drainage. On the other hand, a running surface that is more resistant to traffic loading can be provided with a concrete slab reinforced with a crisscross steel mesh to maintain its integrity between deliberately inserted construction joints, so that any cracking resulting from seasonal thermal contraction occurs at locations chosen by the engineer rather than randomly across the concrete slab. The initial costs of concrete road surfaces are higher than the asphalt alternatives but the full-life costs may be lower.”

“A good supply of fresh water is one essential element of civilized infrastructure; some control of the waste water from houses and industries is another. The two are, of course, not completely independent since one of the desirable requirements of a source of fresh water is that it should not have been contaminated with waste before it reaches its destination of consumption: hence the preference for long aqueducts or pipelines starting from natural springs, rather than taking water from rivers which were probably already contaminated by upstream conurbations. It is curious how often in history this lesson has had to be relearnt.”

“The object of controlled disposal is the same as for nuclear waste: to contain it and prevent any of the toxic constituents from finding their way into the food chain or into water supplies. Simply to remove everything that could possibly be contaminated and dump it to landfill seems the easy option, particularly if use can be made of abandoned quarries or other holes in the ground. But the quantities involved make this an unsustainable long-term proposition. Cities become surrounded with artificial hills of uncertain composition which are challenging to develop for industrial or residential purposes because decomposing waste often releases gases which may be combustible (and useful) or poisonous; because waste often contains toxic substances which have to be prevented from finding pathways to man either upwards to the air or sideways towards water supplies; because the properties of waste (whether or not decomposed or decomposing) are not easy to determine and probably not particularly desirable from an engineering point of view; and because developers much prefer greenfield sites to sites of uncertainty and contamination.”

“There are regularly more or less serious floods in different parts of the world. Some of these are simply the result of unusually high quantities of rainfall which overload the natural river channels, often exacerbated by changes in land use (such as the felling of areas of forest) which encourage more rapid runoff or impose a man-made canalization of the river (by building on flood plains into which the rising river would previously have been able to spill) […]. Some of the incidents are the result of unusual encroachments by the sea, a consequence of a combination of high tide and adverse wind and weather conditions. The potential for disastrous consequences is of course enhanced when both on-shore and off-shore circumstances combine. […] Folk memory for natural disasters tends to be quite short. If the interval between events is typically greater than, say, 5–10 years people may assume that such events are extraordinary and rare. They may suppose that building on the recently flooded plains will be safe for the foreseeable future.”

Links:

Civil engineering.
École Nationale des Ponts et Chaussées.
Institution of Civil Engineers.
Christopher Wren. John Smeaton. Thomas Telford. William Rankine.
Leaning Tower of Pisa.
Cruck. Trabeated system. Corbel. Voussoir. Flange. I-beam.
Hardwick Hall. Blackfriars Bridge. Forth Bridge. Sydney Harbour Bridge.
Gothic architecture.
Buckling.
Pozzolana. Concrete. Grout.
Gravity dam. Arch dam. Hoover Dam. Malpasset Dam.
Torness Nuclear Power Station.
Plastic. Carbon fiber reinforced polymer.
Roman roads. Via Appia.
Sanitation.
Aqueduct. Pont du Gard.
Charles Yelverton O’Connor. Goldfields Water Supply Scheme.
1854 Broad Street cholera outbreak. John Snow. Great Stink of 1858. Joseph Bazalgette.
Brent Spar.
Clywedog Reservoir.
Acqua alta.
North Sea flood of 1953. Hurricane Katrina.
Delta Works. Oosterscheldekering. Thames Barrier.
Groyne. Breakwater.

December 20, 2017 Posted by | Books, Economics, Engineering, Geology | Leave a comment

Nuclear Power (II)

This is my second and last post about the book. Some more links and quotes below.

“Many of the currently operating reactors were built in the late 1960s and 1970s. With a global hiatus on nuclear reactor construction following the Three Mile Island incident and the Chernobyl disaster, there is a dearth of nuclear power replacement capacity as the present fleet faces decommissioning. Nuclear power stations, like coal-, gas-, and oil-fired stations, produce heat to generate electricity and all require water for cooling. The US Geological Survey estimates that this use of water for cooling power stations accounts for over 3% of all water consumption. Most nuclear power plants are built close to the sea so that the ocean can be used as a heat dump. […] The need for such large quantities of water inhibits the use of nuclear power in arid regions of the world. […] The higher the operating temperature, the greater the water usage. […] [L]arge coal, gas and nuclear plants […] can consume millions of litres per hour”.

“A nuclear reactor is utilizing the strength of the force between nucleons while hydrocarbon burning is relying on the chemical bonding between molecules. Since the nuclear bonding is of the order of a million times stronger than the chemical bonding, the mass of hydrocarbon fuel necessary to produce a given amount of energy is about a million times greater than the equivalent mass of nuclear fuel. Thus, while a coal station might burn millions of tonnes of coal per year, a nuclear station with the same power output might consume a few tonnes.”
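
The ‘roughly a million times’ factor can be checked on the back of an envelope by comparing the ~200 MeV released per uranium-235 fission with the chemical energy of burning coal. The coal figure below is a rough typical value, not a number from the book:

```python
# Back-of-envelope comparison (not the book's numbers) of the energy released
# per kilogram by fissioning uranium-235 versus burning coal, illustrating
# the 'roughly a million times' factor the author cites.

eV = 1.602e-19          # joules per electronvolt
amu = 1.661e-27         # kilograms per atomic mass unit

# Fission of one U-235 nucleus releases roughly 200 MeV.
energy_per_fission = 200e6 * eV            # J
mass_u235_atom = 235 * amu                 # kg
e_fission_per_kg = energy_per_fission / mass_u235_atom

# Typical coal releases roughly 25-30 MJ of chemical energy per kg.
e_coal_per_kg = 29e6                       # J/kg (rough typical value)

print(f"U-235 fission  : ~{e_fission_per_kg:.1e} J/kg")
print(f"coal combustion: ~{e_coal_per_kg:.1e} J/kg")
print(f"ratio          : ~{e_fission_per_kg / e_coal_per_kg:.1e}")
```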

“There are a number of reasons why one might wish to reprocess the spent nuclear fuel. These include: to produce plutonium either for nuclear weapons or, increasingly, as a fuel-component for fast reactors; the recycling of all actinides for fast-breeder reactors, closing the nuclear fuel cycle, greatly increasing the energy extracted from natural uranium; the recycling of plutonium in order to produce mixed oxide fuels for thermal reactors; recovering enriched uranium from spent fuel to be recycled through thermal reactors; to extract expensive isotopes which are of value to medicine, agriculture, and industry. An integral part of this process is the management of the radioactive waste. Currently 40% of all nuclear fuel is obtained by reprocessing. […] The La Hague site is the largest reprocessing site in the world, with over half the global capacity at 1,700 tonnes of spent fuel per year. […] The world’s largest user of nuclear power, the USA, currently does not reprocess its fuel and hence produces [large] quantities of radioactive waste. […] The principal reprocessors of radioactive waste are France and the UK. Both countries receive material from other countries and after reprocessing return the raffinate to the country of origin for final disposition.”

“Nearly 45,000 tonnes of uranium are mined annually. More than half comes from the three largest producers, Canada, Kazakhstan, and Australia.”

“The designs of nuclear installations are required to be passed by national nuclear licensing agencies. These include strict safety and security features. The international standard for the integrity of a nuclear power plant is that it would withstand the crash of a Boeing 747 Jumbo Jet without the release of hazardous radiation beyond the site boundary. […] At Fukushima, the design was to current safety standards, taking into account the possibility of a severe earthquake; what had not been allowed for was the simultaneous tsunami strike.”

“The costing of nuclear power is notoriously controversial. Opponents point to the past large investments made in nuclear research and would like to factor this into the cost. There are always arguments about whether or not decommissioning costs and waste-management costs have been properly accounted for. […] which electricity source is most economical will vary from country to country […]. As with all industrial processes, there can be economies of scale. In the USA, and particularly in the UK, these economies of scale were never fully realized. In the UK, while several Magnox and AGR reactors were built, no two were of exactly the same design, resulting in no economies in construction costs, component manufacture, or staff training programmes. The issue is compounded by the high cost of licensing new designs. […] in France, the Regulatory Commission agreed a standard design for all plants and used a safety engineering process similar to that used for licensing aircraft. Public debate was thereafter restricted to local site issues. Economies of scale were achieved.”

“[C]onstruction costs […] are the largest single factor in the cost of nuclear electricity generation. […] Because the raw fuel is such a small fraction of the cost of nuclear power generation, the cost of electricity is not very sensitive to the cost of uranium, unlike the fossil fuels, for which fuel can represent up to 70% of the cost. Operating costs for nuclear plants have fallen dramatically as the French practice of standardization of design has spread. […] Generation III+ reactors are claimed to be half the size and capable of being built in much shorter times than the traditional PWRs. The 2008 contracted capital cost of building new plants containing two AP1000 reactors in the USA is around $10–$14 billion, […] There is considerable experience of decommissioning of nuclear plants. In the USA, the cost of decommissioning a power plant is approximately $350 million. […] In France and Sweden, decommissioning costs are estimated to be 10–15% of construction costs and are included in the price charged for electricity. […] The UK has by far the highest estimates for decommissioning which are set at £1 billion per reactor. This exceptionally high figure is in part due to the much larger reactor core associated with graphite moderated piles. […] It is clear that in many countries nuclear-generated electricity is commercially competitive with fossil fuels despite the need to include the cost of capital and all waste disposal and decommissioning (factors that are not normally included for other fuels). […] At the present time, without the market of taxes and grants, electricity generated from renewable sources is generally more expensive than that from nuclear power or fossil fuels. This leaves the question: if nuclear power is so competitive, why is there not a global rush to build new nuclear power stations? The answer lies in the time taken to recoup investments. Investors in a new gas-fired power station can expect to recover their investment within 15 years. Because of the high capital start-up costs, nuclear power stations yield a slower rate of return, even though over the lifetime of the plant the return may be greater.”

“Throughout the 20th century, the population and GDP growth combined to drive the [global] demand for energy to increase at a rate of 4% per annum […]. The most conservative estimate is that the demand for energy will see global energy requirements double between 2000 and 2050. […] The demand for electricity is growing at twice the rate of the demand for energy. […] More than two-thirds of all electricity is generated by burning fossil fuels. […] The most rapidly growing renewable source of electricity generation is wind power […] wind is an intermittent source of electricity. […] The intermittency of wind power leads to [a] problem. The grid management has to supply a steady flow of electricity. Intermittency requires a heavy overhead on grid management, and there are serious concerns about the ability of national grids to cope with more than a 20% contribution from wind power. […] As for the other renewables, solar and geothermal power, significant electricity generation will be restricted to latitudes 40°S to 40°N and regions of suitable geological structures, respectively. Solar power and geothermal power are expected to increase but will remain a small fraction of the total electricity supply. […] In most industrialized nations, the current electricity supply is via a regional, national, or international grid. The electricity is generated in large (~1GW) power stations. This is a highly efficient means of electricity generation and distribution. If the renewable sources of electricity generation are to become significant, then a major restructuring of the distribution infrastructure will be necessary. While local ‘microgeneration’ can have significant benefits for small communities, it is not practical for the large-scale needs of big industrial cities in which most of the world’s population live.”
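
The growth figures translate into doubling times via simple compound-growth arithmetic (my sketch, not the book’s):

```python
# Simple arithmetic sketch (not from the book) linking growth rates to
# doubling times: at r per year, demand doubles every ln(2)/ln(1+r) years.

import math

def doubling_time(rate):
    return math.log(2) / math.log(1 + rate)

print(f"4% per year -> doubles every ~{doubling_time(0.04):.0f} years")

# Conversely, the 'conservative' scenario of doubling between 2000 and 2050
# corresponds to an average growth rate of:
r_implied = 2 ** (1 / 50) - 1
print(f"doubling over 50 years -> ~{r_implied * 100:.1f}% per year")
```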

“Electricity cannot be stored in large quantities. If the installed generating capacity is designed to meet peak demand, there will be periods when the full capacity is not required. In most industrial countries, the average demand is only about one-third of peak consumption.”

Links:

Nuclear reprocessing. La Hague site. Radioactive waste. Yucca Mountain nuclear waste repository.
Bismuth phosphate process.
Nuclear decommissioning.
Uranium mining. Open-pit mining.
Wigner effect (Wigner heating). Windscale fire. Three Mile Island accident. Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Fail-safe (engineering).
Treaty on the Non-Proliferation of Nuclear Weapons.
Economics of nuclear power plants.
Fusion power. Tokamak. ITER. High Power laser Energy Research facility (HiPER).
Properties of plasma.
Klystron.
World energy consumption by fuel source. Renewable energy.

December 16, 2017 Posted by | Books, Chemistry, Economics, Engineering, Physics | Leave a comment

Nuclear power (I)

I originally gave the book 2 stars, but after finishing this post I changed that rating to 3 stars (not all that surprising; already when I wrote my goodreads review shortly after reading the book, I was conflicted about whether or not it deserved the third star). One thing that kept me from giving the book a higher rating was that I thought the author did not spend enough time on ‘the basic concepts’, a problem I also highlighted in my goodreads review. Fortunately I had recently covered some of those concepts in other books in the series, so it wasn’t too hard for me to follow what was going on, but as sometimes happens with authors of books in this series, I think the author was simply trying to cover too much. Even so, this is a nice introductory text on the topic.

I have added some links and quotes related to the first half or so of the book below. I prepared the link list before I started gathering quotes, so there may be more overlap between the topics covered in the quotes and in the links than there usually is (I normally reserve the links for topics and concepts covered in the book which I don’t find it necessary to cover in detail in the quoted text – the links are meant to indicate which sorts of topics the book also covers, beyond those included in the quotes).

“According to Einstein’s mass–energy equation, the mass of any composite stable object has to be less than the sum of the masses of the parts; the difference is the binding energy of the object. […] The general features of the binding energies are simply understood as follows. We have seen that the measured radii of nuclei [increase] with the cube root of the mass number A. This is consistent with a structure of close packed nucleons. If each nucleon could only interact with its closest neighbours, the total binding energy would then itself be proportional to the number of nucleons. However, this would be an overestimate because nucleons at the surface of the nucleus would not have a complete set of nearest neighbours with which to interact […]. The binding energy would be reduced by the number of surface nucleons and this would be proportional to the surface area, itself proportional to A2/3. So far we have considered only the attractive short-range nuclear binding. However, the protons carry an electric charge and hence experience an electrical repulsion between each other. The electrical force between two protons is much weaker than the nuclear force at short distances but dominates at larger distances. Furthermore, the total electrical contribution increases with the number of pairs of protons.”

“The main characteristics of the empirical binding energy of nuclei […] can now be explained. For the very light nuclei, all the nucleons are in the surface, the electrical repulsion is negligible, and the binding energy increases as the volume and number of nucleons increases. Next, the surface effects start to slow the rate of growth of the binding energy yielding a region of most stable nuclei near charge number Z = 28 (iron). Finally, the electrical repulsion steadily increases until we reach the most massive stable nucleus (lead-208). Between iron and lead, not only does the binding energy decrease so also do the proton to neutron ratios since the neutrons do not experience the electrical repulsion. […] as the nuclei get heavier the Coulomb repulsion term requires an increasing number of neutrons for stability […] For an explanation of [the] peaks, we must turn to the quantum nature of the problem. […] Filled shells corresponded to particularly stable electronic structures […] In the nuclear case, a shell structure also exists separately for both the neutrons and the protons. […] Closed-shell nuclei are referred to as ‘magic number’ nuclei. […] there is a particular stability for nuclei with equal numbers of protons and neutrons.”
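
What the passage describes term by term is essentially the semi-empirical (liquid-drop) binding-energy formula. A minimal sketch using one common set of fitted coefficients; the numerical values vary a little between textbooks and are not quoted in the book:

```python
# Minimal sketch of the semi-empirical (liquid-drop) binding-energy formula
# the passage is describing term by term: a volume term proportional to A,
# a surface correction proportional to A^(2/3), a Coulomb repulsion term
# growing with the number of proton pairs, plus asymmetry and pairing
# corrections. The coefficients (in MeV) are one common fit; the book itself
# does not quote numerical values.

def binding_energy(Z, A):
    """Approximate total binding energy in MeV (Bethe-Weizsaecker formula)."""
    a_V, a_S, a_C, a_A, a_P = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    B = (a_V * A
         - a_S * A ** (2 / 3)
         - a_C * Z * (Z - 1) / A ** (1 / 3)
         - a_A * (A - 2 * Z) ** 2 / A)
    if Z % 2 == 0 and N % 2 == 0:       # even-even nuclei: extra binding
        B += a_P / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:     # odd-odd nuclei: less binding
        B -= a_P / A ** 0.5
    return B

for name, Z, A in [("iron-56", 26, 56), ("lead-208", 82, 208), ("uranium-238", 92, 238)]:
    print(f"{name:12s}: binding energy per nucleon ~ {binding_energy(Z, A) / A:.2f} MeV")
```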

“As we move off the line of stable nuclei, by adding or subtracting neutrons, the isotopes become increasingly less stable indicated by increasing levels of beta radioactivity. Nuclei with a surfeit of neutrons emit an electron, hence converting one of the neutrons into a proton, while isotopes with a neutron deficiency can emit a positron with the conversion of a proton into a neutron. For the heavier nuclei, the neutron to proton ratio can be reduced by emitting an alpha particle. All nuclei heavier than lead are unstable and hence radioactive alpha emitters. […] The fact that almost all the radioactive isotopes heavier than lead follow [a] kind of decay chain and end up as stable isotopes of lead explains this element’s anomalously high natural abundance.”

“When two particles collide, they transfer energy and momentum between themselves. […] If the target is much lighter than the projectile, the projectile sweeps it aside with little loss of energy and momentum. If the target is much heavier than the projectile, the projectile simply bounces off the target with little loss of energy. The maximum transfer of energy occurs when the target and the projectile have the same mass. In trying to slow down the neutrons, we need to pass them through a moderator containing scattering centres of a similar mass. The obvious candidate is hydrogen, in which the single proton of the nucleus is the particle closest in mass to the neutron. At first glance, it would appear that water, with its low cost and high hydrogen content, would be the ideal moderator. There is a problem, however. Slow neutrons can combine with protons to form an isotope of hydrogen, deuterium. This removes neutrons from the chain reaction. To overcome this, the uranium fuel has to be enriched by increasing the proportion of uranium-235; this is expensive and technically difficult. An alternative is to use heavy water, that is, water in which the hydrogen is replaced by deuterium. It is not quite as effective as a moderator but it does not absorb neutrons. Heavy water is more expensive and its production more technically demanding than natural water. Finally, graphite (carbon) has a mass of 12 and hence is less efficient requiring a larger reactor core, but it is inexpensive and easily available.”
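
To put some illustrative numbers on the moderator comparison: the average (‘logarithmic’) energy loss per elastic collision depends only on the mass number of the scattering nucleus, and from it one can estimate how many collisions it takes to slow a fission neutron down to thermal energies. A small sketch of my own, using the standard textbook expression (the numbers are not from the book):

```python
# How many elastic collisions it takes, on average, to slow a 2 MeV fission
# neutron down to thermal energy (0.025 eV) in different moderators, via the
# standard logarithmic energy decrement xi. Textbook numbers, not the book's.
import math

def collisions_to_thermalize(A, E0=2e6, E_thermal=0.025):
    if A == 1:
        xi = 1.0  # hydrogen: neutron and proton have (almost) equal mass
    else:
        xi = 1 + (A - 1) ** 2 / (2 * A) * math.log((A - 1) / (A + 1))
    return math.log(E0 / E_thermal) / xi

for moderator, A in [("hydrogen (light water)", 1),
                     ("deuterium (heavy water)", 2),
                     ("carbon (graphite)", 12)]:
    print(f"{moderator}: ~{collisions_to_thermalize(A):.0f} collisions")
```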

“[During the Manhattan Project,] Oak Ridge, Tennessee, was chosen as the facility to develop techniques for uranium enrichment (increasing the relative abundance of uranium-235) […] a giant gaseous diffusion facility was developed. Gaseous uranium hexafluoride was forced through a semi permeable membrane. The lighter isotopes passed through faster and at each pass through the membrane the uranium hexafluoride became more and more enriched. The technology is very energy consuming […]. At its peak, Oak Ridge consumed more electricity than New York and Washington DC combined. Almost one-third of all enriched uranium is still produced by this now obsolete technology. The bulk of enriched uranium today is produced in high-speed centrifuges which require much less energy.”
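
A rough sense of why gaseous diffusion is so laborious can be had from Graham’s law: the single-stage separation factor for uranium hexafluoride is tiny, so hundreds to thousands of stages are needed. The sketch below is an idealized back-of-envelope calculation of my own, not figures from the book:

```python
# Graham's law gives the (idealized) single-stage separation factor for gaseous
# diffusion of UF6: the lighter 235-UF6 molecule effuses slightly faster, in the
# ratio sqrt(352/349). Back-of-envelope stage counts, not figures from the book.
import math

M_U235F6 = 235 + 6 * 19   # 349
M_U238F6 = 238 + 6 * 19   # 352
alpha = math.sqrt(M_U238F6 / M_U235F6)   # ~1.0043 per stage

def ideal_stages(x_in, x_out):
    """Number of ideal stages to raise the U-235 fraction from x_in to x_out."""
    R_in, R_out = x_in / (1 - x_in), x_out / (1 - x_out)   # isotope abundance ratios
    return math.log(R_out / R_in) / math.log(alpha)

print(f"single-stage separation factor: {alpha:.4f}")
print(f"0.7% -> 4%  (reactor grade):  ~{ideal_stages(0.007, 0.04):.0f} stages")
print(f"0.7% -> 90% (weapons grade): ~{ideal_stages(0.007, 0.90):.0f} stages")
```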

“In order to sustain a nuclear chain reaction, it is essential to have a critical mass of fissile material. This mass depends upon the fissile fuel being used and the topology of the structure containing it. […] The chain reaction is maintained by the neutrons and many of these leave the surface without contributing to the reaction chain. Surrounding the fissile material with a blanket of neutron reflecting material, such as beryllium metal, will keep the neutrons in play and reduce the critical mass. Partially enriched uranium will have an increased critical mass and natural uranium (0.7% uranium-235) will not go critical at any mass without a moderator to increase the number of slow neutrons which are the dominant fission triggers. The critical mass can also be decreased by compressing the fissile material.”
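
The simplest way to think about criticality is in terms of the effective multiplication factor k, the average number of neutrons from one fission generation that go on to cause a fission in the next; reflectors, enrichment, moderation, and compression are all ways of pushing k above 1. A toy illustration of my own, not from the book:

```python
# The textbook picture of criticality: each fission generation multiplies the
# neutron population by the effective multiplication factor k. Below critical
# (k < 1) the chain dies out; above critical (k > 1) it grows exponentially.
# Illustrative values only, not from the book.
def neutron_population(k, generations, n0=1000):
    return n0 * k ** generations

for k in (0.95, 1.00, 1.05):
    print(f"k = {k:.2f}: after 100 generations, "
          f"~{neutron_population(k, 100):,.1f} neutrons (starting from 1,000)")
```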

“It is now more than 50 years since operations of the first civil nuclear reactor began. In the intervening years, several hundred reactors have been operating, in total amounting to nearly 50 million hours of experience. This cumulative experience has led to significant advances in reactor design. Different reactor types are defined by their choice of fuel, moderator, control rods, and coolant systems. The major advances leading to greater efficiency, increased economy, and improved safety are referred to as ‘generations’. […] [F]irst generation reactors […] had the dual purpose to make electricity for public consumption and plutonium for the Cold War stockpiles of nuclear weapons. Many of the features of the design were incorporated to meet the need for plutonium production. These impacted on the electricity-generating cost and efficiency. The most important of these was the use of unenriched uranium due to the lack of large-scale enrichment plants in the UK, and the high uranium-238 content was helpful in the plutonium production but made the electricity generation less efficient.”

“PWRs, BWRs, and VVERs are known as LWRs (Light Water Reactors). LWRs dominate the world’s nuclear power programme, with the USA operating 69 PWRs and 35 BWRs; Japan operates 63 LWRs, the bulk of which are BWRs; and France has 59 PWRs. Between them, these three countries generate 56% of the world’s nuclear power. […] In the 1990s, a series of advanced versions of the Generation II and III reactors began to receive certification. These included the ACR (Advanced CANDU Reactor), the EPR (European Pressurized Reactor), and Westinghouse AP1000 and APR1400 reactors (all developments of the PWR) and ESBWR (a development of the BWR). […] The ACR uses slightly enriched uranium and a light water coolant, allowing the core to be halved in size for the same power output. […] It would appear that two of the Generation III+ reactors, the EPR […] and AP1000, are set to dominate the world market for the next 20 years. […] the EPR is considerably safer than current reactor designs. […] A major advance is that the generation 3+ reactors produce only about 10 % of waste compared with earlier versions of LWRs. […] China has officially adopted the AP1000 design as a standard for future nuclear plants and has indicated a wish to see 100 nuclear plants under construction or in operation by 2020.”

“All thermal electricity-generating systems are examples of heat engines. A heat engine takes energy from a high-temperature environment to a low-temperature environment and in the process converts some of the energy into mechanical work. […] In general, the efficiency of the thermal cycle increases as the temperature difference between the low-temperature environment and the high-temperature environment increases. In PWRs, and nearly all thermal electricity-generating plants, the efficiency of the thermal cycle is 30–35%. At the much higher operating temperatures of Generation IV reactors, typically 850–1,000°C, it is hoped to increase this to 45–50%.
During the operation of a thermal nuclear reactor, there can be a build-up of fission products known as reactor poisons. These are materials with a large capacity to absorb neutrons and this can slow down the chain reaction; in extremes, it can lead to a complete close-down. Two important poisons are xenon-135 and samarium-149. […] During steady state operation, […] xenon builds up to an equilibrium level in 40–50 hours when a balance is reached between […] production […] and the burn-up of xenon by neutron capture. If the power of the reactor is increased, the amount of xenon increases to a higher equilibrium and the process is reversed if the power is reduced. If the reactor is shut down the burn-up of xenon ceases, but the build-up of xenon continues from the decay of iodine. Restarting the reactor is impeded by the higher level of xenon poisoning. Hence it is desirable to keep reactors running at full capacity as long as possible and to have the capacity to reload fuel while the reactor is on line. […] Nuclear plants operate at highest efficiency when operated continually close to maximum generating capacity. They are thus ideal for provision of base load. If their output is significantly reduced, then the build-up of reactor poisons can impact on their efficiency.”
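
For the efficiency figures quoted above, the relevant upper bound is the Carnot efficiency, 1 − T_cold/T_hot (absolute temperatures); real plants fall well short of it, but it shows why the much higher operating temperatures of Generation IV designs should pay off. An illustrative calculation of my own (the temperatures are not the book’s):

```python
# The Carnot bound on thermal efficiency, 1 - T_cold/T_hot (temperatures in
# kelvin). Real plants fall well short of it (the 30-35% quoted above for PWRs),
# but it shows why higher operating temperatures help. Illustrative temperatures,
# not the book's.
def carnot_efficiency(t_hot_celsius, t_cold_celsius=30.0):
    t_hot = t_hot_celsius + 273.15
    t_cold = t_cold_celsius + 273.15
    return 1 - t_cold / t_hot

for label, t_hot in [("PWR-like, ~320 C", 320),
                     ("Generation IV, ~850 C", 850),
                     ("Generation IV, ~1,000 C", 1000)]:
    print(f"{label}: Carnot limit ~{carnot_efficiency(t_hot):.0%}")
```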

Links:

Radioactivity. Alpha decay. Beta decay. Gamma decay. Free neutron decay.
Periodic table.
Rutherford scattering.
Isotope.
Neutrino. Positron. Antineutrino.
Binding energy.
Mass–energy equivalence.
Electron shell.
Decay chain.
Heisenberg uncertainty principle.
Otto Hahn. Lise Meitner. Fritz Strassman. Enrico Fermi. Leo Szilárd. Otto Frisch. Rudolf Peierls.
Uranium 238. Uranium 235. Plutonium.
Nuclear fission.
Chicago Pile 1.
Manhattan Project.
Uranium hexafluoride.
Heavy water.
Nuclear reactor coolant. Control rod.
Critical mass. Nuclear chain reaction.
Magnox reactor. UNGG reactor. CANDU reactor.
ZEEP.
Nuclear reactor classifications (a lot of the distinctions included in this article are also described in some detail in the book, which covers these topics extensively).
USS Nautilus.
Nuclear fuel cycle.
Thorium-based nuclear power.
Heat engine. Thermodynamic cycle. Thermal efficiency.
Reactor poisoning. Xenon 135. Samarium 149.
Base load.

December 7, 2017 Posted by | Books, Chemistry, Engineering, Physics

Radioactivity

A few quotes from the book and some related links below. Here’s my very short goodreads review of the book.

Quotes:

“The main naturally occurring radionuclides of primordial origin are uranium-235, uranium-238, thorium-232, their decay products, and potassium-40. The average abundance of uranium, thorium, and potassium in the terrestrial crust is 2.6 parts per million, 10 parts per million, and 1% respectively. Uranium and thorium produce other radionuclides via neutron- and alpha-induced reactions, particularly deeply underground, where uranium and thorium have a high concentration. […] A weak source of natural radioactivity derives from nuclear reactions of primary and secondary cosmic rays with the atmosphere and the lithosphere, respectively. […] Accretion of extraterrestrial material, intensively exposed to cosmic rays in space, represents a minute contribution to the total inventory of radionuclides in the terrestrial environment. […] Natural radioactivity is [thus] mainly produced by uranium, thorium, and potassium. The total heat content of the Earth, which derives from this radioactivity, is 12.6 × 10^24 MJ (one megajoule = 1 million joules), with the crust’s heat content standing at 5.4 × 10^21 MJ. For comparison, this is significantly more than the 6.4 × 10^13 MJ globally consumed for electricity generation during 2011. This energy is dissipated, either gradually or abruptly, towards the external layers of the planet, but only a small fraction can be utilized. The amount of energy available depends on the Earth’s geological dynamics, which regulates the transfer of heat to the surface of our planet. The total power dissipated by the Earth is 42 TW (one TW = 1 trillion watts): 8 TW from the crust, 32.3 TW from the mantle, 1.7 TW from the core. This amount of power is small compared to the 174,000 TW arriving to the Earth from the Sun.”

“Charged particles such as protons, beta and alpha particles, or heavier ions that bombard human tissue dissipate their energy locally, interacting with the atoms via the electromagnetic force. This interaction ejects electrons from the atoms, creating a track of electron–ion pairs, or ionization track. The energy that ions lose per unit path, as they move through matter, increases with the square of their charge and decreases linearly with their energy […] The energy deposited in the tissues and organs of your body by ionizing radiation is defined [as the] absorbed dose and is measured in gray. The dose of one gray corresponds to the energy of one joule deposited in one kilogram of tissue. The biological damage wrought by a given amount of energy deposited depends on the kind of ionizing radiation involved. The equivalent dose, measured in sievert, is the product of the dose and a factor w related to the effective damage induced into the living matter by the deposit of energy by specific rays or particles. For X-rays, gamma rays, and beta particles, a gray corresponds to a sievert; for neutrons, a dose of one gray corresponds to an equivalent dose of 5 to 20 sievert, and the factor w is equal to 5–20 (depending on the neutron energy). For protons and alpha particles, w is equal to 5 and 20, respectively. There is also another weighting factor taking into account the radiosensitivity of different organs and tissues of the body, to evaluate the so-called effective dose. Sometimes the dose is still quoted in rem, the old unit, with 100 rem corresponding to one sievert.”
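
The gray-to-sievert conversion described above is just a multiplication by the radiation weighting factor. A small illustration of my own using the factors quoted in the book (the specific neutron factor picked below is an arbitrary mid-range value):

```python
# Equivalent dose (sievert) = absorbed dose (gray) x radiation weighting factor w,
# using the factors quoted above. The neutron factor is energy-dependent (5-20);
# 10 is just an arbitrary mid-range choice for the illustration.
weighting_factor = {
    "X-rays / gamma rays / beta particles": 1,
    "protons": 5,
    "neutrons (mid-range value)": 10,
    "alpha particles": 20,
}

absorbed_dose_gray = 0.001   # one milligray deposited in tissue
for radiation, w in weighting_factor.items():
    equivalent_dose_mSv = absorbed_dose_gray * w * 1000
    print(f"{radiation}: {equivalent_dose_mSv:.0f} millisievert")
```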

“Neutrons emitted during fission reactions have a relatively high velocity. When still in Rome, Fermi had discovered that fast neutrons needed to be slowed down to increase the probability of their reaction with uranium. The fission reaction occurs with uranium-235. Uranium-238, the most common isotope of the element, merely absorbs the slow neutrons. Neutrons slow down when they are scattered by nuclei with a similar mass. The process is analogous to the interaction between two billiard balls in a head-on collision, in which the incoming ball stops and transfers all its kinetic energy to the second one. ‘Moderators’, such as graphite and water, can be used to slow neutrons down. […] When Fermi calculated whether a chain reaction could be sustained in a homogeneous mixture of uranium and graphite, he got a negative answer. That was because most neutrons produced by the fission of uranium-235 were absorbed by uranium-238 before inducing further fissions. The right approach, as suggested by Szilárd, was to use separated blocks of uranium and graphite. Fast neutrons produced by the splitting of uranium-235 in the uranium block would slow down, in the graphite block, and then produce fission again in the next uranium block. […] A minimum mass – the critical mass – is required to sustain the chain reaction; furthermore, the material must have a certain geometry. The fissile nuclides, capable of sustaining a chain reaction of nuclear fission with low-energy neutrons, are uranium-235 […], uranium-233, and plutonium-239. The last two don’t occur in nature but can be produced artificially by irradiating with neutrons thorium-232 and uranium-238, respectively – via a reaction called neutron capture. Uranium-238 (99.27%) is fissionable, but not fissile. In a nuclear weapon, the chain reaction occurs very rapidly, releasing the energy in a burst.”

“The basic components of nuclear power reactors, fuel, moderator, and control rods, are the same as in the first system built by Fermi, but the design of today’s reactors includes additional components such as a pressure vessel, containing the reactor core and the moderator, a containment vessel, and redundant and diverse safety systems. Recent technological advances in material developments, electronics, and information technology have further improved their reliability and performance. […] The moderator to slow down fast neutrons is sometimes still the graphite used by Fermi, but water, including ‘heavy water’ – in which the water molecule has a deuterium atom instead of a hydrogen atom – is more widely used. Control rods contain a neutron-absorbing material, such as boron or a combination of indium, silver, and cadmium. To remove the heat generated in the reactor core, a coolant – either a liquid or a gas – is circulating through the reactor core, transferring the heat to a heat exchanger or directly to a turbine. Water can be used as both coolant and moderator. In the case of boiling water reactors (BWRs), the steam is produced in the pressure vessel. In the case of pressurized water reactors (PWRs), the steam generator, which is the secondary side of the heat exchanger, uses the heat produced by the nuclear reactor to make steam for the turbines. The containment vessel is a one-metre-thick concrete and steel structure that shields the reactor.”

“Nuclear energy contributed 2,518 TWh of the world’s electricity in 2011, about 14% of the global supply. As of February 2012, there are 435 nuclear power plants operating in 31 countries worldwide, corresponding to a total installed capacity of 368,267 MW (electrical). There are 63 power plants under construction in 13 countries, with a capacity of 61,032 MW (electrical).”

“Since the first nuclear fusion, more than 60 years ago, many have argued that we need at least 30 years to develop a working fusion reactor, and this figure has stayed the same throughout those years.”

“[I]onizing radiation is […] used to improve many properties of food and other agricultural products. For example, gamma rays and electron beams are used to sterilize seeds, flour, and spices. They can also inhibit sprouting and destroy pathogenic bacteria in meat and fish, increasing the shelf life of food. […] More than 60 countries allow the irradiation of more than 50 kinds of foodstuffs, with 500,000 tons of food irradiated every year. About 200 cobalt-60 sources and more than 10 electron accelerators are dedicated to food irradiation worldwide. […] With the help of radiation, breeders can increase genetic diversity to make the selection process faster. The spontaneous mutation rate (number of mutations per gene, for each generation) is in the range 10-8–10-5. Radiation can increase this mutation rate to 10-5–10-2. […] Long-lived cosmogenic radionuclides provide unique methods to evaluate the ‘age’ of groundwaters, defined as the mean subsurface residence time after the isolation of the water from the atmosphere. […] Scientists can date groundwater more than a million years old, through chlorine-36, produced in the atmosphere by cosmic-ray reactions with argon.”
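
Groundwater dating with chlorine-36 boils down to the exponential decay law. A minimal sketch of my own, using the standard literature half-life for chlorine-36 rather than a figure from the book:

```python
# Radiometric 'age' from the exponential decay law, as in chlorine-36 groundwater
# dating. The half-life (~301,000 years) is a standard literature value, not a
# figure taken from the book.
import math

CL36_HALF_LIFE_YEARS = 301_000

def age_years(fraction_remaining, half_life=CL36_HALF_LIFE_YEARS):
    decay_constant = math.log(2) / half_life
    return math.log(1 / fraction_remaining) / decay_constant

for remaining in (0.5, 0.25, 0.1):
    print(f"{remaining:.0%} of the initial Cl-36 left -> "
          f"isolated for ~{age_years(remaining):,.0f} years")
```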

“Radionuclide imaging was developed in the 1950s using special systems to detect the emitted gamma rays. The gamma-ray detectors, called gamma cameras, use flat crystal planes, coupled to photomultiplier tubes, which send the digitized signals to a computer for image reconstruction. Images show the distribution of the radioactive tracer in the organs and tissues of interest. This method is based on the introduction of low-level radioactive chemicals into the body. […] More than 100 diagnostic tests based on radiopharmaceuticals are used to examine bones and organs such as lungs, intestines, thyroids, kidneys, the liver, and gallbladder. They exploit the fact that our organs preferentially absorb different chemical compounds. […] Many radiopharmaceuticals are based on technetium-99m (an excited state of technetium-99 – the ‘m’ stands for ‘metastable’ […]). This radionuclide is used for the imaging and functional examination of the heart, brain, thyroid, liver, and other organs. Technetium-99m is extracted from molybdenum-99, which has a much longer half-life and is therefore more transportable. It is used in 80% of the procedures, amounting to about 40,000 per day, carried out in nuclear medicine. Other radiopharmaceuticals include short-lived gamma-emitters such as cobalt-57, cobalt-58, gallium-67, indium-111, iodine-123, and thallium-201. […] Methods routinely used in medicine, such as X-ray radiography and CAT, are increasingly used in industrial applications, particularly in non-destructive testing of containers, pipes, and walls, to locate defects in welds and other critical parts of the structure.”
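
The reason technetium-99m can be shipped as molybdenum-99 and ‘milked’ on site is the parent–daughter relationship between the two half-lives. The sketch below uses the standard Bateman expression and standard half-life values (not the book’s), and ignores the branching fraction of the Mo-99 decay:

```python
# Why a molybdenum-99 'generator' works: the short-lived Tc-99m daughter grows
# back in after each extraction and approaches transient equilibrium with the
# longer-lived parent. Bateman equation for the daughter activity; half-lives are
# standard values (not the book's) and the ~88% branching fraction of the Mo-99
# decay to Tc-99m is ignored.
import math

HALF_LIFE_MO99_HOURS = 66.0
HALF_LIFE_TC99M_HOURS = 6.0

def tc99m_activity(t_hours, parent_activity_at_t0=1.0):
    lam_parent = math.log(2) / HALF_LIFE_MO99_HOURS
    lam_daughter = math.log(2) / HALF_LIFE_TC99M_HOURS
    return (parent_activity_at_t0 * lam_daughter / (lam_daughter - lam_parent)
            * (math.exp(-lam_parent * t_hours) - math.exp(-lam_daughter * t_hours)))

for t in (6, 12, 24, 48):
    print(f"{t:>2} h after extraction: Tc-99m activity ~ {tc99m_activity(t):.2f} "
          f"(relative to the initial Mo-99 activity)")
```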

“Today, cancer treatment with radiation is generally based on the use of external radiation beams that can target the tumour in the body. Cancer cells are particularly sensitive to damage by ionizing radiation and their growth can be controlled or, in some cases, stopped. High-energy X-rays produced by a linear accelerator […] are used in most cancer therapy centres, replacing the gamma rays produced from cobalt-60. The LINAC produces photons of variable energy bombarding a target with a beam of electrons accelerated by microwaves. The beam of photons can be modified to conform to the shape of the tumour, which is irradiated from different angles. The main problem with X-rays and gamma rays is that the dose they deposit in the human tissue decreases exponentially with depth. A considerable fraction of the dose is delivered to the surrounding tissues before the radiation hits the tumour, increasing the risk of secondary tumours. Hence, deep-seated tumours must be bombarded from many directions to receive the right dose, while minimizing the unwanted dose to the healthy tissues. […] The problem of delivering the needed dose to a deep tumour with high precision can be solved using collimated beams of high-energy ions, such as protons and carbon. […] Contrary to X-rays and gamma rays, all ions of a given energy have a certain range, delivering most of the dose after they have slowed down, just before stopping. The ion energy can be tuned to deliver most of the dose to the tumour, minimizing the impact on healthy tissues. The ion beam, which does not broaden during the penetration, can follow the shape of the tumour with millimetre precision. Ions with higher atomic number, such as carbon, have a stronger biological effect on the tumour cells, so the dose can be reduced. Ion therapy facilities are [however] still very expensive – in the range of hundreds of millions of pounds – and difficult to operate.”
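
A schematic way to see the problem with photon beams: past the initial build-up region the dose falls off roughly exponentially with depth, so a substantial fraction is deposited in the healthy tissue in front of a deep tumour. The attenuation coefficient in the sketch below is purely illustrative (chosen by me to be vaguely megavoltage-beam-like), not a value from the book:

```python
# Schematic depth-dose for a photon beam: past the initial build-up region the
# dose falls off roughly exponentially, so a substantial fraction is deposited in
# the healthy tissue in front of a deep tumour. The attenuation coefficient is
# purely illustrative (vaguely megavoltage-like), not a value from the book.
import math

MU_PER_CM = 0.05   # illustrative effective attenuation coefficient in tissue

def relative_photon_dose(depth_cm):
    return math.exp(-MU_PER_CM * depth_cm)

for depth in (0, 5, 10, 15):
    print(f"depth {depth:>2} cm: relative photon dose ~ {relative_photon_dose(depth):.2f}")
```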

“About 50 million years ago, a global cooling trend took our planet from the tropical conditions at the beginning of the Tertiary to the ice ages of the Quaternary, when the Arctic ice cap developed. The temperature decrease was accompanied by a decrease in atmospheric CO2 from 2,000 to 300 parts per million. The cooling was probably caused by a reduced greenhouse effect and also by changes in ocean circulation due to plate tectonics. The drop in temperature was not constant as there were some brief periods of sudden warming. Ocean deep-water temperatures dropped from 12°C, 50 million years ago, to 6°C, 30 million years ago, according to archives in deep-sea sediments (today, deep-sea waters are about 2°C). […] During the last 2 million years, the mean duration of the glacial periods was about 26,000 years, while that of the warm periods – interglacials – was about 27,000 years. Between 2.6 and 1.1 million years ago, a full cycle of glacial advance and retreat lasted about 41,000 years. During the past 1.2 million years, this cycle has lasted 100,000 years. Stable and radioactive isotopes play a crucial role in the reconstruction of the climatic history of our planet”.

Links:

CUORE (Cryogenic Underground Observatory for Rare Events).
Borexino.
Lawrence Livermore National Laboratory.
Marie Curie. Pierre Curie. Henri Becquerel. Wilhelm Röntgen. Joseph Thomson. Ernest Rutherford. Hans Geiger. Ernest Marsden. Niels Bohr.
Ruhmkorff coil.
Electroscope.
Pitchblende (uraninite).
Mache.
Polonium. Becquerel.
Radium.
Alpha decay. Beta decay. Gamma radiation.
Plum pudding model.
Spinthariscope.
Robert Boyle. John Dalton. Dmitri Mendeleev. Frederick Soddy. James Chadwick. Enrico Fermi. Lise Meitner. Otto Frisch.
Periodic Table.
Exponential decay. Decay chain.
Positron.
Particle accelerator. Cockcroft-Walton generator. Van de Graaff generator.
Barn (unit).
Nuclear fission.
Manhattan Project.
Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Electron volt.
Thermoluminescent dosimeter.
Silicon diode detector.
Enhanced geothermal system.
Chicago Pile Number 1. Experimental Breeder Reactor 1. Obninsk Nuclear Power Plant.
Natural nuclear fission reactor.
Gas-cooled reactor.
Generation I reactors. Generation II reactor. Generation III reactor. Generation IV reactor.
Nuclear fuel cycle.
Accelerator-driven subcritical reactor.
Thorium-based nuclear power.
Small, sealed, transportable, autonomous reactor.
Fusion power. P-p (proton-proton) chain reaction. CNO cycle. Tokamak. ITER (International Thermonuclear Experimental Reactor).
Sterile insect technique.
Phase-contrast X-ray imaging. Computed tomography (CT). SPECT (Single-photon emission computed tomography). PET (positron emission tomography).
Boron neutron capture therapy.
Radiocarbon dating. Bomb pulse.
Radioactive tracer.
Radithor. The Radiendocrinator.
Radioisotope heater unit. Radioisotope thermoelectric generator. Seebeck effect.
Accelerator mass spectrometry.
Atomic bombings of Hiroshima and Nagasaki. Treaty on the Non-Proliferation of Nuclear Weapons. IAEA.
Nuclear terrorism.
Swiss light source. Synchrotron.
Chronology of the universe. Stellar evolution. S-process. R-process. Red giant. Supernova. White dwarf.
Victor Hess. Domenico Pacini. Cosmic ray.
Allende meteorite.
Age of the Earth. History of Earth. Geomagnetic reversal. Uranium-lead dating. Clair Cameron Patterson.
Glacials and interglacials.
Taung child. Lucy. Ardi. Ardipithecus kadabba. Acheulean tools. Java Man. Ötzi.
Argon-argon dating. Fission track dating.

November 28, 2017 Posted by | Archaeology, Astronomy, Biology, Books, Cancer/oncology, Chemistry, Engineering, Geology, History, Medicine, Physics

Materials… (II)

Some more quotes and links:

“Whether materials are stiff and strong, or hard or weak, is the territory of mechanics. […] the 19th century continuum theory of linear elasticity is still the basis of much of modern solid mechanics. A stiff material is one which does not deform much when a force acts on it. Stiffness is quite distinct from strength. A material may be stiff but weak, like a piece of dry spaghetti. If you pull it, it stretches only slightly […], but as you ramp up the force it soon breaks. To put this on a more scientific footing, so that we can compare different materials, we might devise a test in which we apply a force to stretch a bar of material and measure the increase in length. The fractional change in length is the strain; and the applied force divided by the cross-sectional area of the bar is the stress. To check that it is Hookean, we double the force and confirm that the strain has also doubled. To check that it is truly elastic, we remove the force and check that the bar returns to the same length that it started with. […] then we calculate the ratio of the stress to the strain. This ratio is the Young’s modulus of the material, a quantity which measures its stiffness. […] While we are measuring the change in length of the bar, we might also see if there is a change in its width. It is not unreasonable to think that as the bar stretches it also becomes narrower. The Poisson’s ratio of the material is defined as the ratio of the transverse strain to the longitudinal strain (without the minus sign).”
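
The stiffness measurements described above reduce to a few lines of arithmetic; here is a sketch of the calculation for a made-up, roughly steel-like tension-test specimen. The numbers are mine, not the book’s:

```python
# A tension test reduced to arithmetic: stress = force / area, strain = extension /
# length, Young's modulus = stress / strain, and Poisson's ratio = (minus) the
# transverse strain over the longitudinal strain. The specimen and the measured
# values are made up (roughly steel-like), not taken from the book.
import math

force_N = 10_000.0            # applied tensile force
diameter_m = 0.010            # bar diameter
length_m = 0.100              # gauge length
extension_m = 6.4e-5          # measured increase in length
diameter_change_m = -1.9e-6   # measured change in diameter (the bar narrows)

area_m2 = math.pi * (diameter_m / 2) ** 2
stress_Pa = force_N / area_m2
longitudinal_strain = extension_m / length_m
transverse_strain = diameter_change_m / diameter_m

E = stress_Pa / longitudinal_strain              # Young's modulus
nu = -transverse_strain / longitudinal_strain    # Poisson's ratio (sign flipped so it is positive)

print(f"stress = {stress_Pa / 1e6:.0f} MPa, strain = {longitudinal_strain:.1e}")
print(f"Young's modulus ~ {E / 1e9:.0f} GPa, Poisson's ratio ~ {nu:.2f}")
```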

“There was much argument between Cauchy and Lamé and others about whether there are two stiffness moduli or one. […] In fact, there are two stiffness moduli. One describes the resistance of a material to shearing and the other to compression. The shear modulus is the stiffness in distortion, for example in twisting. It captures the resistance of a material to changes of shape, with no accompanying change of volume. The compression modulus (usually called the bulk modulus) expresses the resistance to changes of volume (but not shape). This is what occurs as a cube of material is lowered deep into the sea, and is squeezed on all faces by the water pressure. The Young’s modulus [is] a combination of the more fundamental shear and bulk moduli, since stretching in one direction produces changes in both shape and volume. […] A factor of about 10,000 covers the useful range of Young’s modulus in engineering materials. The stiffness can be traced back to the forces acting between atoms and molecules in the solid state […]. Materials like diamond or tungsten with strong bonds are stiff in the bulk, while polymer materials with weak intermolecular forces have low stiffness.”
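
For an isotropic material, the Young’s modulus and Poisson’s ratio follow from the two fundamental moduli via the standard elasticity relations. A small sketch with rough, steel-like illustrative values of my own (not the book’s):

```python
# Standard isotropic-elasticity relations: Young's modulus E and Poisson's ratio
# nu expressed in terms of the bulk modulus K and shear modulus G. The example
# moduli are rough, steel-like illustrative values, not the book's.
def young_and_poisson(K_GPa, G_GPa):
    E = 9 * K_GPa * G_GPa / (3 * K_GPa + G_GPa)
    nu = (3 * K_GPa - 2 * G_GPa) / (2 * (3 * K_GPa + G_GPa))
    return E, nu

E, nu = young_and_poisson(K_GPa=160, G_GPa=80)
print(f"K = 160 GPa, G = 80 GPa  ->  E ~ {E:.0f} GPa, nu ~ {nu:.2f}")
```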

“In pure compression, the concept of ‘strength’ has no meaning, since the material cannot fail or rupture. But materials can and do fail in tension or in shear. To judge how strong a material is we can go back for example to the simple tension arrangement we used for measuring stiffness, but this time make it into a torture test in which the specimen is put on the rack. […] We find […] that we reach a strain at which the material stops being elastic and is permanently stretched. We have reached the yield point, and beyond this we have damaged the material but it has not failed. After further yielding, the bar may fail by fracture […]. On the other hand, with a bar of cast iron, there comes a point where the bar breaks, noisily and without warning, and without yield. This is a failure by brittle fracture. The stress at which it breaks is the tensile strength of the material. For the ductile material, the stress at which plastic deformation starts is the tensile yield stress. Both are measures of strength. It is in metals that yield and plasticity are of the greatest significance and value. In working components, yield provides a safety margin between small-strain elasticity and catastrophic rupture. […] plastic deformation is [also] exploited in making things from metals like steel and aluminium. […] A useful feature of plastic deformation in metals is that plastic straining raises the yield stress, particularly at lower temperatures.”

“Brittle failure is not only noisy but often scary. Engineers keep well away from it. An elaborate theory of fracture mechanics has been built up to help them avoid it, and there are tough materials to hand which do not easily crack. […] Since small cracks and flaws are present in almost any engineering component […], the trick is not to avoid cracks but to avoid long cracks which exceed [a] critical length. […] In materials which can yield, the tip stress can be relieved by plastic deformation, and this is a potent toughening mechanism in some materials. […] The trick of compressing a material to suppress cracking is a powerful way to toughen materials.”
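
The ‘critical length’ idea can be made quantitative with the standard fracture-mechanics estimate: a crack of length a under stress σ becomes dangerous roughly when σ√(πa) exceeds the material’s fracture toughness. The toughness values in the sketch below are order-of-magnitude handbook numbers of my own choosing, not figures from the book:

```python
# The fracture-mechanics estimate behind the 'critical crack length': a crack of
# length a under stress sigma becomes critical roughly when sigma * sqrt(pi * a)
# reaches the fracture toughness K_IC, i.e. a_critical ~ (K_IC / sigma)^2 / pi.
# The toughness values are order-of-magnitude handbook numbers, not the book's.
import math

def critical_crack_length_mm(K_IC_MPa_sqrt_m, stress_MPa):
    return (K_IC_MPa_sqrt_m / stress_MPa) ** 2 / math.pi * 1000

stress = 100.0   # MPa working stress
for material, K_IC in [("window glass", 0.7),
                       ("engineering ceramic", 4.0),
                       ("structural steel", 100.0)]:
    a_c = critical_crack_length_mm(K_IC, stress)
    print(f"{material}: critical crack length ~ {a_c:.3g} mm at {stress:.0f} MPa")
```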

“Hardness is a property which materials scientists think of in a particular and practical way. It tells us how well a material resists being damaged or deformed by a sharp object. That is useful information and it can be obtained easily. […] Soft is sometimes the opposite of hard […] But a different kind of soft is squidgy. […] In the soft box, we find many everyday materials […]. Some soft materials such as adhesives and lubricants are of great importance in engineering. For all of them, the model of a stiff crystal lattice provides no guidance. There is usually no crystal. The units are polymer chains, or small droplets of liquids, or small solid particles, with weak forces acting between them, and little structural organization. Structures when they exist are fragile. Soft materials deform easily when forces act on them […]. They sit as a rule somewhere between rigid solids and simple liquids. Their mechanical behaviour is dominated by various kinds of plasticity.”

“In pure metals, the resistivity is extremely low […] and a factor of ten covers all of them. […] the low resistivity (or, put another way, the high conductivity) arises from the existence of a conduction band in the solid which is only partly filled. Electrons in the conduction band are mobile and drift in an applied electric field. This is the electric current. The electrons are subject to some scattering from lattice vibrations which impedes their motion and generates an intrinsic resistance. Scattering becomes more severe as the temperature rises and the amplitude of the lattice vibrations becomes greater, so that the resistivity of metals increases with temperature. Scattering is further increased by microstructural heterogeneities, such as grain boundaries, lattice distortions, and other defects, and by phases of different composition. So alloys have appreciably higher resistivities than their pure parent metals. Adding 5 per cent nickel to iron doubles the resistivity, although the resistivities of the two pure metals are similar. […] Resistivity depends fundamentally on band structure. […] Plastics and rubbers […] are usually insulators. […] Electronically conducting plastics would have many uses, and some materials [e.g. this one] are now known. […] The electrical resistivity of many metals falls to exactly zero as they are cooled to very low temperatures. The critical temperature at which this happens varies, but for pure metallic elements it always lies below 10 K. For a few alloys, it is a little higher. […] Superconducting windings provide stable and powerful magnetic fields for magnetic resonance imaging, and many industrial and scientific uses.”
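
The remark about alloying is an instance of Matthiessen’s rule: the thermal (lattice-vibration) contribution to the resistivity and the impurity/defect contribution simply add, and only the former depends on temperature. A purely illustrative sketch of my own (the numbers are not the book’s):

```python
# A sketch of Matthiessen's rule: total resistivity ~ a thermal (lattice-vibration)
# part that grows with temperature, plus a temperature-independent part from
# impurities, defects, and grain boundaries. All numbers purely illustrative,
# not from the book.
def resistivity_nOhm_m(T_kelvin, impurity_term=0.0,
                       rho_thermal_at_293K=100.0, temp_coefficient=0.004):
    thermal = rho_thermal_at_293K * (1 + temp_coefficient * (T_kelvin - 293))
    return thermal + impurity_term

for label, impurity in [("pure metal", 0.0), ("dilute alloy", 100.0)]:
    for T in (100, 293, 500):
        print(f"{label:12s} T = {T:3d} K: resistivity ~ "
              f"{resistivity_nOhm_m(T, impurity):6.1f} nOhm·m")
```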

“A permanent magnet requires no power. Its magnetization has its origin in the motion of electrons in atoms and ions in the solid, but only a few materials have the favourable combination of quantum properties to give rise to useful ferromagnetism. […] Ferromagnetism disappears completely above the so-called Curie temperature. […] Below the Curie temperature, ferromagnetic alignment throughout the material can be established by imposing an external polarizing field to create a net magnetization. In this way a practical permanent magnet is made. The ideal permanent magnet has an intense magnetization (a strong field) which remains after the polarizing field is switched off. It can only be demagnetized by applying a strong polarizing field in the opposite direction: the size of this field is the coercivity of the magnet material. For a permanent magnet, it should be as high as possible. […] Permanent magnets are ubiquitous but more or less invisible components of umpteen devices. There are a hundred or so in every home […]. There are also important uses for ‘soft’ magnetic materials, in devices where we want the ferromagnetism to be temporary, not permanent. Soft magnets lose their magnetization after the polarizing field is removed […] They have low coercivity, approaching zero. When used in a transformer, such a soft ferromagnetic material links the input and output coils by magnetic induction. Ideally, the magnetization should reverse during every cycle of the alternating current to minimize energy losses and heating. […] Silicon transformer steels yielded large gains in efficiency in electrical power distribution when they were first introduced in the 1920s, and they remain pre-eminent.”

“At least 50 families of plastics are produced commercially today. […] These materials all consist of linear string molecules, most with simple carbon backbones, a few with carbon-oxygen backbones […] Plastics as a group are valuable because they are lightweight and work well in wet environments, and don’t go rusty. They are mostly unaffected by acids and salts. But they burn, and they don’t much like sunlight as the ultraviolet light can break the polymer backbone. Most commercial plastics are mixed with substances which make it harder for them to catch fire and which filter out the ultraviolet light. Above all, plastics are used because they can be formed and shaped so easily. The string molecule itself is held together by strong chemical bonds and is resilient, but the forces between the molecules are weak. So plastics melt at low temperatures to produce rather viscous liquids […]. And with modest heat and a little pressure, they can be injected into moulds to produce articles of almost any shape”.

“The downward cascade of high purity to adulterated materials in recycling is a kind of entropy effect: unmixing is thermodynamically hard work. But there is an energy-driven problem too. Most materials are thermodynamically unstable (or metastable) in their working environments and tend to revert to the substances from which they were made. This is well-known in the case of metals, and is the usual meaning of corrosion. The metals are more stable when combined with oxygen than uncombined. […] Broadly speaking, ceramic materials are more stable thermodynamically, since they already contain much oxygen in chemical combination. Even so, ceramics used in the open usually fall victim to some environmental predator. Often it is water that causes damage. Water steals sodium and potassium from glass surfaces by slow leaching. The surface shrinks and cracks, so the glass loses its transparency. […] Stones and bricks may succumb to the stresses of repeated freezing when wet; limestones decay also by the chemical action of sulfur and nitrogen gases in polluted rainwater. Even buried archaeological pots slowly react with water in a remorseless process similar to that of rock weathering.”

Links:

Ashby plot.
Alan Arnold Griffith.
Creep (deformation).
Amontons’ laws of friction.
Viscoelasticity.
Internal friction.
Surfactant.
Dispersant.
Rheology.
Liquid helium.
Conductor. Insulator. Semiconductor. P-type semiconductor. N-type semiconductor.
Hall–Héroult process.
Cuprate.
Magnetostriction.
Snell’s law.
Chromatic aberration.
Dispersion (optics).
Dye.
Density functional theory.
Glass.
Pilkington float process.
Superalloy.
Ziegler–Natta catalyst.
Transistor.
Integrated circuit.
Negative-index metamaterial.
Auxetics.
Titanium dioxide.
Hyperfine structure (/-interactions).
Diamond anvil cell.
Synthetic rubber.
Simon–Ehrlich wager.
Sankey diagram.

November 16, 2017 Posted by | Books, Chemistry, Engineering, Physics