Econstudentlog

Robotics

“This book is not about the psychology or cultural anthropology of robotics, interesting as those are. I am an engineer and roboticist, so I confine myself firmly to the technology and application of real physical robots. […] robotics is the study of the design, application, and use of robots, and that is precisely what this Very Short Introduction is about: what robots do and what roboticists do.”

The above quote is from the book's preface; the book is quite decent and occasionally really quite fascinating. Below I have added some sample quotes and links to topics/stuff covered in the book.

“Some of all of […] five functions – sensing, signalling, moving, intelligence, and energy, integrated into a body – are present in all robots. The actual sensors, motors, and behaviours designed into a particular robot body shape depend on the job that robot is designed to do. […] A robot is: 1. an artificial device that can sense its environment and purposefully act on or in that environment; 2. an embodied artificial intelligence; or 3. a machine that can autonomously carry out useful work. […] Many real-world robots […] are not autonomous but remotely operated by humans. […] These are also known as tele-operated robots. […] From a robot design point of view, the huge advantage of tele-operated robots is that the human in the loop provides the robot’s ‘intelligence’. One of the most difficult problems in robotics — the design of the robot’s artificial intelligence — is therefore solved, so it’s not surprising that so many real-world robots are tele-operated. The fact that tele-operated robots alleviate the problem of AI design should not fool us into making the mistake of thinking that tele-operated robots are not sophisticated — they are. […] counter-intuitively, autonomous robots are often simpler than tele-operated robots […] When roboticists talk about autonomous robots they normally mean robots that decide what to do next entirely without human intervention or control. We need to be careful here because they are not talking about true autonomy, in the sense that you or I would regard ourselves as self-determining individuals, but what I would call ‘control autonomy’. By control autonomy I mean that the robot can undertake its task, or mission, without human intervention, but that mission is still programmed or commanded by a human. In fact, there are very few robots in use in the real world that are autonomous even in this limited sense. 
[…] It is helpful to think about a spectrum of robot autonomy, from remotely operated at one end (no autonomy) to fully autonomous at the other. We can then place robots on this spectrum according to their degree of autonomy. […] On a scale of autonomy, a robot that can react on its own in response to its sensors is highly autonomous. A robot that cannot react, perhaps because it doesn’t have any sensors, is not.”

“It is […] important to note that autonomy and intelligence are not the same thing. A robot can be autonomous but not very smart, like a robot vacuum cleaner. […] A robot vacuum cleaner has a small number of preprogrammed (i.e. instinctive) behaviours and is not capable of any kind of learning […] These are characteristics we would associate with very simple animals. […] When roboticists describe a robot as intelligent, what they mean is ‘a robot that behaves, in some limited sense, as if it were intelligent’. The words as if are important here. […] There are basically two ways in which we can make a robot behave as if it is more intelligent: 1. preprogram a larger number of (instinctive) behaviours; and/or 2. design the robot so that it can learn and therefore develop and grow its own intelligence. The first of these approaches is fine, providing that we know everything there is to know about what the robot must do and all of the situations it will have to respond to while it is working. Typically we can only do this if we design both the robot and its operational environment. […] For unstructured environments, the first approach to robot intelligence above is infeasible simply because it’s impossible to anticipate every possible situation a robot might encounter, especially if it has to interact with humans. The only solution is to design a robot so that it can learn, either from its own experience or from humans or other robots, and therefore adapt and develop its own intelligence: in effect, grow its behavioural repertoire to be able to respond appropriately to more and more situations. This brings us to the subject of learning robots […] robot learning or, more generally, ‘machine learning’ — a branch of AI — has proven to be very much harder than was expected in the early days of Artificial Intelligence.”

“Robot arms on an assembly line are typically programmed to go through a fixed sequence of moves over and over again, for instance spot-welding car body panels, or spray-painting the complete car. These robots are therefore not intelligent. In fact, they often have no exteroceptive sensors at all. […] when we see an assembly line with multiple robot arms positioned on either side along a line, we need to understand that the robots are part of an integrated automated manufacturing system, in which each robot and the line itself have to be carefully programmed in order to coordinate and choreograph the whole operation. […] An important characteristic of assembly-line robots is that they require the working environment to be designed for and around them, i.e. a structured environment. They also need that working environment to be absolutely predictable and repeatable. […] Robot arms either need to be painstakingly programmed, so that the precise movement required of each joint is worked out and coded into a set of instructions for the robot arm or, more often (and rather more easily), ‘taught’ by a human using a control pad to move its end-effector (hand) to the required positions in the robot’s workspace. The robot then memorizes the set of joint movements so that they can be replayed (over and over again). The human operator teaching the robot controls the trajectory, i.e. the path the robot arm’s end-effector follows as it moves through its 3D workspace, and a set of mathematical equations called the ‘inverse kinematics’ converts the trajectory into a set of individual joint movements. Using this approach, it is relatively easy to teach a robot arm to pick up an object and move it smoothly to somewhere else in its workspace while keeping the object level […]. However […] most real-world robot arms are unable to sense the weight of the object and automatically adjust accordingly. 
They are simply designed with stiff enough joints and strong enough motors that, whatever the weight of the object (providing it’s within the robot’s design limits), it can be lifted, moved, and placed with equal precision. […] The robot arm and gripper are a foundational technology in robotics. Not only are they extremely important as […] industrial assembly-line robot[s], but they have become a ‘component’ in many areas of robotics.”
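The teach-then-replay idea described above can be sketched in a few lines. What follows is a minimal illustration (mine, not the book's) of inverse kinematics for a hypothetical planar two-joint arm: given a target end-effector position on the taught trajectory, it solves for the joint angles the robot would memorize and replay. The link lengths and trajectory points are made-up values, and real industrial arms have six or more joints and far more elaborate solvers.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics for a planar two-joint arm: given a target
    end-effector position (x, y), return the joint angles (theta1, theta2).
    Raises ValueError if the target is out of reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.atan2(math.sqrt(1 - c2 * c2), c2)   # 'elbow-down' solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def two_link_fk(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics: joint angles back to end-effector position."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# The taught end-effector trajectory is converted, point by point, into
# the set of joint movements the controller actually replays.
trajectory = [(1.5, 0.2), (1.4, 0.4), (1.3, 0.6)]
joint_moves = [two_link_ik(x, y) for (x, y) in trajectory]
```

Running the forward kinematics on each solved pair of joint angles should land back on the taught trajectory point, which is what "replaying" the memorized joint movements amounts to.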

“Planetary rovers are tele-operated mobile robots that present the designer and operator with a number of very difficult challenges. One challenge is power: a planetary rover needs to be energetically self-sufficient for the lifetime of its mission, and must either be launched with a power source or — as in the case of the Mars rovers — fitted with solar panels capable of recharging the rover’s on-board batteries. Another challenge is dependability. Any mechanical fault is likely to mean the end of the rover’s mission, so it needs to be designed and built to exceptional standards of reliability and fail-safety, so that if parts of the rover should fail, the robot can still operate, albeit with reduced functionality. Extremes of temperature are also a problem […] But the greatest challenge is communication. With a round-trip signal delay time of twenty minutes to Mars and back, tele-operating the rover in real time is impossible. If the rover is moving and its human operator in the command centre on Earth reacts to an obstacle, it’s likely to be already too late; the robot will have hit the obstacle by the time the command signal to turn reaches the rover. An obvious answer to this problem would seem to be to give the rover a degree of autonomy so that it could, for instance, plan a path to a rock or feature of interest — while avoiding obstacles — then, when it arrives at the point of interest, call home and wait. Although path-planning algorithms capable of this level of autonomy have been well developed, the risk of a failure of the algorithm (and hence perhaps the whole mission) is deemed so high that in practice the rovers are manually tele-operated, at very low speed, with each manual manoeuvre carefully planned. When one also takes into account the fact that the Mars rovers are contactable only for a three-hour window per Martian day, a traverse of 100 metres will typically take up one day of operation at an average speed of 30 metres per hour.”

“The realization that the behaviour of an autonomous robot is an emergent property of its interactions with the world has important and far-reaching consequences for the way we design autonomous robots. […] when we design robots, and especially when we come to decide what behaviours to programme the robot’s AI with, we cannot think about the robot on its own. We must take into account every detail of the robot’s working environment. […] Like all machines, robots need power. For fixed robots, like the robot arms used for manufacture, power isn’t a problem because the robot is connected to the electrical mains supply. But for mobile robots power is a huge problem because mobile robots need to carry their energy supply around with them, with problems of both the size and weight of the batteries and, more seriously, how to recharge those batteries when they run out. For autonomous robots, the problem is acute because a robot cannot be said to be truly autonomous unless it has energy autonomy as well as computational autonomy; there seems little point in building a smart robot that ‘dies’ when its battery runs out. […] Localization is a[nother] major problem in mobile robotics; in other words, how does a robot know where it is, in 2D or 3D space. […] [One] type of robot learning is called reinforcement learning. […] it is a kind of conditioned learning. If a robot is able to try out several different behaviours, test the success or failure of each behaviour, then ‘reinforce’ the successful behaviours, it is said to have reinforcement learning. Although this sounds straightforward in principle, it is not. It assumes, first, that a robot has at least one successful behaviour in its list of behaviours to try out, and second, that it can test the benefit of each behaviour — in other words, that the behaviour has an immediate measurable reward. 
If a robot has to try every possible behaviour or if the rewards are delayed, then this kind of so-called ‘unsupervised’ individual robot learning is very slow.”
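The quoted recipe — try out behaviours, test the reward of each, reinforce the successful ones — can be sketched as a simple bandit-style learner. Everything concrete here is invented for illustration: the behaviour names, their payoffs, and the noise level are made up, and the running-average value estimate with occasional random exploration is just one standard way of implementing reinforcement of immediately rewarded behaviours.

```python
import random

def reinforcement_learn(behaviours, reward, trials=500, epsilon=0.1, seed=0):
    """Bandit-style reinforcement learning sketch: repeatedly try behaviours,
    observe an immediate numeric reward, and 'reinforce' (come to prefer)
    the ones that score well. `reward(b)` returns the payoff of behaviour b."""
    rng = random.Random(seed)
    value = {b: 0.0 for b in behaviours}    # estimated worth of each behaviour
    count = {b: 0 for b in behaviours}
    for _ in range(trials):
        if rng.random() < epsilon:          # occasionally explore at random...
            b = rng.choice(behaviours)
        else:                               # ...otherwise exploit the best so far
            b = max(behaviours, key=lambda k: value[k])
        r = reward(b)
        count[b] += 1
        value[b] += (r - value[b]) / count[b]   # running average of reward
    return max(behaviours, key=lambda k: value[k])

# Hypothetical obstacle-avoidance behaviours with noisy immediate rewards.
noise = random.Random(1)
payoff = {"turn_left": 0.2, "turn_right": 0.5, "back_up": 0.8}
best = reinforcement_learn(list(payoff),
                           lambda b: payoff[b] + noise.gauss(0, 0.05))
```

Note how the sketch depends on exactly the two assumptions the book flags: at least one behaviour in the list is actually successful, and each trial yields an immediate measurable reward. Delay the reward and the running-average update no longer knows which behaviour to credit.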

“A robot is described as humanoid if it has a shape or structure that to some degree mimics the human form. […] A small subset of humanoid robots […] attempt a greater degree of fidelity to the human form and appearance, and these are referred to as android. […] It is a recurring theme of this book that robot intelligence technology lags behind robot mechatronics – and nowhere is the mismatch between the two so starkly evident as it is in android robots. The problem is that if a robot looks convincingly human, then we (not unreasonably) expect it to behave like a human. For this reason whole-body android robots are, at the time of writing, disappointing. […] It is important not to overstate the case for humanoid robots. Without doubt, many potential applications of robots in human work- or living spaces would be better served by non-humanoid robots. The humanoid robot to use human tools argument doesn’t make sense if the job can be done autonomously. It would be absurd, for instance, to design a humanoid robot in order to operate a vacuum cleaner designed for humans. Similarly, if we want a driverless car, it doesn’t make sense to build a humanoid robot that sits in the driver’s seat. It seems that the case for humanoid robots is strongest when the robots are required to work alongside, learn from, and interact closely with humans. […] One of the most compelling reasons why robots should be humanoid is for those applications in which the robot has to interact with humans, work in human workspaces, and use tools or devices designed for humans.”

“…to put it bluntly, sex with a robot might not be safe. As soon as a robot has motors and moving parts, then assuring the safety of human-robot interaction becomes a difficult problem and if that interaction is intimate, the consequences of a mechanical or control systems failure could be serious.”

“All of the potential applications of humanoid robots […] have one thing in common: close interaction between human and robot. The nature of that interaction will be characterized by close proximity and communication via natural human interfaces – speech, gesture, and body language. Human and robot may or may not need to come into physical contact, but even when direct contact is not required they will still need to be within each other’s body space. It follows that robot safety, dependability, and trustworthiness are major issues for the robot designer. […] making a robot safe isn’t the same as making it trustworthy. One person trusts another if, generally speaking, that person is reliable and does what they say they will. So if I were to provide a robot that helps to look after your grandmother and I claim that it is perfectly safe — that it’s been designed to cover every risk or hazard — would you trust it? The answer is probably not. Trust in robots, just as in humans, has to be earned. […for more on these topics, see this post – US] […] trustworthiness cannot just be designed into the robot — it has to be earned by use and by experience. Consider a robot intended to fetch drinks for an elderly person. Imagine that the person calls for a glass of water. The robot then needs to fetch the drink, which may well require the robot to find a glass and fill it with water. Those tasks require sensing, dexterity, and physical manipulation, but they are problems that can be solved with current technology. The problem of trust arises when the robot brings the glass of water to the human. How does the robot give the glass to the human? If the robot has an arm so that it can hold out the glass in the same way a human would, how would the robot know when to let go? The robot clearly needs sensors in order to see and feel when the human has taken hold of the glass. The physical process of a robot handing something to a person is fraught with difficulty. 
Imagine, for instance, that the robot holds out its arm with the glass but the human can’t reach the glass. How does the robot decide where and how far it would be safe to bring its arm toward the person? What if the human takes hold of the glass but then the glass slips; does the robot let it fall or should it — as a human would — renew its grip on the glass? At what point would the robot decide the transaction has failed: it can’t give the glass of water to the person, or they won’t take it; perhaps they are asleep, have simply forgotten they wanted a glass of water, or are confused. How does the robot sense that it should give up and perhaps call for assistance? These are difficult problems in robot cognition. Until they are solved, it’s doubtful we could trust a robot sufficiently well to do even a seemingly simple thing like handing over a glass of water.”

“The fundamental problem with Asimov’s laws of robotics, or any similar construction, is that they require the robot to make judgments. […] they assume that the robot is capable of some level of moral agency. […] No robot that we can currently build, or will build in the foreseeable future, is ‘intelligent’ enough to be able to even recognize, let alone make, these kinds of choices. […] Most roboticists agree that for the foreseeable future robots cannot be ethical, moral agents. […] precisely because, as we have seen, present-day ‘intelligent’ robots are not very intelligent, there is a danger of a gap between what robot users believe those robots to be capable of and what they are actually capable of. Given humans’ propensity to anthropomorphize and form emotional attachments to machines, there is clearly a danger that such vulnerabilities could be either unwittingly or deliberately exploited. Although robots cannot be ethical, roboticists should be.”

“In robotics research, the simulator has become an essential tool of the roboticist’s trade. The reason for this is that designing, building, and testing successive versions of real robots is both expensive and time-consuming, and if part of that work can be undertaken in the virtual rather than the real world, development times can be shortened, and the chances of a robot that works first time substantially improved. A robot simulator has three essential features. First, it must provide a virtual world. Second, it must offer a facility for creating a virtual model of the real robot. And third, it must allow the robot’s controller to be installed and ‘run’ on the virtual robot in the virtual world; the controller then determines how the robot behaves when running in the simulator. The simulator should also provide a visualization of the virtual world and simulated robots in it so that the designer can see what’s going on. […] These are difficult challenges for developers of robot simulators.”
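The three essential features listed above can be made concrete with a toy simulator. This is an invented sketch (not any particular simulator such as Webots): a virtual world, a virtual robot model with one simulated range sensor, and a controller that is "run" on the virtual robot inside a simulation loop.

```python
class VirtualWorld:
    """Feature 1: a virtual world -- here just a 1D corridor with a wall."""
    def __init__(self, wall_at=10.0):
        self.wall_at = wall_at

class VirtualRobot:
    """Feature 2: a virtual model of the real robot -- a position plus a
    single simulated range sensor that 'sees' the distance to the wall."""
    def __init__(self, world, position=0.0):
        self.world = world
        self.position = position
    def range_sensor(self):
        return self.world.wall_at - self.position
    def drive(self, speed, dt):
        self.position += speed * dt

def controller(robot):
    """Feature 3: the robot's controller, installed and 'run' on the
    virtual robot. A trivial behaviour: drive forward at 1 m/s and stop
    when the sensor reports less than a metre of clearance."""
    return 1.0 if robot.range_sensor() > 1.0 else 0.0

world = VirtualWorld()
robot = VirtualRobot(world)
for step in range(200):                 # the simulation loop, 0.1 s per tick
    robot.drive(controller(robot), dt=0.1)
```

A real simulator adds physics, 3D geometry, and the visualization the book mentions, but the division of labour — world, robot model, controller, loop — is the same.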

“The next big step in miniaturization […] requires the solution of hugely difficult problems and, in all likelihood, the use of exotic approaches to design and fabrication. […] It is impossible to shrink mechanical and electrical components, or MEMS devices, in order to reduce total robot size to a few micrometres. In any event, the physics of locomotion through a fluid changes at the microscale and simply shrinking mechanical components from macro to micro — even if it were possible — would fail to address this problem. A radical approach is to leave behind conventional materials and components and move to a bioengineered approach in which natural bacteria are modified by adding artificial components. The result is a hybrid of artificial and natural (biological) components. The bacterium has many desirable properties for a microbot. By selecting a bacterium with a flagellum, we have locomotion perfectly suited to the medium. […] Another hugely desirable characteristic is that the bacteria are able to naturally scavenge for energy, thus avoiding the otherwise serious problem of powering the microbots. […] Whatever technology is used to create the microbots, huge problems would have to be overcome before a swarm of medical microbots could become a practical reality. The first is technical: how do surgeons or medical technicians reliably control and monitor the swarm while it’s working inside the body? Or, assuming we can give the microbots sufficient intelligence and autonomy (also a very difficult challenge), do we forgo precise control and human intervention altogether by giving the robots the swarm intelligence to be able to do the job, i.e. find the problem, fix it, then exit? […] these questions bring us to what would undoubtedly represent the greatest challenge: validating the swarm of medical microbots as effective, dependable, and above all safe, then gaining approval and public acceptance for its use. 
[…] Do we treat the validation of the medical microbot swarm as an engineering problem, and attempt to apply the same kinds of methods we would use to validate safety-critical systems such as air traffic control systems? Or do we instead regard the medical microbot swarm as a drug and validate it with conventional and (by and large) trusted processes, including clinical trials, leading to approval and licensing for use? My suspicion is that we will need a new combination of both approaches.”

Links:

E-puck mobile robot.
Jacques de Vaucanson’s Digesting Duck.
Cybernetics.
Alan Turing. W. Ross Ashby. Norbert Wiener. Warren McCulloch. William Grey Walter.
Turtle (robot).
Industrial robot. Mechanical arm. Robotic arm. Robot end effector.
Automated guided vehicle.
Remotely operated vehicle. Unmanned aerial vehicle. Remotely operated underwater vehicle. Wheelbarrow (robot).
Robot-assisted surgery.
Lego Mindstorms NXT. NXT Intelligent Brick.
Biomimetic robots.
Artificial life.
Braitenberg vehicle.
Shakey the robot. Sense-Plan-Act. Rodney Brooks. A robust layered control system for a mobile robot.
Toto the robot.
Slugbot. Ecobot. Microbial fuel cell.
Scratchbot.
Simultaneous localization and mapping (SLAM).
Programming by demonstration.
Evolutionary algorithm.
NASA Robonaut. BERT 2. Kismet (robot). Jules (robot). Frubber. Uncanny valley.
AIBO. Paro.
Cronos Robot. ECCEROBOT.
Swarm robotics. S-bot mobile robot. Swarmanoid project.
Artificial neural network.
Symbrion.
Webots.
Kilobot.
Microelectromechanical systems. I-SWARM project.
ALICE (Artificial Linguistic Internet Computer Entity). BINA 48 (Breakthrough Intelligence via Neural Architecture 48).


June 15, 2018 Posted by | Books, Computer science, Engineering, Medicine

Computers, People and the Real World

“An exploration of some spectacular failures of modern day computer-aided systems, which fail to take into account the real-world […] Almost nobody wants an IT system. What they want is a better way of doing something, whether that is buying and selling shares on the Stock Exchange, flying an airliner or running a hospital. So the system they want will usually involve changes to the way people work, and interactions with physical objects and the environment. Drawing on examples including the new programme for IT in the NHS, this lecture explores what can go wrong when business change is mistakenly viewed as an IT project.” (Quote from the video description on YouTube).

Some links related to the lecture coverage:
Computer-aided dispatch.
London Ambulance Service – computerization.
Report of the Inquiry Into The London Ambulance Service (February 1993).
Sociotechnical system.
Tay (bot).

A few observations/quotes from the lecture (-notes):

The bidder who least understands the complexity of a requirement is likely to put in the lowest bid.
“It is a mistake to use a computer system to impose new work processes on under-trained or reluctant staff. – Front line staff are often best placed to judge what is practical.” [A quote from later in the lecture [~36 mins] is even more explicit: “The experts in any work process are usually the people who have been carrying it out.”]
“It is important to understand that in any system implementation the people factor is as important, and arguably more important, than the technical infrastructure.” (This last one is a full quote from the report linked above; the lecture includes a shortened version – US) [Quotes and observations above from ~16 minute mark unless otherwise noted]

“There is no such thing as an IT project”
“(almost) every significant “IT Project” is actually a business change project that is enabled and supported by one or more IT systems. Business processes are expensive to change. The business changes take at least as long and cost as much as the new IT system, and need at least as much planning and management” [~29 mins]

“Software packages are packaged business processes
– Changing a package to fit the way you want to work can cost more than writing new software” [~31-32 mins]

“Most computer systems interact with people: the sociotechnical view is that the people and the IT are two components of a larger system. Designing that larger system is the real task.” [~36 mins]

May 31, 2018 Posted by | Computer science, Economics, Engineering, Lectures

Molecular biology (II)

Below I have added some more quotes and links related to the book’s coverage:

“[P]roteins are the most abundant molecules in the body except for water. […] Proteins make up half the dry weight of a cell whereas DNA and RNA make up only 3 per cent and 20 per cent respectively. […] The approximately 20,000 protein-coding genes in the human genome can, by alternative splicing, multiple translation starts, and post-translational modifications, produce over 1,000,000 different proteins, collectively called ‘the proteome‘. It is the size of the proteome and not the genome that defines the complexity of an organism. […] For simple organisms, such as viruses, all the proteins coded by their genome can be deduced from its sequence and these comprise the viral proteome. However for higher organisms the complete proteome is far larger than the genome […] For these organisms not all the proteins coded by the genome are found in any one tissue at any one time and therefore a partial proteome is usually studied. What are of interest are those proteins that are expressed in specific cell types under defined conditions.”

“Enzymes are proteins that catalyze or alter the rate of chemical reactions […] Enzymes can speed up reactions […] but they can also slow some reactions down. Proteins play a number of other critical roles. They are involved in maintaining cell shape and providing structural support to connective tissues like cartilage and bone. Specialized proteins such as actin and myosin are required [for] muscular movement. Other proteins act as ‘messengers’ relaying signals to regulate and coordinate various cell processes, e.g. the hormone insulin. Yet another class of protein is the antibodies, produced in response to foreign agents such as bacteria, fungi, and viruses.”

“Proteins are composed of amino acids. Amino acids are organic compounds with […] an amino group […] and a carboxyl group […] In addition, amino acids carry various side chains that give them their individual functions. The twenty-two amino acids found in proteins are called proteinogenic […] but other amino acids exist that are non-protein functioning. […] A peptide bond is formed between two amino acids by the removal of a water molecule. […] each individual unit in a peptide or protein is known as an amino acid residue. […] Chains of less than 50-70 amino acid residues are known as peptides or polypeptides and >50-70 as proteins, although many proteins are composed of more than one polypeptide chain. […] Proteins are macromolecules consisting of one or more strings of amino acids folded into highly specific 3D-structures. Each amino acid has a different size and carries a different side group. It is the nature of the different side groups that facilitates the correct folding of a polypeptide chain into a functional tertiary protein structure.”
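The "removal of a water molecule" point lends itself to a small arithmetic sketch (mine, not the book's): the mass of a peptide is the sum of the free amino acid masses minus one water per peptide bond. The two masses below are standard average values for glycine and alanine, and the 50-residue cutoff is one reading of the 50-70 range quoted above.

```python
# Approximate average masses (g/mol) of two free amino acids; one water
# molecule is lost each time a peptide bond forms between two of them.
RESIDUE_MASS = {"Gly": 75.07, "Ala": 89.09}
WATER = 18.02

def peptide_mass(chain):
    """Mass of a peptide chain: sum of free amino acid masses, minus one
    water molecule per peptide bond (n residues -> n - 1 bonds)."""
    return sum(RESIDUE_MASS[aa] for aa in chain) - WATER * (len(chain) - 1)

def classify(n_residues, cutoff=50):
    """Rough chain-length classification per the quoted 50-70 residue
    convention (the exact cutoff is a judgment call)."""
    return "peptide/polypeptide" if n_residues < cutoff else "protein"

mass = peptide_mass(["Gly", "Ala"])   # the dipeptide glycylalanine
```

For glycylalanine this gives 75.07 + 89.09 − 18.02 ≈ 146.14 g/mol, matching the known mass of the dipeptide; each further residue adds its own mass less another water.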

“Atoms scatter the waves of X-rays mainly through their electrons, thus forming secondary or reflected waves. The pattern of X-rays diffracted by the atoms in the protein can be captured on a photographic plate or an image sensor such as a charge coupled device placed behind the crystal. The pattern and relative intensity of the spots on the diffraction image are then used to calculate the arrangement of atoms in the original protein. Complex data processing is required to convert the series of 2D diffraction or scatter patterns into a 3D image of the protein. […] The continued success and significance of this technique for molecular biology is witnessed by the fact that almost 100,000 structures of biological molecules have been determined this way, of which most are proteins.”

“The number of proteins in higher organisms far exceeds the number of known coding genes. The fact that many proteins carry out multiple functions but in a regulated manner is one way a complex proteome arises without increasing the number of genes. Proteins that performed a single role in the ancestral organism have acquired extra and often disparate functions through evolution. […] The active site of an enzyme employed in catalysis is only a small part of the protein, leaving spare capacity for acquiring a second function. […] The glycolytic pathway is involved in the breakdown of sugars such as glucose to release energy. Many of the highly conserved and ancient enzymes from this pathway have developed secondary or ‘moonlighting’ functions. Proteins often change their location in the cell in order to perform a ‘second job’. […] The limited size of the genome may not be the only evolutionary pressure for proteins to moonlight. Combining two functions in one protein can have the advantage of coordinating multiple activities in a cell, enabling it to respond quickly to changes in the environment without the need for lengthy transcription and translational processes.”

“Post-translational modifications (PTMs) […] is [a] process that can modify the role of a protein by addition of chemical groups to amino acids in the peptide chain after translation. Addition of phosphate groups (phosphorylation), for example, is a common mechanism for activating or deactivating an enzyme. Other common PTMs include addition of acetyl groups (acetylation), glucose (glucosylation), or methyl groups (methylation). […] Some additions are reversible, facilitating the switching between active and inactive states, and others are irreversible such as marking a protein for destruction by ubiquitin. [The difference between reversible and irreversible modifications can be quite important in pharmacology, and if you’re curious to know more about these topics Coleman’s drug metabolism text provides great coverage of related topics – US.] Diseases caused by malfunction of these modifications highlight the importance of PTMs. […] in diabetes [h]igh blood glucose leads to unwanted glucosylation of proteins. At the high glucose concentrations associated with diabetes, an unwanted irreversible chemical reaction binds the glucose to amino acid residues such as lysines exposed on the protein surface. The glucosylated proteins then behave badly, cross-linking themselves to the extracellular matrix. This is particularly dangerous in the kidney where it decreases function and can lead to renal failure.”

“Twenty thousand protein-coding genes make up the human genome but for any given cell only about half of these are expressed. […] Many genes get switched off during differentiation and a major mechanism for this is epigenetics. […] an epigenetic trait […] is ‘a stably heritable phenotype resulting from changes in the chromosome without alterations in the DNA sequence’. Epigenetics involves the chemical alteration of DNA by methyl or other small molecular groups to affect the accessibility of a gene by the transcription machinery […] Epigenetics can […] act on gene expression without affecting the stability of the genetic code by modifying the DNA, the histones in chromatin, or a whole chromosome. […] Epigenetic signatures are not only passed on to somatic daughter cells but they can also be transferred through the germline to the offspring. […] At first the evidence appeared circumstantial but more recent studies have provided direct proof of epigenetic changes involving gene methylation being inherited. Rodent models have provided mechanistic evidence. […] the importance of epigenetics in development is highlighted by the fact that low dietary folate, a nutrient essential for methylation, has been linked to higher risk of birth defects in the offspring.” […on the other hand, well…]

“The cell cycle is divided into phases […] Transition from G1 into S phase commits the cell to division and is therefore a very tightly controlled restriction point. Withdrawal of growth factors, insufficient nucleotides, or energy to complete DNA replication, or even a damaged template DNA, would compromise the process. Problems are therefore detected and the cell cycle halted by cell cycle inhibitors before the cell has committed to DNA duplication. […] The cell cycle inhibitors inactivate the kinases that promote transition through the phases, thus halting the cell cycle. […] The cell cycle can also be paused in S phase to allow time for DNA repairs to be carried out before cell division. The consequences of uncontrolled cell division are so catastrophic that evolution has provided complex checks and balances to maintain fidelity. The price of failure is apoptosis […] 50 to 70 billion cells die every day in a human adult by the controlled molecular process of apoptosis.”

“There are many diseases that arise because a particular protein is either absent or a faulty protein is produced. Administering a correct version of that protein can treat these patients. The first commercially available recombinant protein to be produced for medical use was human insulin to treat diabetes mellitus. […] (FDA) approved the recombinant insulin for clinical use in 1982. Since then over 300 protein-based recombinant pharmaceuticals have been licensed by the FDA and the European Medicines Agency (EMA) […], and many more are undergoing clinical trials. Therapeutic proteins can be produced in bacterial cells but more often mammalian cells such as the Chinese hamster ovary cell line and human fibroblasts are used as these hosts are better able to produce fully functional human protein. However, using mammalian cells is extremely expensive and an alternative is to use live animals or plants. This is called molecular pharming and is an innovative way of producing large amounts of protein relatively cheaply. […] In plant pharming, tobacco, rice, maize, potato, carrots, and tomatoes have all been used to produce therapeutic proteins. […] [One] class of proteins that can be engineered using gene-cloning technology is therapeutic antibodies. […] Therapeutic antibodies are designed to be monoclonal, that is, they are engineered so that they are specific for a particular antigen to which they bind, to block the antigen’s harmful effects. […] Monoclonal antibodies are at the forefront of biological therapeutics as they are highly specific and tend not to induce major side effects.”

“In gene therapy the aim is to restore the function of a faulty gene by introducing a correct version of that gene. […] a cloned gene is transferred into the cells of a patient. Once inside the cell, the protein encoded by the gene is produced and the defect is corrected. […] there are major hurdles to be overcome for gene therapy to be effective. One is the gene construct has to be delivered to the diseased cells or tissues. This can often be difficult […] Mammalian cells […] have complex mechanisms that have evolved to prevent unwanted material such as foreign DNA getting in. Second, introduction of any genetic construct is likely to trigger the patient’s immune response, which can be fatal […] once delivered, expression of the gene product has to be sustained to be effective. One approach to delivering genes to the cells is to use genetically engineered viruses constructed so that most of the viral genome is deleted […] Once inside the cell, some viral vectors such as the retroviruses integrate into the host genome […]. This is an advantage as it provides long-lasting expression of the gene product. However, it also poses a safety risk, as there is little control over where the viral vector will insert into the patient’s genome. If the insertion occurs within a coding gene, this may inactivate gene function. If it integrates close to transcriptional start sites, where promoters and enhancer sequences are located, inappropriate gene expression can occur. This was observed in early gene therapy trials [where some patients who got this type of treatment developed cancer as a result of it. A few more details here] […] Adeno-associated viruses (AAVs) […] are often used in gene therapy applications as they are non-infectious, induce only a minimal immune response, and can be engineered to integrate into the host genome […] However, AAVs can only carry a small gene insert and so are limited to use with genes that are of a small size. 
[…] An alternative delivery system to viruses is to package the DNA into liposomes that are then taken up by the cells. This is safer than using viruses as liposomes do not integrate into the host genome and are not very immunogenic. However, liposome uptake by the cells can be less efficient, resulting in lower expression of the gene.”

Links:

One gene–one enzyme hypothesis.
Molecular chaperone.
Protein turnover.
Isoelectric point.
Gel electrophoresis. Polyacrylamide.
Two-dimensional gel electrophoresis.
Mass spectrometry.
Proteomics.
Peptide mass fingerprinting.
Worldwide Protein Data Bank.
Nuclear magnetic resonance spectroscopy of proteins.
Immunoglobulins. Epitope.
Western blot.
Immunohistochemistry.
Crystallin. β-catenin.
Protein isoform.
Prion.
Gene expression. Transcriptional regulation. Chromatin. Transcription factor. Gene silencing. Histone. NF-κB. Chromatin immunoprecipitation.
The agouti mouse model.
X-inactive specific transcript (Xist).
Cell cycle. Cyclin. Cyclin-dependent kinase.
Retinoblastoma protein pRb.
Cytochrome c. Caspase. Bcl-2 family. Bcl-2-associated X protein.
Hybridoma technology. Muromonab-CD3.
Recombinant vaccines and the development of new vaccine strategies.
Knockout mouse.
Adenovirus Vectors for Gene Therapy, Vaccination and Cancer Gene Therapy.
Genetically modified food. Bacillus thuringiensis. Golden rice.

 

May 29, 2018 Posted by | Biology, Books, Chemistry, Diabetes, Engineering, Genetics, Immunology, Medicine, Molecular biology, Pharmacology

Structural engineering

“The purpose of the book is three-fold. First, I aim to help the general reader appreciate the nature of structure, the role of the structural engineer in man-made structures, and understand better the relationship between architecture and engineering. Second, I provide an overview of how structures work: how they stand up to the various demands made of them. Third, I give students and prospective students in engineering, architecture, and science access to perspectives and qualitative understanding of advanced modern structures — going well beyond the simple statics of most introductory texts. […] Structural engineering is an important part of almost all undergraduate courses in engineering. This book is novel in the use of ‘thought-experiments’ as a straightforward way of explaining some of the important concepts that students often find the most difficult. These include virtual work, strain energy, and maximum and minimum energy principles, all of which are basic to modern computational techniques. The focus is on gaining understanding without the distraction of mathematical detail. The book is therefore particularly relevant for students of civil, mechanical, aeronautical, and aerospace engineering but, of course, it does not cover all of the theoretical detail necessary for completing such courses.”

The above quote is from the book’s preface. I gave the book 2 stars on goodreads, and I must say that I think David Muir Wood’s book in this series on a similar and closely overlapping topic, civil engineering, was simply a better book – if you’re planning on reading only one book on these topics, in my opinion you should pick Wood’s. I have two main complaints about this book: there’s too much material on the aesthetic properties of structures and on the history and development of the split between architecture and engineering; and the author seems to think it’s no problem to cover quite complicated topics using only analogies and thought experiments, without showing any of the equations. As for the first point, I don’t really have any interest in aesthetics or architectural history; as for the second, I can handle math reasonably well, but I usually have trouble when people insist on hiding the equations from me and talking only ‘in images’. The absence of equations doesn’t mean the coverage is much dumbed down; rather, the author is trying to cover the sort of material we usually use mathematics to talk about – because mathematics is the most efficient language for it – in a different kind of language, and things get lost in the translation. He got rid of the math, but not the complexity. The book does include many illustrations, including illustrations of some quite complicated topics and dynamics, but some of the things discussed can’t be illustrated well with images, because you ‘run out of dimensions’ before you’ve handled all the relevant aspects – an admission the author himself makes in the book.

Anyway, the book is not terrible and there’s some interesting stuff in there. I’ve added a few more quotes and some links related to the book’s coverage below.

“All structures span a gap or a space of some kind and their primary role is to transmit the imposed forces safely. A bridge spans an obstruction like a road or a river. The roof truss of a house spans the rooms of the house. The fuselage of a jumbo jet spans between wheels of its undercarriage on the tarmac of an airport terminal and the self-weight, lift and drag forces in flight. The hull of a ship spans between the variable buoyancy forces caused by the waves of the sea. To be fit for purpose every structure has to cope with specific local conditions and perform inside acceptable boundaries of behaviour—which engineers call ‘limit states’. […] Safety is paramount in two ways. First, the risk of a structure totally losing its structural integrity must be very low—for example a building must not collapse or a ship break up. This maximum level of performance is called an ultimate limit state. If a structure should reach that state for whatever reason then the structural engineer tries to ensure that the collapse or break up is not sudden—that there is some degree of warning—but this is not always possible […] Second, structures must be able to do what they were built for—this is called serviceability or performance limit state. So for example a skyscraper building must not sway so much that it causes discomfort to the occupants, even if the risk of total collapse is still very small.”

“At its simplest force is a pull (tension) or a push (compression). […] There are three ways in which materials are strong in different combinations—pulling (tension), pushing (compression), and sliding (shear). Each is very important […] all intact structures have internal forces that balance the external forces acting on them. These external forces come from simple self-weight, people standing, sitting, walking, travelling across them in cars, trucks, and trains, and from the environment such as wind, water, and earthquakes. In that state of equilibrium it turns out that structures are naturally lazy—the energy stored in them is a minimum for that shape or form of structure. Form-finding structures are a special group of buildings that are allowed to find their own shape—subject to certain constraints. There are two classes—in the first, the form-finding process occurs in a model (which may be physical or theoretical) and the structure is scaled up from the model. In the second, the structure is actually built and then allowed to settle into shape. In both cases the structures are self-adjusting in that they move to a position in which the internal forces are in equilibrium and contain minimum energy. […] there is a big problem in using self-adjusting structures in practice. The movements under changing loads can make the structures unfit for purpose. […] Triangles are important in structural engineering because they are the simplest stable form of structure and you see them in all kinds of structures—whether form-finding or not. […] Other forms of pin jointed structure, such as a rectangle, will deform in shear as a mechanism […] unless it has diagonal bracing—making it triangular. […] bending occurs in part of a structure when the forces acting on it tend to make it turn or rotate—but it is constrained or prevented from turning freely by the way it is connected to the rest of the structure or to its foundations. The turning forces may be internal or external.”

“Energy is the capacity of a force to do work. If you stretch an elastic band it has an internal tension force resisting your pull. If you let go of one end the band will recoil and could inflict a sharp sting on your other hand. The internal force has energy or the capacity to do work because you stretched it. Before you let go the energy was potential; after you let go the energy became kinetic. Potential energy is the capacity to do work because of the position of something—in this case because you pulled the two ends of the band apart. […] A car at the top of a hill has the potential energy to roll down the hill if the brakes are released. The potential energy in the elastic band and in a structure has a specific name—it is called ‘strain energy’. Kinetic energy is due to movement, so when you let go of the band […] the potential energy is converted into kinetic energy. Kinetic energy depends on mass and velocity—so a truck can develop more kinetic energy than a small car. When a structure is loaded by a force then the structure moves in whatever way it can to ‘get out of the way’. If it can move freely it will do—just as if you push a car with the handbrake off it will roll forward. However, if the handbrake is on the car will not move, and an internal force will be set up between the point at which you are pushing and the wheels as they grip the road.”

“[A] rope hanging freely as a catenary has minimum energy and […] it can only resist one kind of force—tension. Engineers say that it has one degree of freedom. […] In brief, degrees of freedom are the independent directions in which a structure or any part of a structure can move or deform […] Movements along degrees of freedom define the shape and location of any object at a given time. Each part, each piece of a physical structure whatever its size is a physical object embedded in and connected to other objects […] similar objects which I will call its neighbours. Whatever its size each has the potential to move unless something stops it. Where it may move freely […] then no internal resisting force is created. […] where it is prevented from moving in any direction a reaction force is created with consequential internal forces in the structure. For example at a support to a bridge, where the whole bridge is normally stopped from moving vertically, then an external vertical reaction force develops which must be resisted by a set of internal forces that will depend on the form of the bridge. So inside the bridge structure each piece, however small or large, will move—but not freely. The neighbouring objects will get in the way […]. When this happens internal forces are created as the objects bump up against each other and we represent or model those forces along the pathways which are the local degrees of freedom. The structure has to be strong enough to resist these internal forces along these pathways.”

“The next question is ‘How do we find out how big the forces and movements are?’ It turns out that there is a whole class of structures where this is reasonably straightforward and these are the structures covered in elementary textbooks. Engineers call them ‘statically determinate’ […] For these structures we can find the sizes of the forces just by balancing the internal and external forces to establish equilibrium. […] Unfortunately many real structures can’t be fully explained in this way—they are ‘statically indeterminate’. This is because whilst establishing equilibrium between internal and external forces is necessary it is not sufficient for finding all of the internal forces. […] The four-legged stool is statically indeterminate. You will begin to understand this if you have ever sat at a four-legged wobbly table […] which has one leg shorter than the other three legs. There can be no force in that leg because there is no reaction from the ground. What is more, the opposite leg will have no internal force either because otherwise there would be a net turning moment about the line joining the other two legs. Thus the table is balanced on two legs—which is why it wobbles back and forth. […] each leg has one degree of freedom but we have only three ways of balancing them in the (x,y,z) directions. In mathematical terms, we have four unknown variables (the internal forces) but only three equations (balancing equilibrium in three directions). It follows that there isn’t just one set of forces in equilibrium—indeed, there are many such sets.”
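
The four-unknowns-but-three-equations point is easy to check numerically. A minimal sketch (the stool geometry and the 800 N load are made-up illustration values, not from the book): two different sets of leg forces both satisfy vertical force balance and moment balance about both horizontal axes, so equilibrium alone cannot tell us which one the real stool carries.

```python
# A statically indeterminate four-legged stool: 4 unknown leg forces,
# only 3 equilibrium equations. Legs sit at the corners of a 2 x 2 stool;
# a hypothetical 800 N load acts at the centre.
legs = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # (x, y) positions of the legs
W = 800.0                                     # total vertical load, N

def in_equilibrium(forces):
    """Vertical force balance plus moment balance about the x and y axes."""
    vertical = sum(forces)                                     # must equal W
    moment_x = sum(f * y for f, (x, y) in zip(forces, legs))   # must be 0
    moment_y = sum(f * x for f, (x, y) in zip(forces, legs))   # must be 0
    return (abs(vertical - W) < 1e-9
            and abs(moment_x) < 1e-9 and abs(moment_y) < 1e-9)

# Two different force distributions both satisfy all three equations:
even   = [200.0, 200.0, 200.0, 200.0]
uneven = [300.0, 100.0, 300.0, 100.0]   # even + 100 * (1, -1, 1, -1)
print(in_equilibrium(even), in_equilibrium(uneven))   # True True
```

The direction (1, -1, 1, -1) is the ‘extra’ freedom: any multiple of it can be added without disturbing equilibrium, which is exactly why there are many valid sets of internal forces.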

“[W]hen a structure is in equilibrium it has minimum strain energy. […] Strictly speaking, minimum strain energy as a criterion for equilibrium is [however] true only in specific circumstances. To understand this we need to look at the constitutive relations between forces and deformations or displacements. Strain energy is stored potential energy and that energy is the capacity to do work. The strain energy in a body is there because work has been done on it—a force moved through a distance. Hence in order to know the energy we must know how much displacement is caused by a given force. This is called a ‘constitutive relation’ and has the form ‘force equals a constitutive factor times a displacement’. The most common of these relationships is called ‘linear elastic’ where the force equals a simple numerical factor—called the stiffness—times the displacement […] The inverse of the stiffness is called flexibility”.
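
The linear elastic constitutive relation is simple enough to state in a few lines of arithmetic. A minimal sketch (the stiffness and displacement values are invented for illustration): force is stiffness times displacement, flexibility is the inverse of stiffness, and the strain energy is the work done, i.e. the area under the linear force–displacement line.

```python
# Linear elastic constitutive relation: force = stiffness * displacement.
# Strain energy is the stored work: U = 1/2 * k * x**2 (area under the
# force-displacement line). Values below are illustrative only.
k = 2000.0    # stiffness, N/m
x = 0.05      # displacement, m

force = k * x                     # ~100 N
flexibility = 1.0 / k             # inverse of stiffness, m/N
strain_energy = 0.5 * k * x**2    # ~2.5 J of stored potential energy

print(force, flexibility, strain_energy)
```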

“Aeroplanes take off or ascend because the lift forces due to the forward motion of the plane exceed the weight […] In level flight or cruise the plane is neutrally buoyant and flies at a steady altitude. […] The structure of an aircraft consists of four sets of tubes: the fuselage, the wings, the tail, and the fin. For obvious reasons their weight needs to be as small as possible. […] Modern aircraft structures are semi-monocoque—meaning stressed skin but with a supporting frame. In other words the skin covering, which may be only a few millimetres thick, becomes part of the structure. […] In an overall sense, the lift and drag forces effectively act on the wings through centres of pressure. The wings also carry the weight of engines and fuel. During a typical flight, the positions of these centres of force vary along the wing—for example as fuel is used. The wings are balanced cantilevers fixed to the fuselage. Longer wings (compared to their width) produce greater lift but are also necessarily heavier—so a compromise is required.”

“When structures move quickly, in particular if they accelerate or decelerate, we have to consider […] the inertia force and the damping force. They occur, for example, as an aeroplane takes off and picks up speed. They occur in bridges and buildings that oscillate in the wind. As these structures move the various bits of the structure remain attached—perhaps vibrating in very complex patterns, but they remain joined together in a state of dynamic equilibrium. An inertia force results from an acceleration or deceleration of an object and is directly proportional to the weight of that object. […] Newton’s 2nd Law tells us that the magnitudes of these [inertial] forces are proportional to the rates of change of momentum. […] Damping arises from friction or ‘looseness’ between components. As a consequence, energy is dissipated into other forms such as heat and sound, and the vibrations get smaller. […] The kinetic energy of a structure in static equilibrium is zero, but as the structure moves its potential energy is converted into kinetic energy. This is because the total energy remains constant by the principle of the conservation of energy (the first law of thermodynamics). The changing forces and displacements along the degree of freedom pathways travel as a wave […]. The amplitude of the wave depends on the nature of the material and the connections between components.”
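
The truck-versus-car remark follows directly from the formulas involved. A minimal sketch (masses and speed are invented illustration values): inertia force is mass times acceleration, and kinetic energy is ½mv², so at the same speed a vehicle ten times heavier carries ten times the kinetic energy.

```python
# Newton's 2nd law for constant mass: inertia force = mass * acceleration.
# Kinetic energy = 1/2 * m * v**2. Masses and speed are illustrative only.
def inertia_force(mass, accel):
    return mass * accel            # N

def kinetic_energy(mass, velocity):
    return 0.5 * mass * velocity**2   # J

car, truck = 1500.0, 15000.0   # kg
v = 20.0                       # m/s (roughly 72 km/h)

print(inertia_force(car, 3.0))                            # 4500.0 N
print(kinetic_energy(car, v), kinetic_energy(truck, v))   # 300000.0 3000000.0
```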

“For [a] structure to be safe the materials must be strong enough to resist the tension, the compression, and the shear. The strength of materials in tension is reasonably straightforward. We just need to know the limiting forces the material can resist. This is usually specified as a set of stresses. A stress is a force divided by a cross sectional area and represents a localized force over a small area of the material. Typical limiting tensile stresses are called the yield stress […] and the rupture stress—so we just need to know their numerical values from tests. Yield occurs when the material cannot regain its original state, and permanent displacements or strains occur. Rupture is when the material breaks or fractures. […] Limiting average shear stresses and maximum allowable stress are known for various materials. […] Strength in compression is much more difficult […] Modern practice using the finite element method enables us to make theoretical estimates […] but it is still approximate because of the simplifications necessary to do the computer analysis […]. One of the challenges to engineers who rely on finite element analysis is to make sure they understand the implications of the simplifications used.”
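
The stress definition in the quote is just force over area, and the yield check is a comparison against a limiting value from tests. A minimal sketch (the bar size, load, and yield stress are invented illustration values, not design data):

```python
# Stress = force / cross-sectional area, in Pa (N/m^2). A tension member
# is acceptable if the stress stays below the material's yield stress.
# All numbers here are illustrative, not real design values.
def stress(force_n, area_m2):
    return force_n / area_m2

bar_area = 0.0004          # a 20 mm x 20 mm bar, in m^2
load = 50_000.0            # applied tension, N
sigma = stress(load, bar_area)   # ~125e6 Pa, i.e. ~125 MPa

yield_stress = 250e6       # e.g. a mild-steel-like value, ~250 MPa
print(sigma, sigma < yield_stress)   # the bar stays below yield
```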

“Dynamic loads cause vibrations. One particularly dangerous form of vibration is called resonance […]. All structures have a natural frequency of free vibration. […] Resonance occurs if the frequency of an external vibrating force coincides with the natural frequency of the structure. The consequence is a rapid build up of vibrations that can become seriously damaging. […] Wind is a major source of vibrations. As it flows around a bluff body the air breaks away from the surface and moves in a circular motion like a whirlpool or whirlwind as eddies or vortices. Under certain conditions these vortices may break away on alternate sides, and as they are shed from the body they create pressure differences that cause the body to oscillate. […] a structure is in stable equilibrium when a small perturbation does not result in large displacements. A structure in dynamic equilibrium may oscillate about a stable equilibrium position. […] Flutter is dynamic and a form of wind-excited self-reinforcing oscillation. It occurs, as in the P-delta effect, because of changes in geometry. Forces that are no longer in line because of large displacements tend to modify those displacements of the structure, and these, in turn, modify the forces, and so on. In this process the energy input during a cycle of vibration may be greater than that lost by damping and so the amplitude increases in each cycle until destruction. It is a positive feed-back mechanism that amplifies the initial deformations, causes non-linearity, material plasticity and decreased stiffness, and reduced natural frequency. […] Regular pulsating loads, even very small ones, can cause other problems too through a phenomenon known as fatigue. The word is descriptive—under certain conditions the materials just get tired and crack. A normally ductile material like steel becomes brittle. Fatigue occurs under very small loads repeated many millions of times. All materials in all types of structures have a fatigue limit. 
[…] Fatigue damage occurs deep in the material as microscopic bonds are broken. The problem is particularly acute in the heat affected zones of welded structures.”
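
For the simplest possible model of a vibrating structure – a single mass on a spring, which is my own illustrative reduction, not the book’s – the natural frequency of free vibration mentioned above has a closed form: f = (1/2π)·√(k/m). Resonance is then the situation where a forcing frequency lands on this value. The stiffness and mass below are made-up numbers.

```python
import math

# Natural frequency of a single-degree-of-freedom mass-spring model:
# f_n = (1 / (2*pi)) * sqrt(k / m). Resonance occurs when an external
# vibrating force matches f_n. Stiffness and mass are illustrative only.
def natural_frequency(stiffness, mass):
    return math.sqrt(stiffness / mass) / (2 * math.pi)

k = 1.0e6    # N/m
m = 2.0e3    # kg
f_n = natural_frequency(k, m)
print(round(f_n, 3), "Hz")   # 3.559 Hz
```

A stiffer structure (larger k) pushes the natural frequency up; more mass pulls it down – one reason the same wind can be harmless to one bridge and dangerous to another.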

“Resilience is the ability of a system to recover quickly from difficult conditions. […] One way of delivering a degree of resilience is to make a structure fail-safe—to mitigate failure if it happens. A household electrical fuse is an everyday example. The fuse does not prevent failure, but it does prevent extreme consequences such as an electrical fire. Damage-tolerance is a similar concept. Damage is any physical harm that reduces the value of something. A damage-tolerant structure is one in which any damage can be accommodated at least for a short time until it can be dealt with. […] human factors in failure are not just a matter of individuals’ slips, lapses, or mistakes but are also the result of organizational and cultural situations which are not easy to identify in advance or even at the time. Indeed, they may only become apparent in hindsight. It follows that another major part of safety is to design a structure so that it can be inspected, repaired, and maintained. Indeed all of the processes of creating a structure, whether conceiving, designing, making, or monitoring performance, have to be designed with sufficient resilience to accommodate unexpected events. In other words, safety is not something a system has (a property), rather it is something a system does (a performance). Providing resilience is a form of control—a way of managing uncertainties and risks.”

Stiffness.
Antoni Gaudí. Heinz Isler. Frei Otto.
Eden Project.
Tensegrity.
Bending moment.
Shear and moment diagram.
Stonehenge.
Pyramid at Meidum.
Vitruvius.
Master builder.
John Smeaton.
Puddling (metallurgy).
Cast iron.
Isambard Kingdom Brunel.
Henry Bessemer. Bessemer process.
Institution of Structural Engineers.
Graphic statics (wiki doesn’t have an article on this topic under this name and there isn’t much here, but it looks like google has a lot if you’re interested).
Constitutive equation.
Deformation (mechanics).
Compatibility (mechanics).
Principle of Minimum Complementary Energy.
Direct stiffness method. Finite element method.
Hogging and sagging.
Centre of buoyancy. Metacentre (fluid mechanics). Angle of attack.
Box girder bridge.
D’Alembert’s principle.
Longeron.
Buckling.
S-N diagram.

April 11, 2018 Posted by | Books, Engineering, Physics

Networks

I actually think this was a really nice book, considering the format – I gave it four stars on goodreads. One of the things I noticed people didn’t like about it in the reviews is that it ‘jumps’ a bit in terms of topic coverage; it covers a wide variety of applications and analytical settings. I mostly don’t consider this a weakness of the book – even if occasionally it does get a bit excessive – and I can definitely understand the authors’ choice of approach; it’s hard to illustrate the potential of the analytical techniques described in this book if you’re not allowed to talk about all the areas in which they have been – or could gainfully be – applied. A related point is that many people who read the book might be familiar with the application of these tools in specific contexts but have perhaps not thought about the fact that similar methods are applied in many other areas (and they might each be a bit annoyed that the authors don’t talk more about computer science applications, or foodweb analyses, or infectious disease applications, or perhaps sociometry…). Most of the book is about graph-theory-related material, but a very decent amount of the coverage deals with applications, in a broad sense of the word at least, rather than theory. The discussion of theoretical constructs in the book always felt to me to be driven to a large degree by their usefulness in specific contexts.

I have covered related topics before here on the blog, also quite recently – e.g. there’s at least some overlap between this book and Holland’s book about complexity theory in the same series (I incidentally think these books probably go well together). As I found the book slightly difficult to blog, I decided against covering it in as much detail as I sometimes do with these texts; in particular, I have left out the links I usually include in posts like these.

Below some quotes from the book.

“The network approach focuses all the attention on the global structure of the interactions within a system. The detailed properties of each element on its own are simply ignored. Consequently, systems as different as a computer network, an ecosystem, or a social group are all described by the same tool: a graph, that is, a bare architecture of nodes bounded by connections. […] Representing widely different systems with the same tool can only be done by a high level of abstraction. What is lost in the specific description of the details is gained in the form of universality – that is, thinking about very different systems as if they were different realizations of the same theoretical structure. […] This line of reasoning provides many insights. […] The network approach also sheds light on another important feature: the fact that certain systems that grow without external control are still capable of spontaneously developing an internal order. […] Network models are able to describe in a clear and natural way how self-organization arises in many systems. […] In the study of complex, emergent, and self-organized systems (the modern science of complexity), networks are becoming increasingly important as a universal mathematical framework, especially when massive amounts of data are involved. […] networks are crucial instruments to sort out and organize these data, connecting individuals, products, news, etc. to each other. […] While the network approach eliminates many of the individual features of the phenomenon considered, it still maintains some of its specific features. Namely, it does not alter the size of the system — i.e. the number of its elements — or the pattern of interaction — i.e. the specific set of connections between elements. Such a simplified model is nevertheless enough to capture the properties of the system. 
[…] The network approach [lies] somewhere between the description by individual elements and the description by big groups, bridging the two of them. In a certain sense, networks try to explain how a set of isolated elements are transformed, through a pattern of interactions, into groups and communities.”

“[T]he random graph model is very important because it quantifies the properties of a totally random network. Random graphs can be used as a benchmark, or null case, for any real network. This means that a random graph can be used in comparison to a real-world network, to understand how much chance has shaped the latter, and to what extent other criteria have played a role. The simplest recipe for building a random graph is the following. We take all the possible pair of vertices. For each pair, we toss a coin: if the result is heads, we draw a link; otherwise we pass to the next pair, until all the pairs are finished (this means drawing the link with a probability p = ½, but we may use whatever value of p). […] Nowadays [the random graph model] is a benchmark of comparison for all networks, since any deviations from this model suggests the presence of some kind of structure, order, regularity, and non-randomness in many real-world networks.”
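
The ‘recipe’ in the quote translates almost word for word into code. A minimal sketch (function name and the seed parameter are my own; the algorithm is the coin-tossing procedure described above): for every pair of nodes, draw an edge with probability p.

```python
import itertools
import random

# The Erdos-Renyi random graph recipe from the quote: take every pair of
# nodes and 'toss a coin' - draw an edge with probability p.
def random_graph(n, p, seed=None):
    rng = random.Random(seed)   # seeded for reproducibility
    return [pair for pair in itertools.combinations(range(n), 2)
            if rng.random() < p]

edges = random_graph(100, 0.5, seed=1)
# With p = 1/2 we expect roughly half of the n*(n-1)/2 = 4950 possible edges.
print(len(edges))
```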

“…in networks, topology is more important than metrics. […] In the network representation, the connections between the elements of a system are much more important than their specific positions in space and their relative distances. The focus on topology is one of its biggest strengths of the network approach, useful whenever topology is more relevant than metrics. […] In social networks, the relevance of topology means that social structure matters. […] Sociology has classified a broad range of possible links between individuals […]. The tendency to have several kinds of relationships in social networks is called multiplexity. But this phenomenon appears in many other networks: for example, two species can be connected by different strategies of predation, two computers by different cables or wireless connections, etc. We can modify a basic graph to take into account this multiplexity, e.g. by attaching specific tags to edges. […] Graph theory [also] allows us to encode in edges more complicated relationships, as when connections are not reciprocal. […] If a direction is attached to the edges, the resulting structure is a directed graph […] In these networks we have both in-degree and out-degree, measuring the number of inbound and outbound links of a node, respectively. […] in most cases, relations display a broad variation or intensity [i.e. they are not binary/dichotomous]. […] Weighted networks may arise, for example, as a result of different frequencies of interactions between individuals or entities.”
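
Directed, weighted edges and the in-degree/out-degree distinction are easy to make concrete. A minimal sketch (the tiny food-web-style data is invented for illustration): edges are (source, target, weight) triples, and the two degree counts simply tally inbound and outbound links per node.

```python
# Directed, weighted edges as (source, target, weight) triples.
# In-degree counts inbound links, out-degree outbound links.
# The miniature predation data below is invented for illustration.
edges = [("wolf", "sheep", 3.0), ("sheep", "grass", 5.0), ("fox", "sheep", 1.0)]

def degrees(edges):
    out_deg, in_deg = {}, {}
    for src, dst, _weight in edges:
        out_deg[src] = out_deg.get(src, 0) + 1
        in_deg[dst] = in_deg.get(dst, 0) + 1
    return in_deg, out_deg

in_deg, out_deg = degrees(edges)
print(in_deg)    # {'sheep': 2, 'grass': 1}
print(out_deg)   # {'wolf': 1, 'sheep': 1, 'fox': 1}
```

Note the asymmetry: ‘sheep’ is eaten by two predators (in-degree 2) but eats only one thing (out-degree 1) – exactly the kind of information an undirected graph would lose.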

“An organism is […] the outcome of several layered networks and not only the deterministic result of the simple sequence of genes. Genomics has been joined by epigenomics, transcriptomics, proteomics, metabolomics, etc., the disciplines that study these layers, in what is commonly called the omics revolution. Networks are at the heart of this revolution. […] The brain is full of networks where various web-like structures provide the integration between specialized areas. In the cerebellum, neurons form modules that are repeated again and again: the interaction between modules is restricted to neighbours, similarly to what happens in a lattice. In other areas of the brain, we find random connections, with a more or less equal probability of connecting local, intermediate, or distant neurons. Finally, the neocortex — the region involved in many of the higher functions of mammals — combines local structures with more random, long-range connections. […] typically, food chains are not isolated, but interwoven in intricate patterns, where a species belongs to several chains at the same time. For example, a specialized species may predate on only one prey […]. If the prey becomes extinct, the population of the specialized species collapses, giving rise to a set of co-extinctions. An even more complicated case is where an omnivore species predates a certain herbivore, and both eat a certain plant. A decrease in the omnivore’s population does not imply that the plant thrives, because the herbivore would benefit from the decrease and consume even more plants. As more species are taken into account, the population dynamics can become more and more complicated. This is why a more appropriate description than ‘foodchains’ for ecosystems is the term foodwebs […]. These are networks in which nodes are species and links represent relations of predation. Links are usually directed (big fishes eat smaller ones, not the other way round). 
These networks provide the interchange of food, energy, and matter between species, and thus constitute the circulatory system of the biosphere.”

“In the cell, some groups of chemicals interact only with each other and with nothing else. In ecosystems, certain groups of species establish small foodwebs, without any connection to external species. In social systems, certain human groups may be totally separated from others. However, such disconnected groups, or components, are a strikingly small minority. In all networks, almost all the elements of the systems take part in one large connected structure, called a giant connected component. […] In general, the giant connected component includes not less than 90 to 95 per cent of the system in almost all networks. […] In a directed network, the existence of a path from one node to another does not guarantee that the journey can be made in the opposite direction. Wolves eat sheep, and sheep eat grass, but grass does not eat sheep, nor do sheep eat wolves. This restriction creates a complicated architecture within the giant connected component […] according to an estimate made in 1999, more than 90 per cent of the WWW is composed of pages connected to each other, if the direction of edges is ignored. However, if we take direction into account, the proportion of nodes mutually reachable is only 24 per cent, the giant strongly connected component. […] most networks are sparse, i.e. they tend to be quite frugal in connections. Take, for example, the airport network: the personal experience of every frequent traveller shows that direct flights are not that common, and intermediate stops are necessary to reach several destinations; thousands of airports are active, but each city is connected to less than 20 other cities, on average. The same happens in most networks. A measure of this is given by the mean number of connections of their nodes, that is, their average degree.”
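A quick illustration of how one finds connected components and their sizes — a minimal Python sketch of my own, not from the book:

```python
from collections import deque

def components(adj):
    """Return the sizes of all connected components, largest first,
    using breadth-first search over an undirected adjacency map."""
    seen, sizes = set(), []
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        sizes.append(size)
    return sorted(sizes, reverse=True)

# One larger component plus an isolated pair, echoing the book's point
# that disconnected groups are a small minority.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3], 5: [6], 6: [5]}
print(components(adj))  # [4, 2] -> the giant component holds 4 of 6 nodes
```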

“[A] puzzling contradiction — a sparse network can still be very well connected — […] attracted the attention of the Hungarian mathematicians […] Paul Erdős and Alfréd Rényi. They tackled it by producing different realizations of their random graph. In each of them, they changed the density of edges. They started with a very low density: less than one edge per node. It is natural to expect that, as the density increases, more and more nodes will be connected to each other. But what Erdős and Rényi found instead was a quite abrupt transition: several disconnected components coalesced suddenly into a large one, encompassing almost all the nodes. The sudden change happened at one specific critical density: when the average number of links per node (i.e. the average degree) was greater than one, then the giant connected component suddenly appeared. This result implies that networks display a very special kind of economy, intrinsic to their disordered structure: a small number of edges, even randomly distributed between nodes, is enough to generate a large structure that absorbs almost all the elements. […] Social systems seem to be very tightly connected: in a large enough group of strangers, it is not unlikely to find pairs of people with quite short chains of relations connecting them. […] The small-world property consists of the fact that the average distance between any two nodes (measured as the shortest path that connects them) is very small. Given a node in a network […], few nodes are very close to it […] and few are far from it […]: the majority are at the average — and very short — distance. This holds for all networks: starting from one specific node, almost all the nodes are at very few steps from it; the number of nodes within a certain distance increases exponentially fast with the distance. 
Another way of explaining the same phenomenon […] is the following: even if we add many nodes to a network, the average distance will not increase much; one has to increase the size of a network by several orders of magnitude to notice that the paths to new nodes are (just a little) longer. The small-world property is crucial to many network phenomena. […] The small-world property is something intrinsic to networks. Even the completely random Erdős–Rényi graphs show this feature. By contrast, regular grids do not display it. If the Internet was a chessboard-like lattice, the average distance between two routers would be of the order of 1,000 jumps, and the Net would be much slower [the authors note elsewhere that “The Internet is composed of hundreds of thousands of routers, but just about ten ‘jumps’ are enough to bring an information packet from one of them to any other.”] […] The key ingredient that transforms a structure of connections into a small world is the presence of a little disorder. No real network is an ordered array of elements. On the contrary, there are always connections ‘out of place’. It is precisely thanks to these connections that networks are small worlds. […] Shortcuts are responsible for the small-world property in many […] situations.”
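The Erdős–Rényi transition described above is easy to reproduce numerically. A small simulation sketch of my own (the union-find bookkeeping is just one convenient way to track components):

```python
import random

def giant_fraction(n, avg_degree, seed=0):
    """Share of nodes in the largest component of an Erdős–Rényi graph."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)        # edge probability giving the target <k>
    parent = list(range(n))         # union-find structure over the n nodes
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:            # each pair linked with prob. p
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

# Below an average degree of one the graph stays fragmented; above it,
# a giant component abruptly absorbs most of the nodes.
print(giant_fraction(1000, 0.5))   # small fraction
print(giant_fraction(1000, 3.0))   # most of the network
```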

“Body size, IQ, road speed, and other magnitudes have a characteristic scale: that is, an average value that in the large majority of cases is a rough predictor of the actual value that one will find. […] While height is a homogeneous magnitude, the number of social connection[s] is a heterogeneous one. […] A system with this feature is said to be scale-free or scale-invariant, in the sense that it does not have a characteristic scale. This can be rephrased by saying that the individual fluctuations with respect to the average are too large for us to make a correct prediction. […] In general, a network with heterogeneous connectivity has a set of clear hubs. When a graph is small, it is easy to find whether its connectivity is homogeneous or heterogeneous […]. In the first case, all the nodes have more or less the same connectivity, while in the latter it is easy to spot a few hubs. But when the network to be studied is very big […] things are not so easy. […] the distribution of the connectivity of the nodes of the […] network […] is the degree distribution of the graph. […] In homogeneous networks, the degree distribution is a bell curve […] while in heterogeneous networks, it is a power law […]. The power law implies that there are many more hubs (and much more connected) in heterogeneous networks than in homogeneous ones. Moreover, hubs are not isolated exceptions: there is a full hierarchy of nodes, each of them being a hub compared with the less connected ones.”
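Computing the degree distribution of a graph is straightforward; here is a minimal sketch of my own, using a made-up toy graph with one hub:

```python
from collections import Counter

def degree_distribution(adj):
    """Fraction of nodes that have each degree."""
    counts = Counter(len(neigh) for neigh in adj.values())
    n = len(adj)
    return {k: c / n for k, c in sorted(counts.items())}

# A small graph where node 0 is a hub: most nodes have low degree,
# while a single node concentrates the connections.
adj = {0: [1, 2, 3, 4], 1: [0, 4], 2: [0], 3: [0], 4: [0, 1]}
print(degree_distribution(adj))  # {1: 0.4, 2: 0.4, 4: 0.2}
```

On a large graph one would plot this distribution (usually on log-log axes) to see whether it looks like a bell curve or a fat-tailed power law.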

“Looking at the degree distribution is the best way to check if a network is heterogeneous or not: if the distribution is fat tailed, then the network will have hubs and heterogeneity. A mathematically perfect power law is never found, because this would imply the existence of hubs with an infinite number of connections. […] Nonetheless, a strongly skewed, fat-tailed distribution is a clear signal of heterogeneity, even if it is never a perfect power law. […] While the small-world property is something intrinsic to networked structures, hubs are not present in all kind of networks. For example, power grids usually have very few of them. […] hubs are not present in random networks. A consequence of this is that, while random networks are small worlds, heterogeneous ones are ultra-small worlds. That is, the distance between their vertices is relatively smaller than in their random counterparts. […] Heterogeneity is not equivalent to randomness. On the contrary, it can be the signature of a hidden order, not imposed by a top-down project, but generated by the elements of the system. The presence of this feature in widely different networks suggests that some common underlying mechanism may be at work in many of them. […] the Barabási–Albert model gives an important take-home message. A simple, local behaviour, iterated through many interactions, can give rise to complex structures. This arises without any overall blueprint”.
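The Barabási–Albert model mentioned above is simple enough to sketch in a few lines. Below my own toy implementation of preferential attachment (the ‘repeated nodes’ list is a standard sampling trick, not something from the book):

```python
import random

def barabasi_albert(n, m=2, seed=0):
    """Grow a graph by preferential attachment: each new node links to m
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(m)}
    repeated = []                 # each node appears once per unit of degree
    targets = list(range(m))      # the first new node links to the seed nodes
    for new in range(m, n):
        adj[new] = set()
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        # drawing from `repeated` biases the choice towards high-degree nodes
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

adj = barabasi_albert(500)
degrees = sorted((len(neigh) for neigh in adj.values()), reverse=True)
print(degrees[:3], degrees[-3:])  # a few big hubs, many weakly connected nodes
```

The local rule (“link preferentially to the well connected”) is all there is — yet the resulting degree sequence is strongly heterogeneous, which is precisely the take-home message of the model.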

“Homogamy, the tendency of like to marry like, is very strong […] Homogamy is a specific instance of homophily: this consists of a general trend of like to link to like, and is a powerful force in shaping social networks […] assortative mixing [is] a special form of homophily, in which nodes tend to connect with others that are similar to them in the number of connections. By contrast [when] high- and low-degree nodes are more connected to each other [it] is called disassortative mixing. Both cases display a form of correlation in the degrees of neighbouring nodes. When the degrees of neighbours are positively correlated, then the mixing is assortative; when negatively, it is disassortative. […] In random graphs, the neighbours of a given node are chosen completely at random: as a result, there is no clear correlation between the degrees of neighbouring nodes […]. On the contrary, correlations are present in most real-world networks. Although there is no general rule, most natural and technological networks tend to be disassortative, while social networks tend to be assortative. […] Degree assortativity and disassortativity are just an example of the broad range of possible correlations that bias how nodes tie to each other.”
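Degree assortativity can be measured as the Pearson correlation between the degrees at the two ends of each edge. A small sketch of my own (the star graph is a textbook example of perfect disassortativity):

```python
from math import sqrt

def assortativity(adj):
    """Pearson correlation between the degrees at the ends of each edge."""
    deg = {u: len(vs) for u, vs in adj.items()}
    xs, ys = [], []
    for u, vs in adj.items():
        for v in vs:              # each undirected edge appears in both orders
            xs.append(deg[u])
            ys.append(deg[v])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# A star is maximally disassortative: the hub links only to degree-1 nodes.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(assortativity(star))  # -1.0
```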

“[N]etworks (neither ordered lattices nor random graphs) can have both large clustering and small average distance at the same time. […] in almost all networks, the clustering of a node depends on the degree of that node. Often, the larger the degree, the smaller the clustering coefficient. Small-degree nodes tend to belong to well-interconnected local communities. Similarly, hubs connect with many nodes that are not directly interconnected. […] Central nodes usually act as bridges or bottlenecks […]. For this reason, centrality is an estimate of the load handled by a node of a network, assuming that most of the traffic passes through the shortest paths (this is not always the case, but it is a good approximation). For the same reason, damaging central nodes […] can radically impair the flow of a network. Depending on the process one wants to study, other definitions of centrality can be introduced. For example, closeness centrality computes the distance of a node to all others, and reach centrality factors in the portion of all nodes that can be reached in one step, two steps, three steps, and so on.”
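Both the clustering coefficient and closeness centrality are easy to compute directly from their definitions. My own toy sketch:

```python
from collections import deque

def clustering(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

def closeness(adj, node):
    """Inverse of the average shortest-path distance to all other nodes."""
    dist = {node: 0}
    queue = deque([node])
    while queue:                       # breadth-first search for distances
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

# A triangle (nodes 0, 1, 2) with a pendant node 3 attached at node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering(adj, 0))  # 1.0: its two neighbours are connected
print(clustering(adj, 2))  # 1/3: only one of its three neighbour pairs links
```

Note that node 2, the best-connected node, has the *lower* clustering — a miniature version of the hub behaviour the authors describe.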

“Domino effects are not uncommon in foodwebs. Networks in general provide the backdrop for large-scale, sudden, and surprising dynamics. […] most of the real-world networks show a double-edged kind of robustness. They are able to function normally even when a large fraction of the network is damaged, but suddenly certain small failures, or targeted attacks, bring them down completely. […] networks are very different from engineered systems. In an airplane, damaging one element is enough to stop the whole machine. In order to make it more resilient, we have to use strategies such as duplicating certain pieces of the plane: this makes it almost 100 per cent safe. In contrast, networks, which are mostly not blueprinted, display a natural resilience to a broad range of errors, but when certain elements fail, they collapse. […] A random graph of the size of most real-world networks is destroyed after the removal of half of the nodes. On the other hand, when the same procedure is performed on a heterogeneous network (either a map of a real network or a scale-free model of a similar size), the giant connected component resists even after removing more than 80 per cent of the nodes, and the distance within it is practically the same as at the beginning. The scene is different when researchers simulate a targeted attack […] In this situation the collapse happens much faster […]. However, now it is the heterogeneous network that is the more vulnerable of the two: while in the homogeneous network it is necessary to remove about one-fifth of its more connected nodes to destroy it, in the heterogeneous one this happens after removing the first few hubs. Highly connected nodes seem to play a crucial role, in both errors and attacks. […] hubs are mainly responsible for the overall cohesion of the graph, and removing a few of them is enough to destroy it.”
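The error-versus-attack experiments described above can be mimicked on a toy ‘hub-and-spoke’ network. My own sketch (the network and the removal fractions are made up for illustration):

```python
import random

def largest_component_fraction(adj, removed):
    """Share of all nodes still in the largest component after removals."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                      # depth-first search over survivors
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best / len(adj)

# A toy heterogeneous network: 5 chained hubs, each serving many leaves.
rng = random.Random(1)
adj = {i: set() for i in range(100)}
for leaf in range(5, 100):
    hub = rng.randrange(5)
    adj[leaf].add(hub)
    adj[hub].add(leaf)
for h in range(4):
    adj[h].add(h + 1)
    adj[h + 1].add(h)

after_errors = largest_component_fraction(adj, set(rng.sample(range(100), 20)))
after_attack = largest_component_fraction(adj, {0, 1, 2, 3, 4})
print(after_errors)  # random errors: a large component mostly survives
print(after_attack)  # targeted attack on the 5 hubs: the network shatters
```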

“Studies of errors and attacks have shown that hubs keep different parts of a network connected. This implies that they also act as bridges for spreading diseases. Their numerous ties put them in contact with both infected and healthy individuals: so hubs become easily infected, and they infect other nodes easily. […] The vulnerability of heterogeneous networks to epidemics is bad news, but understanding it can provide good ideas for containing diseases. […] if we can immunize just a fraction, it is not a good idea to choose people at random. Most of the time, choosing at random implies selecting individuals with a relatively low number of connections. Even if they block the disease from spreading in their surroundings, hubs will always be there to put it back into circulation. A much better strategy would be to target hubs. Immunizing hubs is like deleting them from the network, and the studies on targeted attacks show that eliminating a small fraction of hubs fragments the network: thus, the disease will be confined to a few isolated components. […] in the epidemic spread of sexually transmitted diseases the timing of the links is crucial. Establishing an unprotected link with a person before they establish an unprotected link with another person who is infected is not the same as doing so afterwards.”
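The hub-immunization argument can likewise be illustrated on a toy contact network. My own sketch, assuming worst-case transmission (every contact between a susceptible and an infected individual transmits), which reduces the outbreak to a reachability question:

```python
import random

def reachable(adj, immune, start):
    """Nodes an infection starting at `start` can reach if every contact
    transmits (a worst-case epidemic); immune nodes block all paths."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in immune and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

# A toy contact network: 5 chained hubs, each in contact with many leaves.
rng = random.Random(2)
adj = {i: set() for i in range(100)}
for leaf in range(5, 100):
    hub = rng.randrange(5)
    adj[leaf].add(hub)
    adj[hub].add(leaf)
for h in range(4):
    adj[h].add(h + 1)
    adj[h + 1].add(h)

random_immune = set(rng.sample(range(5, 100), 5))  # 5 random low-degree people
hub_immune = {0, 1, 2, 3, 4}                       # the 5 hubs instead
start = next(leaf for leaf in range(5, 100) if leaf not in random_immune)
print(reachable(adj, random_immune, start))  # nearly everyone gets infected
print(reachable(adj, hub_immune, start))     # confined to the index case
```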

April 3, 2018 Posted by | Biology, Books, Ecology, Engineering, Epidemiology, Genetics, Mathematics, Statistics | Leave a comment

The Internet of Things

 

Some links to stuff he talks about in the lecture:

The Internet of Things: making the most of the Second Digital Revolution – A report by the UK Government Chief Scientific Adviser.
South–North Water Transfer Project.
FDA approves first smart pill that tracks drug regimen compliance from the inside.
The Internet of Things (IoT)* units installed base by category from 2014 to 2020.
Share of the IoT market by sub-sector worldwide in 2017.
San Diego to Cover Half the City with Intelligent Streetlights.
IPv4 and IPv6 (specifically, he talks a little about the IPv4 address space problem).
General Data Protection Regulation (GDPR).
Shodan (website).
Mirai botnet.
Gait analysis.
Website reveals 73,000 unprotected security cameras with default passwords. (This was just an example link – it’s unclear if the site he used to illustrate his point in that part of the lecture was actually Insecam, but he does talk about the widespread use of default passwords and related security implications during the lecture).
Strava’s fitness heatmaps are a ‘potential catastrophe’.
‘Secure by Design’ (a very recently published proposed UK IoT code of practice).

March 26, 2018 Posted by | Computer science, Engineering, Lectures | Leave a comment

The Computer

Below some quotes and links related to the book’s coverage:

“At the heart of every computer is one or more hardware units known as processors. A processor controls what the computer does. For example, it will process what you type in on your computer’s keyboard, display results on its screen, fetch web pages from the Internet, and carry out calculations such as adding two numbers together. It does this by ‘executing’ a computer program that details what the computer should do […] Data and programs are stored in two storage areas. The first is known as main memory and has the property that whatever is stored there can be retrieved very quickly. Main memory is used for transient data – for example, the result of a calculation which is an intermediate result in a much bigger calculation – and is also used to store computer programs while they are being executed. Data in main memory is transient – it will disappear when the computer is switched off. Hard disk memory, also known as file storage or backing storage, contains data that are required over a period of time. Typical entities that are stored in this memory include files of numerical data, word-processed documents, and spreadsheet tables. Computer programs are also stored here while they are not being executed. […] There are a number of differences between main memory and hard disk memory. The first is the retrieval time. With main memory, an item of data can be retrieved by the processor in fractions of microseconds. With file-based memory, the retrieval time is much greater: of the order of milliseconds. The reason for this is that main memory is silicon-based […] hard disk memory is usually mechanical and is stored on the metallic surface of a disk, with a mechanical arm retrieving the data. […] main memory is more expensive than file-based memory”.

“The Internet is a network of computers – strictly, it is a network that joins up a number of networks. It carries out a number of functions. First, it transfers data from one computer to another computer […] The second function of the Internet is to enforce reliability. That is, to ensure that when errors occur then some form of recovery process happens; for example, if an intermediate computer fails then the software of the Internet will discover this and resend any malfunctioning data via other computers. A major component of the Internet is the World Wide Web […] The web […] uses the data-transmission facilities of the Internet in a specific way: to store and distribute web pages. The web consists of a number of computers known as web servers and a very large number of computers known as clients (your home PC is a client). Web servers are usually computers that are more powerful than the PCs that are normally found in homes or those used as office computers. They will be maintained by some enterprise and will contain individual web pages relevant to that enterprise; for example, an online book store such as Amazon will maintain web pages for each item it sells. The program that allows users to access the web is known as a browser. […] A part of the Internet known as the Domain Name System (usually referred to as DNS) will figure out where the page is held and route the request to the web server holding the page. The web server will then send the page back to your browser which will then display it on your computer. Whenever you want another page you would normally click on a link displayed on that page and the process is repeated. Conceptually, what happens is simple. However, it hides a huge amount of detail involving the web discovering where pages are stored, the pages being located, their being sent, the browser reading the pages and interpreting how they should be displayed, and eventually the browser displaying the pages. 
[…] without one particular hardware advance the Internet would be a shadow of itself: this is broadband. This technology has provided communication speeds that we could not have dreamed of 15 years ago. […] Typical broadband speeds range from one megabit per second to 24 megabits per second, the lower rate being about 20 times faster than dial-up rates.”

“A major idea I hope to convey […] is that regarding the computer as just the box that sits on your desk, or as a chunk of silicon that is embedded within some device such as a microwave, is only a partial view. The Internet – or rather broadband access to the Internet – has created a gigantic computer that has unlimited access to both computer power and storage to the point where even applications that we all thought would never migrate from the personal computer are doing just that. […] the Internet functions as a series of computers – or more accurately computer processors – carrying out some task […]. Conceptually, there is little difference between these computers and [a] supercomputer, the only difference is in the details: for a supercomputer the communication between processors is via some internal electronic circuit, while for a collection of computers working together on the Internet the communication is via external circuits used for that network.”

“A computer will consist of a number of electronic circuits. The most important is the processor: this carries out the instructions that are contained in a computer program. […] There are a number of individual circuit elements that make up the computer. Thousands of these elements are combined together to construct the computer processor and other circuits. One basic element is known as an And gate […]. This is an electrical circuit that has two binary inputs A and B and a single binary output X. The output will be one if both the inputs are one and zero otherwise. […] the And gate is only one example – when some action is required, for example adding two numbers together, [the different circuits] interact with each other to carry out that action. In the case of addition, the two binary numbers are processed bit by bit to carry out the addition. […] Whatever actions are taken by a program […] the cycle is the same; an instruction is read into the processor, the processor decodes the instruction, acts on it, and then brings in the next instruction. So, at the heart of a computer is a series of circuits and storage elements that fetch and execute instructions and store data and programs.”
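Logic gates of this kind compose into arithmetic. A toy Python sketch of my own, building a ripple-carry adder out of And/Xor gates, to illustrate the bit-by-bit addition the author mentions:

```python
def AND(a, b):
    return a & b        # 1 only if both inputs are 1

def XOR(a, b):
    return a ^ b        # 1 if exactly one input is 1

def half_adder(a, b):
    """Add two bits: XOR gives the sum bit, AND gives the carry bit."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists (LSB first)."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 (011) + 5 (101), least significant bit first:
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1] -> 8
```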

“In essence, a hard disk unit consists of one or more circular metallic disks which can be magnetized. Each disk has a very large number of magnetizable areas which can either represent zero or one depending on the magnetization. The disks are rotated at speed. The unit also contains an arm or a number of arms that can move laterally and which can sense the magnetic patterns on the disk. […] When a processor requires some data that is stored on a hard disk […] then it issues an instruction to find the file. The operating system – the software that controls the computer – will know where the file starts and ends and will send a message to the hard disk to read the data. The arm will move laterally until it is over the start position of the file and when the revolving disk passes under the arm the magnetic pattern that represents the data held in the file is read by it. Accessing data on a hard disk is a mechanical process and usually takes a small number of milliseconds to carry out. Compared with the electronic speeds of the computer itself – normally measured in fractions of a microsecond – this is incredibly slow. Because disk access is slow, systems designers try to minimize the amount of access required to files. One technique that has been particularly effective is known as caching. It is, for example, used in web servers. Such servers store pages that are sent to browsers for display. […] Caching involves placing the frequently accessed pages in some fast storage medium such as flash memory and keeping the remainder on a hard disk.”
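Caching of this sort is easy to sketch. Below my own toy page cache with least-recently-used eviction (the page contents and the tiny capacity are of course made up):

```python
from collections import OrderedDict

class PageCache:
    """Keep the most recently used pages in fast memory; fall back to the
    slow store (a dict standing in for the hard disk) on a miss."""
    def __init__(self, slow_store, capacity=2):
        self.slow_store = slow_store
        self.capacity = capacity
        self.fast = OrderedDict()          # insertion order = recency order
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.fast:
            self.hits += 1
            self.fast.move_to_end(key)     # mark as most recently used
            return self.fast[key]
        self.misses += 1                   # the slow "mechanical" access
        value = self.slow_store[key]
        self.fast[key] = value
        if len(self.fast) > self.capacity:
            self.fast.popitem(last=False)  # evict the least recently used
        return value

disk = {"/index.html": "<h1>Home</h1>", "/about.html": "<h1>About</h1>"}
cache = PageCache(disk)
for page in ["/index.html", "/index.html", "/about.html", "/index.html"]:
    cache.get(page)
print(cache.hits, cache.misses)  # 2 2
```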

“The first computers had a single hardware processor that executed individual instructions. It was not too long before researchers started thinking about computers that had more than one processor. The simple theory here was that if a computer had n processors then it would be n times faster. […] it is worth debunking this notion. If you look at many classes of problems […], you see that a strictly linear increase in performance is not achieved. If a problem that is solved by a single computer is solved in 20 minutes, then you will find a dual processor computer solving it in perhaps 11 minutes. A 3-processor computer may solve it in 9 minutes, and a 4-processor computer in 8 minutes. There is a law of diminishing returns; often, there comes a point when adding a processor slows down the computation. What happens is that each processor needs to communicate with the others, for example passing on the result of a computation; this communicational overhead becomes bigger and bigger as you add processors to the point when it dominates the amount of useful work that is done. The sort of problems where they are effective are those that can be split up into sub-problems, each solved almost independently by its own processor with little communication.”
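The diminishing returns described above can be captured in a toy cost model of my own (the work and overhead figures are made up, chosen to echo the 20/11/9/8-minute example in the quote):

```python
def runtime(processors, work=20.0, comm_per_processor=0.4):
    """Toy model of a parallel job: the work divides evenly across the
    processors, but every extra processor adds a fixed communication cost."""
    return work / processors + comm_per_processor * (processors - 1)

# Runtime first falls, then the communication overhead dominates.
for p in [1, 2, 3, 4, 8, 16]:
    print(p, round(runtime(p), 2))
```

The point of the model: past some processor count, the linear communication term outgrows the shrinking compute term, so adding processors slows the job down — the same qualitative behaviour the author describes.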

“Symmetric encryption methods are very efficient and can be used to scramble large files or long messages being sent from one computer to another. Unfortunately, symmetric techniques suffer from a major problem: if there are a number of individuals involved in a data transfer or in reading a file, each has to know the same key. This makes it a security nightmare. […] public key cryptography removed a major problem associated with symmetric cryptography: that of a large number of keys in existence some of which may be stored in an insecure way. However, a major problem with asymmetric cryptography is the fact that it is very inefficient (about 10,000 times slower than symmetric cryptography): while it can be used for short messages such as email texts, it is far too inefficient for sending gigabytes of data. However, […] when it is combined with symmetric cryptography, asymmetric cryptography provides very strong security. […] One very popular security scheme is known as the Secure Sockets Layer – normally shortened to SSL. It is based on the concept of a one-time pad. […] SSL uses public key cryptography to communicate the randomly generated key between the sender and receiver of a message. This key is only used once for the data interchange that occurs and, hence, is an electronic analogue of a one-time pad. When each of the parties to the interchange has received the key, they encrypt and decrypt the data employing symmetric cryptography, with the generated key carrying out these processes. […] There is an impression amongst the public that the main threats to security and to privacy arise from technological attack. However, the threat from more mundane sources is equally high. 
Data thefts, damage to software and hardware, and unauthorized access to computer systems can occur in a variety of non-technical ways: by someone finding computer printouts in a waste bin; by a window cleaner using a mobile phone camera to take a picture of a display containing sensitive information; by an office cleaner stealing documents from a desk; by a visitor to a company noting down a password written on a white board; by a disgruntled employee putting a hammer through the main server and the backup server of a company; or by someone dropping an unencrypted memory stick in the street.”
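The hybrid scheme — slow asymmetric cryptography to move a one-time session key, fast symmetric cryptography for the bulk data — can be sketched with a toy XOR cipher. This is my own illustration, not SSL; the public-key step is elided, and a real system would use vetted cryptographic primitives rather than anything like this:

```python
import secrets

def xor_bytes(data, key):
    """Toy symmetric cipher: XOR each byte with the key. With a key that is
    random, as long as the message, and never reused, this is a one-time pad."""
    return bytes(b ^ k for b, k in zip(data, key))

# 1. The sender generates a fresh random session key...
message = b"transfer complete"
session_key = secrets.token_bytes(len(message))

# 2. ...ships it via (slow) public-key cryptography -- elided here --
#    and then encrypts the bulk data with the (fast) symmetric cipher.
ciphertext = xor_bytes(message, session_key)

# 3. The receiver, holding the same session key, decrypts symmetrically.
print(xor_bytes(ciphertext, session_key))  # b'transfer complete'
```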

“The basic architecture of the computer has remained unchanged for six decades since IBM developed the first mainframe computers. It consists of a processor that reads software instructions one by one and executes them. Each instruction will result in data being processed, for example by being added together; and data being stored in the main memory of the computer or being stored on some file-storage medium; or being sent to the Internet or to another computer. This is what is known as the von Neumann architecture; it was named after John von Neumann […]. His key idea, which still holds sway today, is that in a computer the data and the program are both stored in the computer’s memory in the same address space. There have been few challenges to the von Neumann architecture.”
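A toy von Neumann machine makes the ‘same address space’ point concrete. My own sketch (the instruction set is invented for illustration):

```python
def run(memory):
    """A toy von Neumann machine: program and data share one memory.
    Instructions are (opcode, operand) pairs at successive addresses."""
    acc, pc = 0, 0                    # accumulator and program counter
    while True:
        op, arg = memory[pc]          # fetch
        pc += 1
        if op == "LOAD":              # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Addresses 0-3 hold the program; 4-6 hold the data -- same address space.
memory = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0),
          4: 2, 5: 3, 6: 0}
print(run(memory)[6])  # 5: the sum of the values at addresses 4 and 5
```

Because instructions and data live in the same memory, a program could in principle even overwrite its own instructions — a direct consequence of von Neumann's key idea.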

“[A] ‘neural network’ […] consists of an input layer that can sense various signals from some environment […]. In the middle (hidden layer), there are a large number of processing elements (neurones) which are arranged into sub-layers. Finally, there is an output layer which provides a result […]. It is in the middle layer that the work is done in a neural computer. What happens is that the network is trained by giving it examples of the trend or item that is to be recognized. What the training does is to strengthen or weaken the connections between the processing elements in the middle layer until, when combined, they produce a strong signal when a new case is presented to them that matches the previously trained examples and a weak signal when an item that does not match the examples is encountered. Neural networks have been implemented in hardware, but most of the implementations have been via software where the middle layer has been implemented in chunks of code that carry out the learning process. […] although the initial impetus was to use ideas in neurobiology to develop neural architectures based on a consideration of processes in the brain, there is little resemblance between the internal data and software now used in commercial implementations and the human brain.”
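The strengthen-or-weaken training loop described above is essentially the classic perceptron rule. A minimal single-neuron sketch of my own (a real neural network would have hidden layers; this only illustrates the weight-update idea):

```python
import random

def train_perceptron(examples, epochs=100, lr=0.1, seed=0):
    """Strengthen or weaken connection weights until the unit fires
    for positive examples and stays quiet for negative ones."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(len(examples[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            fired = 1 if sum(x * wi for x, wi in zip(inputs, w)) + b > 0 else 0
            error = target - fired          # 0 when the unit is already right
            w = [wi + lr * error * x for wi, x in zip(w, inputs)]
            b += lr * error
    return w, b

def predict(w, b, inputs):
    return 1 if sum(x * wi for x, wi in zip(inputs, w)) + b > 0 else 0

# Learn the AND function from labelled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```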

Links:

Computer.
Byte. Bit.
Moore’s law.
Computer program.
Programming language. High-level programming language. Low-level programming language.
Zombie (computer science).
Therac-25.
Cloud computing.
Instructions per second.
ASCII.
Fetch-execute cycle.
Grace Hopper. Software Bug.
Transistor. Integrated circuit. Very-large-scale integration. Wafer (electronics). Photomask.
Read-only memory (ROM). Read-write memory (RWM). Bus (computing). Address bus. Programmable read-only memory (PROM). Erasable programmable read-only memory (EPROM). Electrically erasable programmable read-only memory (EEPROM). Flash memory. Dynamic random-access memory (DRAM). Static random-access memory (static RAM/SRAM).
Hard disc.
Miniaturization.
Wireless communication.
Radio-frequency identification (RFID).
Metadata.
NP-hardness. Set partition problem. Bin packing problem.
Routing.
Cray X-MP. Beowulf cluster.
Vector processor.
Folding@home.
Denial-of-service attack. Melissa (computer virus). Malware. Firewall (computing). Logic bomb. Fork bomb/rabbit virus. Cryptography. Caesar cipher. Social engineering (information security).
Application programming interface.
Data mining. Machine translation. Machine learning.
Functional programming.
Quantum computing.

March 19, 2018 Posted by | Books, Computer science, Cryptography, Engineering | Leave a comment

Safety-Critical Systems

Some related links to topics covered in the lecture:

Safety-critical system.
Safety engineering.
Fault tree analysis.
Failure mode and effects analysis.
Fail-safe.
Value of a statistical life.
ALARP principle.
Hazards and Risk (HSA).
Software system safety.
Aleatoric and epistemic uncertainty.
N-version programming.
An experimental evaluation of the assumption of independence in multiversion programming (Knight & Leveson).
Safety integrity level.
Software for Dependable Systems – Sufficient Evidence? (consensus study report).

March 15, 2018 Posted by | Computer science, Economics, Engineering, Lectures, Statistics | Leave a comment

The Ice Age (II)

I really liked the book, recommended if you’re at all interested in this kind of stuff. Below some observations from the book’s second half, and some related links:

“Charles MacLaren, writing in 1842, […] argued that the formation of large ice sheets would result in a fall in sea level as water was taken from the oceans and stored frozen on the land. This insight triggered a new branch of ice age research – sea level change. This topic can get rather complicated because as ice sheets grow, global sea level falls. This is known as eustatic sea level change. As ice sheets increase in size, their weight depresses the crust and relative sea level will rise. This is known as isostatic sea level change. […] It is often quite tricky to differentiate between regional-scale isostatic factors and the global-scale eustatic sea level control.”

“By the late 1870s […] glacial geology had become a serious scholarly pursuit with a rapidly growing literature. […] [In the late 1880s] Carvill Lewis […] put forward the radical suggestion that the [sea] shells at Moel Tryfan and other elevated localities (which provided the most important evidence for the great marine submergence of Britain) were not in situ. Building on the earlier suggestions of Thomas Belt (1832–78) and James Croll, he argued that these materials had been dredged from the sea bed by glacial ice and pushed upslope so that ‘they afford no testimony to the former subsidence of the land’. Together, his recognition of terminal moraines and the reworking of marine shells undermined the key pillars of Lyell’s great marine submergence. This was a crucial step in establishing the primacy of glacial ice over icebergs in the deposition of the drift in Britain. […] By the end of the 1880s, it was the glacial dissenters who formed the eccentric minority. […] In the period leading up to World War One, there was [instead] much debate about whether the ice age involved a single phase of ice sheet growth and freezing climate (the monoglacial theory) or several phases of ice sheet build up and decay separated by warm interglacials (the polyglacial theory).”

“As the Earth rotates about its axis travelling through space in its orbit around the Sun, there are three components that change over time in elegant cycles that are entirely predictable. These are known as eccentricity, precession, and obliquity or ‘stretch, wobble, and roll’ […]. These orbital perturbations are caused by the gravitational pull of the other planets in our Solar System, especially Jupiter. Milankovitch calculated how each of these orbital cycles influenced the amount of solar radiation received at different latitudes over time. These are known as Milankovitch Cycles or Croll–Milankovitch Cycles to reflect the important contribution made by both men. […] The shape of the Earth’s orbit around the Sun is not constant. It changes from an almost circular orbit to one that is mildly elliptical (a slightly stretched circle) […]. This orbital eccentricity operates over a 400,000- and 100,000-year cycle. […] Changes in eccentricity have a relatively minor influence on the total amount of solar radiation reaching the Earth, but they are important for the climate system because they modulate the influence of the precession cycle […]. When eccentricity is high, for example, axial precession has a greater impact on seasonality. […] The Earth is currently tilted at an angle of 23.4° to the plane of its orbit around the Sun. Astronomers refer to this axial tilt as obliquity. This angle is not fixed. It rolls back and forth over a 41,000-year cycle from a tilt of 22.1° to 24.5° and back again […]. Even small changes in tilt can modify the strength of the seasons. With a greater angle of tilt, for example, we can have hotter summers and colder winters. […] Cooler, reduced insolation summers are thought to be a key factor in the initiation of ice sheet growth in the middle and high latitudes because they allow more snow to survive the summer melt season. 
Slightly warmer winters may also favour ice sheet build-up as greater evaporation from a warmer ocean will increase snowfall over the centres of ice sheet growth. […] The Earth’s axis of rotation is not fixed. It wobbles like a spinning top slowing down. This wobble traces a circle on the celestial sphere […]. At present the Earth’s rotational axis points toward Polaris (the current northern pole star) but in 11,000 years it will point towards another star, Vega. This slow circling motion is known as axial precession and it has important impacts on the Earth’s climate by causing the solstices and equinoxes to move around the Earth’s orbit. In other words, the seasons shift over time. Precession operates over a 19,000- and 23,000-year cycle. This cycle is often referred to as the Precession of the Equinoxes.”
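A toy illustration (my own, not from the book, and not Milankovitch's actual radiation calculation) of how the three cycles combine: summing sinusoids with the quoted periods shows how eccentricity (~100 kyr), obliquity (41 kyr), and precession (~23 kyr) interfere to produce a quasi-periodic forcing signal. The amplitudes are arbitrary.

```python
import math

# Toy forcing: one cosine per orbital cycle, periods from the text,
# amplitudes arbitrary (eccentricity modulates precession in reality;
# this sketch just superposes them).
PERIODS_KYR = {"eccentricity": 100.0, "obliquity": 41.0, "precession": 23.0}
AMPLITUDES = {"eccentricity": 1.0, "obliquity": 0.55, "precession": 0.35}

def toy_forcing(t_kyr: float) -> float:
    """Arbitrary-unit forcing anomaly at time t (thousands of years)."""
    return sum(AMPLITUDES[k] * math.cos(2 * math.pi * t_kyr / PERIODS_KYR[k])
               for k in PERIODS_KYR)

# Sample the last 800 kyr at 1-kyr steps; the result drifts in and out of
# phase rather than repeating neatly, which is roughly what ice core and
# marine isotope records look like.
series = [toy_forcing(t) for t in range(0, 801)]
print(f"max anomaly: {max(series):.2f}, min anomaly: {min(series):.2f}")
```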

“The albedo of a surface is a measure of its ability to reflect solar energy. Darker surfaces tend to absorb most of the incoming solar energy and have low albedos. The albedo of the ocean surface in high latitudes is commonly about 10 per cent — in other words, it absorbs 90 per cent of the incoming solar radiation. In contrast, snow, glacial ice, and sea ice have much higher albedos and can reflect between 50 and 90 per cent of incoming solar energy back into the atmosphere. The elevated albedos of bright frozen surfaces are a key feature of the polar radiation budget. Albedo feedback loops are important over a range of spatial and temporal scales. A cooling climate will increase snow cover on land and the extent of sea ice in the oceans. These high albedo surfaces will then reflect more solar radiation to intensify and sustain the cooling trend, resulting in even more snow and sea ice. This positive feedback can play a major role in the expansion of snow and ice cover and in the initiation of a glacial phase. Such positive feedbacks can also work in reverse when a warming phase melts ice and snow to reveal dark and low albedo surfaces such as peaty soil or bedrock.”
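A minimal sketch (my own numbers, not the book's) of why such a positive feedback amplifies cooling without necessarily running away: if an initial cooling dT0 expands snow and ice, and each increment of cooling adds a further fraction f of itself via the extra reflected sunlight, the increments form a geometric series converging to dT0 / (1 − f) for 0 ≤ f < 1.

```python
# Hypothetical ice-albedo feedback: each round of extra snow/ice cover
# adds a cooling increment f times the previous one.
def amplified_cooling(dT0: float, f: float, steps: int = 50) -> float:
    total, increment = 0.0, dT0
    for _ in range(steps):
        total += increment
        increment *= f   # next round of extra reflection adds a smaller cooling
    return total

initial = 1.0   # °C of initial cooling (illustrative)
f = 0.5         # hypothetical feedback factor
print(amplified_cooling(initial, f))   # approaches 1.0 / (1 - 0.5) = 2.0
```

The same arithmetic run with a warming dT0 describes the reverse case the quote mentions, where melt exposes dark surfaces.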

“At the end of the Cretaceous, around 65 million years ago (Ma), lush forests thrived in the Polar Regions and ocean temperatures were much warmer than today. This warm phase continued for the next 10 million years, peaking during the Eocene thermal maximum […]. From that time onwards, however, Earth’s climate began a steady cooling that saw the initiation of widespread glacial conditions, first in Antarctica between 40 and 30 Ma, in Greenland between 20 and 15 Ma, and then in the middle latitudes of the northern hemisphere around 2.5 Ma. […] Over the past 55 million years, a succession of processes driven by tectonics combined to cool our planet. It is difficult to isolate their individual contributions or to be sure about the details of cause and effect over this long period, especially when there are uncertainties in dating and when one considers the complexity of the climate system with its web of internal feedbacks.” [Potential causes which have been highlighted include: The uplift of the Himalayas (leading to increased weathering, leading over geological time to an increased amount of CO2 being sequestered in calcium carbonate deposited on the ocean floor, lowering atmospheric CO2 levels), the isolation of Antarctica which created the Antarctic Circumpolar Current (leading to a cooling of Antarctica), the dry-out of the Mediterranean Sea ~5mya (which significantly lowered salt concentrations in the World Ocean, meaning that sea water froze at a higher temperature), and the formation of the Isthmus of Panama. – US].

“[F]or most of the last 1 million years, large ice sheets were present in the middle latitudes of the northern hemisphere and sea levels were lower than today. Indeed, ‘average conditions’ for the Quaternary Period involve much more ice than present. The interglacial peaks — such as the present Holocene interglacial, with its ice volume minima and high sea level — are the exception rather than the norm. The sea level maximum of the Last Interglacial (MIS 5) is higher than today. It also shows that cold glacial stages (c.80,000 years duration) are much longer than interglacials (c.15,000 years). […] Arctic willow […], the northernmost woody plant on Earth, is found in central European pollen records from the last glacial stage. […] For most of the Quaternary deciduous forests have been absent from most of Europe. […] the interglacial forests of temperate Europe that are so familiar to us today are, in fact, rather atypical when we consider the long view of Quaternary time. Furthermore, if the last glacial period is representative of earlier ones, for much of the Quaternary terrestrial ecosystems were continuously adjusting to a shifting climate.”

“Greenland ice cores typically have very clear banding […] that corresponds to individual years of snow accumulation. This is because the snow that falls in summer under the permanent Arctic sun differs in texture to the snow that falls in winter. The distinctive paired layers can be counted like tree rings to produce a finely resolved chronology with annual and even seasonal resolution. […] Ice accumulation is generally much slower in Antarctica, so the ice core record takes us much further back in time. […] As layers of snow become compacted into ice, air bubbles recording the composition of the atmosphere are sealed in discrete layers. This fossil air can be recovered to establish the changing concentration of greenhouse gases such as carbon dioxide (CO2) and methane (CH4). The ice core record therefore allows climate scientists to explore the processes involved in climate variability over very long timescales. […] By sampling each layer of ice and measuring its oxygen isotope composition, Dansgaard produced an annual record of air temperature for the last 100,000 years. […] Perhaps the most startling outcome of this work was the demonstration that global climate could change extremely rapidly. Dansgaard showed that dramatic shifts in mean air temperature (>10°C) had taken place in less than a decade. These findings were greeted with scepticism and there was much debate about the integrity of the Greenland record, but subsequent work from other drilling sites vindicated all of Dansgaard’s findings. […] The ice core records from Greenland reveal a remarkable sequence of abrupt warming and cooling cycles within the last glacial stage. These are known as Dansgaard–Oeschger (D–O) cycles. […] [A] series of D–O cycles between 65,000 and 10,000 years ago [caused] mean annual air temperatures on the Greenland ice sheet [to be] shifted by as much as 10°C. Twenty-five of these rapid warming events have been identified during the last glacial period. 
This discovery dispelled the long held notion that glacials were lengthy periods of stable and unremitting cold climate. The ice core record shows very clearly that even the glacial climate flipped back and forth. […] D–O cycles commence with a very rapid warming (between 5 and 10°C) over Greenland followed by a steady cooling […] Deglaciations are rapid because positive feedbacks speed up both the warming trend and ice sheet decay. […] The ice core records heralded a new era in climate science: the study of abrupt climate change. Most sedimentary records of ice age climate change yield relatively low resolution information — a thousand years may be packed into a few centimetres of marine or lake sediment. In contrast, ice cores cover every year. They also retain a greater variety of information about the ice age past than any other archive. We can even detect layers of volcanic ash in the ice and pinpoint the date of ancient eruptions.”

“There are strong thermal gradients in both hemispheres because the low latitudes receive the most solar energy and the poles the least. To redress these imbalances the atmosphere and oceans move heat polewards — this is the basis of the climate system. In the North Atlantic a powerful surface current takes warmth from the tropics to higher latitudes: this is the famous Gulf Stream and its northeastern extension the North Atlantic Drift. Two main forces drive this current: the strong southwesterly winds and the return flow of colder, saltier water known as North Atlantic Deep Water (NADW). The surface current loses much of its heat to air masses that give maritime Europe a moist, temperate climate. Evaporative cooling also increases its salinity so that it begins to sink. As the dense and cold water sinks to the deep ocean to form NADW, it exerts a strong pull on the surface currents to maintain the cycle. It returns south at depths >2,000 m. […] The thermohaline circulation in the North Atlantic was periodically interrupted during Heinrich Events when vast discharges of melting icebergs cooled the ocean surface and reduced its salinity. This shut down the formation of NADW and suppressed the Gulf Stream.”

Links:

Archibald Geikie.
Andrew Ramsay (geologist).
Albrecht Penck. Eduard Brückner. Günz glaciation. Mindel glaciation. Riss glaciation. Würm.
Insolation.
Perihelion and aphelion.
Deep Sea Drilling Project.
Foraminifera.
δ18O. Isotope fractionation.
Marine isotope stage.
Cesare Emiliani.
Nicholas Shackleton.
Brunhes–Matuyama reversal. Geomagnetic reversal. Magnetostratigraphy.
Climate: Long range Investigation, Mapping, and Prediction (CLIMAP).
Uranium–thorium dating. Luminescence dating. Optically stimulated luminescence. Cosmogenic isotope dating.
The role of orbital forcing in the Early-Middle Pleistocene Transition (paper).
European Project for Ice Coring in Antarctica (EPICA).
Younger Dryas.
Lake Agassiz.
Greenland ice core project (GRIP).
J Harlen Bretz. Missoula Floods.
Pleistocene megafauna.

February 25, 2018 Posted by | Astronomy, Engineering, Geology, History, Paleontology, Physics | Leave a comment

Lakes (II)

(I have had some computer issues over the last couple of weeks, which explains my brief blogging hiatus. They should be resolved now, and as I'm already falling quite a bit behind in terms of my intended coverage of the books I've read this year, I hope to get rid of some of the backlog in the days to come.)

I have added some more observations from the second half of the book, as well as some related links, below.

“[R]ecycling of old plant material is especially important in lakes, and one way to appreciate its significance is to measure the concentration of CO2, an end product of decomposition, in the surface waters. This value is often above, sometimes well above, the value to be expected from equilibration of this gas with the overlying air, meaning that many lakes are net producers of CO2 and that they emit this greenhouse gas to the atmosphere. How can that be? […] Lakes are not sealed microcosms that function as stand-alone entities; on the contrary, they are embedded in a landscape and are intimately coupled to their terrestrial surroundings. Organic materials are produced within the lake by the phytoplankton, photosynthetic cells that are suspended in the water and that fix CO2, release oxygen (O2), and produce biomass at the base of the aquatic food web. Photosynthesis also takes place by attached algae (the periphyton) and submerged water plants (aquatic macrophytes) that occur at the edge of the lake where enough sunlight reaches the bottom to allow their growth. But additionally, lakes are the downstream recipients of terrestrial runoff from their catchments […]. These continuous inputs include not only water, but also subsidies of plant and soil organic carbon that are washed into the lake via streams, rivers, groundwater, and overland flows. […] The organic carbon entering lakes from the catchment is referred to as ‘allochthonous’, meaning coming from the outside, and it tends to be relatively old […] In contrast, much younger organic carbon is available […] as a result of recent photosynthesis by the phytoplankton and littoral communities; this carbon is called ‘autochthonous’, meaning that it is produced within the lake.”
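The 'value to be expected from equilibration' mentioned above can be estimated from Henry's law. The constants below are approximate textbook figures, not numbers from the book: KH for CO2 at 25 °C is roughly 0.034 mol L⁻¹ atm⁻¹, and atmospheric pCO2 is roughly 410 µatm.

```python
# Henry's law: dissolved [CO2] at equilibrium = KH * pCO2.
KH = 0.034          # mol / (L·atm), approximate value for CO2 at 25 °C
pCO2_atm = 410e-6   # atm (≈ 410 ppm), approximate modern atmosphere

equilibrium_umol_per_L = KH * pCO2_atm * 1e6
print(f"air-equilibrated CO2 ≈ {equilibrium_umol_per_L:.1f} µmol/L")

# A measured surface-water concentration above this (the value below is
# purely illustrative) marks the lake as a net CO2 source to the atmosphere,
# as the quote describes for lakes fed by allochthonous carbon.
measured = 30.0  # µmol/L, hypothetical reading
print("net CO2 source" if measured > equilibrium_umol_per_L else "net CO2 sink")
```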

“It used to be thought that most of the dissolved organic matter (DOM) entering lakes, especially the coloured fraction, was unreactive and that it would transit the lake to ultimately leave unchanged at the outflow. However, many experiments and field observations have shown that this coloured material can be partially broken down by sunlight. These photochemical reactions result in the production of CO2, and also the degradation of some of the organic polymers into smaller organic molecules; these in turn are used by bacteria and decomposed to CO2. […] Most of the bacterial species in lakes are decomposers that convert organic matter into mineral end products […] This sunlight-driven chemistry begins in the rivers, and continues in the surface waters of the lake. Additional chemical and microbial reactions in the soil also break down organic materials and release CO2 into the runoff and ground waters, further contributing to the high concentrations in lake water and its emission to the atmosphere. In algal-rich ‘eutrophic’ lakes there may be sufficient photosynthesis to cause the drawdown of CO2 to concentrations below equilibrium with the air, resulting in the reverse flux of this gas, from the atmosphere into the surface waters.”

“There is a precarious balance in lakes between oxygen gains and losses, despite the seemingly limitless quantities in the overlying atmosphere. This balance can sometimes tip to deficits that send a lake into oxygen bankruptcy, with the O2 mostly or even completely consumed. Waters that have O2 concentrations below 2mg/L are referred to as ‘hypoxic’, and will be avoided by most fish species, while waters in which there is a complete absence of oxygen are called ‘anoxic’ and are mostly the domain for specialized, hardy microbes. […] In many temperate lakes, mixing in spring and again in autumn are the critical periods of re-oxygenation from the overlying atmosphere. In summer, however, the thermocline greatly slows down that oxygen transfer from air to deep water, and in cooler climates, winter ice-cover acts as another barrier to oxygenation. In both of these seasons, the oxygen absorbed into the water during earlier periods of mixing may be rapidly consumed, leading to anoxic conditions. Part of the reason that lakes are continuously on the brink of anoxia is that only limited quantities of oxygen can be stored in water because of its low solubility. The concentration of oxygen in the air is 209 millilitres per litre […], but cold water in equilibrium with the atmosphere contains only 9ml/L […]. This scarcity of oxygen worsens with increasing temperature (from 4°C to 30°C the solubility of oxygen falls by 43 per cent), and it is compounded by faster rates of bacterial decomposition in warmer waters and thus a higher respiratory demand for oxygen.”
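Working through the numbers quoted above (cold water holds about 9 ml O2 per litre at equilibrium, and solubility falls by about 43 per cent between 4 °C and 30 °C; the 1.429 mg/ml gas density is my own outside figure for the unit conversion):

```python
# Oxygen solubility arithmetic using the values quoted in the text.
cold_ml_per_L = 9.0
warm_ml_per_L = cold_ml_per_L * (1 - 0.43)    # ≈ 5.1 ml/L at ~30 °C

# Convert to mg/L (O2 gas is ≈ 1.429 mg per ml at standard conditions).
MG_PER_ML_O2 = 1.429
warm_mg_per_L = warm_ml_per_L * MG_PER_ML_O2  # ≈ 7.3 mg/L

HYPOXIA_THRESHOLD = 2.0  # mg/L, from the text
print(f"warm saturated O2 ≈ {warm_mg_per_L:.1f} mg/L")
# Even warm water starts well above the 2 mg/L hypoxia line at saturation;
# it is respiration under a thermocline or ice cover, not low solubility
# alone, that tips a lake into hypoxia or anoxia.
```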

“Lake microbiomes play multiple roles in food webs as producers, parasites, and consumers, and as steps into the animal food chain […]. These diverse communities of microbes additionally hold centre stage in the vital recycling of elements within the lake ecosystem […]. These biogeochemical processes are not simply of academic interest; they totally alter the nutritional value, mobility, and even toxicity of elements. For example, sulfate is the most oxidized and also most abundant form of sulfur in natural waters, and it is the ion taken up by phytoplankton and aquatic plants to meet their biochemical needs for this element. These photosynthetic organisms reduce the sulfate to organic sulfur compounds, and once they die and decompose, bacteria convert these compounds to the rotten-egg smelling gas, H2S, which is toxic to most aquatic life. In anoxic waters and sediments, this effect is amplified by bacterial sulfate reducers that directly convert sulfate to H2S. Fortunately another group of bacteria, sulfur oxidizers, can use H2S as a chemical energy source, and in oxygenated waters they convert this reduced sulfur back to its benign, oxidized, sulfate form. […] [The] acid neutralizing capacity (or ‘alkalinity’) varies greatly among lakes. Many lakes in Europe, North America, and Asia have been dangerously shifted towards a low pH because they lacked sufficient carbonate to buffer the continuous input of acid rain that resulted from industrial pollution of the atmosphere. The acid conditions have negative effects on aquatic animals, including by causing a shift in aluminium to its more soluble and toxic form Al3+. Fortunately, these industrial emissions have been regulated and reduced in most of the developed world, although there are still legacy effects of acid rain that have resulted in a long-term depletion of carbonates and associated calcium in certain watersheds.”

“Rotifers, cladocerans, and copepods are all planktonic, that is their distribution is strongly affected by currents and mixing processes in the lake. However, they are also swimmers, and can regulate their depth in the water. For the smallest such as rotifers and copepods, this swimming ability is limited, but the larger zooplankton are able to swim over an impressive depth range during the twenty-four-hour ‘diel’ (i.e. light–dark) cycle. […] the cladocerans in Lake Geneva reside in the thermocline region and deep epilimnion during the day, and swim upwards by about 10m during the night, while cyclopoid copepods swim up by 60m, returning to the deep, dark, cold waters of the profundal zone during the day. Even greater distances up and down the water column are achieved by larger animals. The opossum shrimp, Mysis (up to 25mm in length) lives on the bottom of lakes during the day and in Lake Tahoe it swims hundreds of metres up into the surface waters, although not on moon-lit nights. In Lake Baikal, one of the main zooplankton species is the endemic amphipod, Macrohectopus branickii, which grows up to 38mm in size. It can form dense swarms at 100–200m depth during the day, but the populations then disperse and rise to the upper waters during the night. These nocturnal migrations connect the pelagic surface waters with the profundal zone in lake ecosystems, and are thought to be an adaptation towards avoiding visual predators, especially pelagic fish, during the day, while accessing food in the surface waters under the cover of nightfall. […] Although certain fish species remain within specific zones of the lake, there are others that swim among zones and access multiple habitats. […] This type of fish migration means that the different parts of the lake ecosystem are ecologically connected. For many fish species, moving between habitats extends all the way to the ocean. 
Anadromous fish migrate out of the lake and swim to the sea each year, and although this movement comes at considerable energetic cost, it has the advantage of access to rich marine food sources, while allowing the young to be raised in the freshwater environment with less exposure to predators. […] With the converse migration pattern, catadromous fish live in freshwater and spawn in the sea.”

“Invasive species that are the most successful and do the most damage once they enter a lake have a number of features in common: fast growth rates, broad tolerances, the capacity to thrive under high population densities, and an ability to disperse and colonize that is enhanced by human activities. Zebra mussels (Dreissena polymorpha) get top marks in each of these categories, and they have proven to be a troublesome invader in many parts of the world. […] A single Zebra mussel can produce up to one million eggs over the course of a spawning season, and these hatch into readily dispersed larvae (‘veligers’), that are free-swimming for up to a month. The adults can achieve densities up to hundreds of thousands per square metre, and their prolific growth within water pipes has been a serious problem for the cooling systems of nuclear and thermal power stations, and for the intake pipes of drinking water plants. A single Zebra mussel can filter a litre a day, and they have the capacity to completely strip the water of bacteria and protists. In Lake Erie, the water clarity doubled and diatoms declined by 80–90 per cent soon after the invasion of Zebra mussels, with a concomitant decline in zooplankton, and potential impacts on planktivorous fish. The invasion of this species can shift a lake from dominance of the pelagic to the benthic food web, but at the expense of native unionid clams on the bottom that can become smothered in Zebra mussels. Their efficient filtering capacity may also cause a regime shift in primary producers, from turbid waters with high concentrations of phytoplankton to a clearer lake ecosystem state in which benthic water plants dominate.”

“One of the many distinguishing features of H2O is its unusually high dielectric constant, meaning that it is a strongly polar solvent with positive and negative charges that can stabilize ions brought into solution. This dielectric property results from the asymmetrical electron cloud over the molecule […] and it gives liquid water the ability to leach minerals from rocks and soils as it passes through the ground, and to maintain these salts in solution, even at high concentrations. Collectively, these dissolved minerals produce the salinity of the water […] Sea water is around 35ppt, and its salinity is mainly due to the positively charged ions sodium (Na+), potassium (K+), magnesium (Mg2+), and calcium (Ca2+), and the negatively charged ions chloride (Cl−), sulfate (SO42−), and carbonate (CO32−). These solutes, collectively called the ‘major ions’, conduct electricity, and therefore a simple way to track salinity is to measure the electrical conductance of the water between two electrodes set a known distance apart. Lake and ocean scientists now routinely take profiles of salinity and temperature with a CTD: a submersible instrument that records conductance, temperature, and depth many times per second as it is lowered on a rope or wire down the water column. Conductance is measured in Siemens (or microSiemens (µS), given the low salt concentrations in freshwater lakes), and adjusted to a standard temperature of 25°C to give specific conductivity in µS/cm. All freshwater lakes contain dissolved minerals, with specific conductivities in the range 50–500µS/cm, while salt water lakes have values that can exceed sea water (about 50,000µS/cm), and are the habitats for extreme microbes”.
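The adjustment to a standard temperature of 25 °C mentioned above is usually done with a simple linear compensation; the ~2 %/°C coefficient below is a common approximation for natural waters, not a value from the book.

```python
# Linear temperature compensation of a raw conductance reading.
ALPHA = 0.02  # fractional change in conductance per °C (common approximation)

def specific_conductivity(measured_uS_cm: float, temp_C: float) -> float:
    """Convert a raw conductance reading to specific conductivity at 25 °C."""
    return measured_uS_cm / (1 + ALPHA * (temp_C - 25.0))

# A cold-water reading of 150 µS/cm at 8 °C corresponds to a noticeably
# higher 25 °C-equivalent value, since cold water conducts less well.
print(round(specific_conductivity(150.0, 8.0), 1))
```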

“The World Register of Dams currently lists 58,519 ‘large dams’, defined as those with a dam wall of 15m or higher; these collectively store 16,120km3 of water, equivalent to 213 years of flow of Niagara Falls on the USA–Canada border. […] Around a hundred large dam projects are in advanced planning or construction in Africa […]. More than 300 dams are planned or under construction in the Amazon Basin of South America […]. Reservoirs have a number of distinguishing features relative to natural lakes. First, the shape (‘morphometry’) of their basins is rarely circular or oval, but instead is often dendritic, with a tree-like main stem and branches ramifying out into the submerged river valleys. Second, reservoirs typically have a high catchment area to lake area ratio, again reflecting their riverine origins. For natural lakes, this ratio is relatively low […] These proportionately large catchments mean that reservoirs have short water residence times, and water quality is much better than might be the case in the absence of this rapid flushing. Nonetheless, noxious algal blooms can develop and accumulate in isolated bays and side-arms, and downstream next to the dam itself. Reservoirs typically experience water level fluctuations that are much larger and more rapid than in natural lakes, and this limits the development of littoral plants and animals. Another distinguishing feature of reservoirs is that they often show a longitudinal gradient of conditions. Upstream, the river section contains water that is flowing, turbulent, and well mixed; this then passes through a transition zone into the lake section up to the dam, which is often the deepest part of the lake and may be stratified and clearer due to decantation of land-derived particles. 
In some reservoirs, the water outflow is situated near the base of the dam within the hypolimnion, and this reduces the extent of oxygen depletion and nutrient build-up, while also providing cool water for fish and other animal communities below the dam. There is increasing attention being given to careful regulation of the timing and magnitude of dam outflows to maintain these downstream ecosystems. […] The downstream effects of dams continue out into the sea, with the retention of sediments and nutrients in the reservoir leaving less available for export to marine food webs. This reduction can also lead to changes in shorelines, with a retreat of the coastal delta and intrusion of seawater because natural erosion processes can no longer be offset by resupply of sediments from upstream.”
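The '213 years of Niagara flow' comparison quoted above checks out on the back of an envelope. The mean Niagara flow used here (~2,400 m³/s) is an outside approximation, not a figure from the book.

```python
# Back-of-envelope check of the dam-storage comparison.
storage_km3 = 16_120                      # total storage behind large dams (from the text)
niagara_m3_s = 2_400                      # approximate mean flow of Niagara Falls
seconds_per_year = 365.25 * 24 * 3600
annual_flow_km3 = niagara_m3_s * seconds_per_year / 1e9  # m³ -> km³
print(f"{storage_km3 / annual_flow_km3:.0f} years")      # close to the quoted 213
```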

“One of the most serious threats facing lakes throughout the world is the proliferation of algae and water plants caused by eutrophication, the overfertilization of waters with nutrients from human activities. […] Nutrient enrichment occurs both from ‘point sources’ of effluent discharged via pipes into the receiving waters, and ‘nonpoint sources’ such as the runoff from roads and parking areas, agricultural lands, septic tank drainage fields, and terrain cleared of its nutrient- and water-absorbing vegetation. By the 1970s, even many of the world’s larger lakes had begun to show worrying signs of deterioration from these sources of increasing enrichment. […] A sharp drop in water clarity is often among the first signs of eutrophication, although in forested areas this effect may be masked for many years by the greater absorption of light by the coloured organic materials that are dissolved within the lake water. A drop in oxygen levels in the bottom waters during stratification is another telltale indicator of eutrophication, with the eventual fall to oxygen-free (anoxic) conditions in these lower strata of the lake. However, the most striking impact with greatest effect on ecosystem services is the production of harmful algal blooms (HABs), specifically by cyanobacteria. In eutrophic, temperate latitude waters, four genera of bloom-forming cyanobacteria are the usual offenders […]. These may occur alone or in combination, and although each has its own idiosyncratic size, shape, and lifestyle, they have a number of impressive biological features in common. First and foremost, their cells are typically full of hydrophobic protein cases that exclude water and trap gases. These honeycombs of gas-filled chambers, called ‘gas vesicles’, reduce the density of the cells, allowing them to float up to the surface where there is light available for growth. 
Put a drop of water from an algal bloom under a microscope and it will be immediately apparent that the individual cells are extremely small, and that the bloom itself is composed of billions of cells per litre of lake water.”

“During the day, the [algal] cells capture sunlight and produce sugars by photosynthesis; this increases their density, eventually to the point where they are heavier than the surrounding water and sink to more nutrient-rich conditions at depth in the water column or at the sediment surface. These sugars are depleted by cellular respiration, and this loss of ballast eventually results in cells becoming less dense than water and floating again towards the surface. This alternation of sinking and floating can result in large fluctuations in surface blooms over the twenty-four-hour cycle. The accumulation of bloom-forming cyanobacteria at the surface gives rise to surface scums that then can be blown into bays and washed up onto beaches. These dense populations of colonies in the water column, and especially at the surface, can shade out bottom-dwelling water plants, as well as greatly reduce the amount of light for other phytoplankton species. The resultant ‘cyanobacterial dominance’ and loss of algal species diversity has negative implications for the aquatic food web […] This negative impact on the food web may be compounded by the final collapse of the bloom and its decomposition, resulting in a major drawdown of oxygen. […] Bloom-forming cyanobacteria are especially troublesome for the management of drinking water supplies. First, there is the overproduction of biomass, which results in a massive load of algal particles that can exceed the filtration capacity of a water treatment plant […]. Second, there is an impact on the taste of the water. […] The third and most serious impact of cyanobacteria is that some of their secondary compounds are highly toxic. […] phosphorus is the key nutrient limiting bloom development, and efforts to preserve and rehabilitate freshwaters should pay specific attention to controlling the input of phosphorus via point and nonpoint discharges to lakes.”

Ultramicrobacteria.
The viral shunt in marine foodwebs.
Proteobacteria. Alphaproteobacteria. Betaproteobacteria. Gammaproteobacteria.
Mixotroph.
Carbon cycle. Nitrogen cycle. Ammonification. Anammox. Comammox.
Methanotroph.
Phosphorus cycle.
Littoral zone. Limnetic zone. Profundal zone. Benthic zone. Benthos.
Phytoplankton. Diatom. Picoeukaryote. Flagellates. Cyanobacteria.
Trophic state (-index).
Amphipoda. Rotifer. Cladocera. Copepod. Daphnia.
Redfield ratio.
δ15N.
Thermistor.
Extremophile. Halophile. Psychrophile. Acidophile.
Caspian Sea. Endorheic basin. Mono Lake.
Alpine lake.
Meromictic lake.
Subglacial lake. Lake Vostok.
Thermus aquaticus. Taq polymerase.
Lake Monoun.
Microcystin. Anatoxin-a.

February 2, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Engineering, Zoology | Leave a comment

Rivers (II)

Some more observations from the book and related links below.

“By almost every measure, the Amazon is the greatest of all the large rivers. Encompassing more than 7 million square kilometres, its drainage basin is the largest in the world and makes up 5% of the global land surface. The river accounts for nearly one-fifth of all the river water discharged into the oceans. The flow is so great that water from the Amazon can still be identified 125 miles out in the Atlantic […] The Amazon has some 1,100 tributaries, and 7 of these are more than 1,600 kilometres long. […] In the lowlands, most Amazonian rivers have extensive floodplains studded with thousands of shallow lakes. Up to one-quarter of the entire Amazon Basin is periodically flooded, and these lakes become progressively connected with each other as the water level rises.”
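A quick sanity check of the basin-area claim. The global land area used below (~149 million km²) is an outside figure, not from the book.

```python
# Amazon basin as a share of global land surface.
amazon_basin_km2 = 7.05e6     # 'more than 7 million' per the text
global_land_km2 = 149e6       # approximate global land area
share = amazon_basin_km2 / global_land_km2
print(f"{share:.1%}")         # roughly the 5% quoted
```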

“To hydrologists, the term ‘flood’ refers to a river’s annual peak discharge period, whether the water inundates the surrounding landscape or not. In more common parlance, however, a flood is synonymous with the river overflowing its banks […] Rivers flood in the natural course of events. This often occurs on the floodplain, as the name implies, but flooding can affect almost all of the length of the river. Extreme weather, particularly heavy or protracted rainfall, is the most frequent cause of flooding. The melting of snow and ice is another common cause. […] River floods are one of the most common natural hazards affecting human society, frequently causing social disruption, material damage, and loss of life. […] Most floods have a seasonal element in their occurrence […] It is a general rule that the magnitude of a flood is inversely related to its frequency […] Many of the less predictable causes of flooding occur after a valley has been blocked by a natural dam as a result of a landslide, glacier, or lava flow. Natural dams may cause upstream flooding as the blocked river forms a lake and downstream flooding as a result of failure of the dam.”
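
The inverse relationship between a flood's magnitude and its frequency is usually expressed through the return period. A minimal sketch, using hypothetical annual peak discharges and the standard Weibull plotting position T = (n + 1)/rank:

```python
# Sketch of the magnitude-frequency rule for floods: each annual peak
# discharge is assigned an estimated return period T = (n + 1) / rank,
# where rank 1 is the largest of n annual maxima. The bigger the flood,
# the longer its return period (i.e. the rarer it is).

def return_periods(annual_peaks):
    """Map each annual peak discharge (m^3/s) to an estimated return period (years)."""
    n = len(annual_peaks)
    ranked = sorted(annual_peaks, reverse=True)
    return [(q, (n + 1) / (rank + 1)) for rank, q in enumerate(ranked)]

# Hypothetical ten-year record of annual maximum discharges:
peaks = [850, 1200, 640, 2100, 980, 760, 1500, 1100, 900, 700]
for q, t in return_periods(peaks):
    print(f"{q:5d} m^3/s  ~ {t:4.1f}-year flood")
```

With ten years of data the largest recorded flood comes out as roughly an 11-year event and the smallest as a 1.1-year event; real flood-frequency analysis would fit a distribution (e.g. Gumbel) rather than stop at plotting positions.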

“The Tigris-Euphrates, Nile, and Indus are all large, exotic river systems, but in other respects they are quite different. The Nile has a relatively gentle gradient in Egypt and a channel that has experienced only small changes over the last few thousand years, by meander cut-off and a minor shift eastwards. The river usually flooded in a regular and predictable way. The stability and long continuity of the Egyptian civilization may be a reflection of its river’s relative stability. The steeper channel of the Indus, by contrast, has experienced major avulsions over great distances on the lower Indus Plain and some very large floods caused by the failure of glacier ice dams in the Himalayan mountains. Likely explanations for the abandonment of many Harappan cities […] take account of damage caused by major floods and/or the disruption caused by channel avulsion leading to a loss of water supply. Channel avulsion was also a problem for the Sumerian civilization on the alluvial plain called Mesopotamia […] known for the rise and fall of its numerous city states. Most of these cities were situated along the Euphrates River, probably because it was more easily controlled for irrigation purposes than the Tigris, which flowed faster and carried much more water. However, the Euphrates was an anastomosing river with multiple channels that diverge and rejoin. Over time, individual branch channels ceased to flow as others formed, and settlements located on these channels inevitably declined and were abandoned as their water supply ran dry, while others expanded as their channels carried greater amounts of water.”

“During the colonization of the Americas in the mid-18th century and the imperial expansion into Africa and Asia in the late 19th century, rivers were commonly used as boundaries because they were the first, and frequently the only, features mapped by European explorers. The diplomats in Europe who negotiated the allocation of colonial territories claimed by rival powers knew little of the places they were carving up. Often, their limited knowledge was based solely on maps that showed few details, rivers being the only distinct physical features marked. Today, many international river boundaries remain as legacies of those historical decisions based on poor geographical knowledge because states have been reluctant to alter their territorial boundaries from original delimitation agreements. […] no less than three-quarters of the world’s international boundaries follow rivers for at least part of their course. […] approximately 60% of the world’s fresh water is drawn from rivers shared by more than one country.”

“The sediments carried in rivers, laid down over many years, represent a record of the changes that have occurred in the drainage basin through the ages. Analysis of these sediments is one way in which physical geographers can interpret the historical development of landscapes. They can study the physical and chemical characteristics of the sediments themselves and/or the biological remains they contain, such as pollen or spores. […] The simple rate at which material is deposited by a river can be a good reflection of how conditions have changed in the drainage basin. […] Pollen from surrounding plants is often found in abundance in fluvial sediments, and the analysis of pollen can yield a great deal of information about past conditions in an area. […] Very long sediment cores taken from lakes and swamps enable us to reconstruct changes in vegetation over very long time periods, in some cases over a million years […] Because climate is a strong determinant of vegetation, pollen analysis has also proved to be an important method for tracing changes in past climates.”

“The energy in flowing and falling water has been harnessed to perform work by turning water-wheels for more than 2,000 years. The moving water turns a large wheel and a shaft connected to the wheel axle transmits the power from the water through a system of gears and cogs to work machinery, such as a millstone to grind corn. […] The early medieval watermill was able to do the work of between 30 and 60 people, and by the end of the 10th century in Europe, waterwheels were commonly used in a wide range of industries, including powering forge hammers, oil and silk mills, sugar-cane crushers, ore-crushing mills, breaking up bark in tanning mills, pounding leather, and grinding stones. Nonetheless, most were still used for grinding grains for preparation into various types of food and drink. The Domesday Book, a survey prepared in England in AD 1086, lists 6,082 watermills, although this is probably a conservative estimate because many mills were not recorded in the far north of the country. By 1300, this number had risen to exceed 10,000. […] Medieval watermills typically powered their wheels by using a dam or weir to concentrate the falling water and pond a reserve supply. These modifications to rivers became increasingly common all over Europe, and by the end of the Middle Ages, in the mid-15th century, watermills were in use on a huge number of rivers and streams. The importance of water power continued into the Industrial Revolution […]. The early textile factories were built to produce cloth using machines driven by waterwheels, so they were often called mills. […] [Today,] about one-third of all countries rely on hydropower for more than half their electricity. Globally, hydropower provides about 20% of the world’s total electricity supply.”
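
The power available from falling water, whether at a medieval mill or a modern hydroelectric plant, follows the same relation P = ηρgQh. A minimal sketch with hypothetical plant figures (the flow, head, and efficiency below are illustrative, not from the book):

```python
def hydro_power_mw(flow_m3s, head_m, efficiency=0.9, rho=1000.0, g=9.81):
    """Electrical power from falling water, P = eta * rho * g * Q * h,
    returned in megawatts. rho is water density (kg/m^3), g gravity (m/s^2)."""
    return efficiency * rho * g * flow_m3s * head_m / 1e6

# Hypothetical plant: 500 m^3/s through a 100 m head at 90% efficiency.
print(f"{hydro_power_mw(500, 100):.0f} MW")
```

The same formula explains why a slow millstream turning a waterwheel delivers only a few kilowatts: with Q on the order of 1 m³/s and a head of a couple of metres, P is four to five orders of magnitude smaller.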

“Deliberate manipulation of river channels through engineering works, including dam construction, diversion, channelization, and culverting, […] has a long history. […] In Europe today, almost 80% of the total discharge of the continent’s major rivers is affected by measures designed to regulate flow, whether for drinking water supply, hydroelectric power generation, flood control, or any other reason. The proportion in individual countries is higher still. About 90% of rivers in the UK are regulated as a result of these activities, while in the Netherlands this percentage is close to 100. By contrast, some of the largest rivers on other continents, including the Amazon and the Congo, are hardly manipulated at all. […] Direct and intentional modifications to rivers are complemented by the impacts of land use and land use changes which frequently result in the alteration of rivers as an unintended side effect. Deforestation, afforestation, land drainage, agriculture, and the use of fire have all had significant impacts, with perhaps the most extreme effects produced by construction activity and urbanization. […] The major methods employed in river regulation are the construction of large dams […], the building of run-of-river impoundments such as weirs and locks, and by channelization, a term that covers a range of river engineering works including widening, deepening, straightening, and the stabilization of banks. […] Many aspects of a dynamic river channel and its associated ecosystems are mutually adjusting, so a human activity in a landscape that affects the supply of water or sediment is likely to set off a complex cascade of other alterations.”

“The methods of storage (in reservoirs) and distribution (by canal) have not changed fundamentally since the earliest river irrigation schemes, with the exception of some contemporary projects’ use of pumps to distribute water over greater distances. Nevertheless, many irrigation canals still harness the force of gravity. Half the world’s large dams (defined as being 15 metres or higher) were built exclusively or primarily for irrigation, and about one-third of the world’s irrigated cropland relies on reservoir water. In several countries, including such populous nations as India and China, more than 50% of arable land is irrigated by river water supplied from dams. […] Sadly, many irrigation schemes are not well managed and a number of environmental problems are frequently experienced as a result, both on-site and off-site. In many large networks of irrigation canals, less than half of the water diverted from a river or reservoir actually benefits crops. A lot of water seeps away through unlined canals or evaporates before reaching the fields. Some also runs off the fields or infiltrates through the soil, unused by plants, because farmers apply too much water or at the wrong time. Much of this water seeps back into nearby streams or joins underground aquifers, so can be used again, but the quality of water may deteriorate if it picks up salts, fertilizers, or pesticides. Excessive applications of irrigation water often result in rising water tables beneath fields, causing salinization and waterlogging. These processes reduce crop yields on irrigation schemes all over the world.”

“[Deforestation can contribute] to the degradation of aquatic habitats in numerous ways. The loss of trees along river banks can result in changes in the species found in the river because fewer trees means a decline in plant matter and insects falling from them, items eaten by some fish. Fewer trees on river banks also results in less shade. More sunlight reaching the river results in warmer water and the enhanced growth of algae. A change in species can occur as fish that feed on falling food are edged out by those able to feed on algae. Deforestation also typically results in more runoff and more soil erosion. This sediment may cover spawning grounds, leading to lower reproduction rates. […] Grazing and trampling by livestock reduces vegetation cover and causes the compaction of soil, which reduces its infiltration capacity. As rainwater passes over or through the soil in areas of intensive agriculture, it picks up residues from pesticides and fertilizers and transports them to rivers. In this way, agriculture has become a leading source of river pollution in certain parts of the world. Concentrations of nitrates and phosphates, derived from fertilizers, have risen notably in many rivers in Europe and North America since the 1950s and have led to a range of […] problems encompassed under the term ‘eutrophication’ – the raising of biological productivity caused by nutrient enrichment. […] In slow-moving rivers […] the growth of algae reduces light penetration and depletes the oxygen in the water, sometimes causing fish kills.”

“One of the most profound ways in which people alter rivers is by damming them. Obstructing a river and controlling its flow in this way brings about a raft of changes. A dam traps sediments and nutrients, alters the river’s temperature and chemistry, and affects the processes of erosion and deposition by which the river sculpts the landscape. Dams create more uniform flow in rivers, usually by reducing peak flows and increasing minimum flows. Since the natural variation in flow is important for river ecosystems and their biodiversity, when dams even out flows the result is commonly fewer fish of fewer species. […] the past 50 years or so has seen a marked escalation in the rate and scale of construction of dams all over the world […]. At the beginning of the 21st century, there were about 800,000 dams worldwide […] In some large river systems, the capacity of dams is sufficient to hold more than the entire annual discharge of the river. […] Globally, the world’s major reservoirs are thought to control about 15% of the runoff from the land. The volume of water trapped worldwide in reservoirs of all sizes is no less than five times the total global annual river flow […] Downstream of a reservoir, the hydrological regime of a river is modified. Discharge, velocity, water quality, and thermal characteristics are all affected, leading to changes in the channel and its landscape, plants, and animals, both on the river itself and in deltas, estuaries, and offshore. By slowing the flow of river water, a dam acts as a trap for sediment and hence reduces loads in the river downstream. As a result, the flow downstream of the dam is highly erosive. A relative lack of silt arriving at a river’s delta can result in more coastal erosion and the intrusion of seawater that brings salt into delta ecosystems. […] The dam-barrier effect on migratory fish and their access to spawning grounds has been recognized in Europe since medieval times.”

“One of the most important effects cities have on rivers is the way in which urbanization affects flood runoff. Large areas of cities are typically impermeable, being covered by concrete, stone, tarmac, and bitumen. This tends to increase the amount of runoff produced in urban areas, an effect exacerbated by networks of storm drains and sewers. This water carries relatively little sediment (again, because soil surfaces have been covered by impermeable materials), so when it reaches a river channel it typically causes erosion and widening. Larger and more frequent floods are another outcome of the increase in runoff generated by urban areas. […] It […] seems very likely that efforts to manage the flood hazard on the Mississippi have contributed to an increased risk of damage from tropical storms on the Gulf of Mexico coast. The levées built along the river have contributed to the loss of coastal wetlands, starving them of sediment and fresh water, thereby reducing their dampening effect on storm surge levels. This probably enhanced the damage from Hurricane Katrina which struck the city of New Orleans in 2005.”

Links:

Onyx River.
Yangtze. Yangtze floods.
Missoula floods.
Murray River.
Ganges.
Thalweg.
Southeastern Anatolia Project.
Water conflict.
Hydropower.
Fulling mill.
Maritime transport.
Danube.
Lock (water navigation).
Hydrometry.
Yellow River.
Aswan High Dam. Warragamba Dam. Three Gorges Dam.
Onchocerciasis.
River restoration.

January 16, 2018 Posted by | Biology, Books, Ecology, Engineering, Geography, Geology, History | Leave a comment

Civil engineering (II)

Some more quotes and links:

“Major earthquakes occur every year in different parts of the world. The various continents that make up the surface of the Earth are moving slowly relative to each other. The rough boundaries between the tectonic plates try to resist this relative motion but eventually the energy stored in the interface (or geological fault) becomes too big to resist and slip occurs, releasing the energy. The energy travels as a wave through the crust of the Earth, shaking the ground as it passes. The speed at which the wave travels depends on the stiffness and density of the material through which it is passing. Topographic effects may concentrate the energy of the shaking. Mexico City sits on the bed of a former lake, surrounded by hills. Once the energy reaches this bowl-like location it becomes trapped and causes much more damage than would be experienced if the city were sitting on a flat plain without the surrounding mountains. Designing a building to withstand earthquake shaking is possible, provided we have some idea about the nature and magnitude and geological origin of the loadings. […] Heavy mud or tile roofs on flimsy timber walls are a disaster – the mass of the roof sways from side to side as it picks up energy from the shaking ground and, in collapsing, flattens the occupants. Provision of some diagonal bracing to prevent the structure from deforming when it is shaken can be straightforward. Shops like to have open spaces for ground floor display areas. There are often post-earthquake pictures of buildings which have lost a storey as this unbraced ground floor structure collapsed. […] Earthquakes in developing countries tend to attract particular coverage. The extent of the damage caused is high because the enforcement of design codes (if they exist) is poor. […] The majority of the damage in Haiti was the result of poor construction and the total lack of any building code requirements.”

“[A]n aircraft is a large structure, and the structural design is subject to the same laws of equilibrium and material behaviour as any structure which is destined never to leave the ground. […] The A380 is an enormous structure, some 25 m high, 73 m long and with a wingspan of about 80 m […]. For comparison, St Paul’s Cathedral in London is 73 m wide at the transept; and the top of the inner dome, visible from inside the cathedral, is about 65 m above the floor of the nave. […] The rules of structural mechanics that govern the design of aircraft structures are no different from those that govern the design of structures that are intended to remain on the ground. In the mid 20th century many aircraft and civil structural engineers would not have recognized any serious intellectual boundary between their activities. The aerodynamic design of an aircraft ensures smooth flow of air over the structure to reduce resistance and provide lift. Bridges in exposed places are not in need of lift but can benefit from reduced resistance to air flow resulting from the use of continuous hollow sections (box girders) rather than trusses to form the deck. The stresses can also flow more smoothly within the box, and the steel be used more efficiently. Testing of potential box girder shapes in wind tunnels helps to check the influence of the presence of the ground or water not far below the deck on the character of the wind flow.”

“Engineering is concerned with finding solutions to problems. The initial problems faced by the engineer relate to the identification of the set of functional criteria which truly govern the design and which will be generated by the client or the promoter of the project. […] The more forcefully the criteria are stated the less freedom the design engineer will have in the search for an appropriate solution. Design is the translation of ideas into achievement. […] The designer starts with (or has access to) a mental store of solutions previously adopted for related problems and then seeks to compromise as necessary in order to find the optimum solution satisfying multiple criteria. The design process will often involve iteration of concept and technology and the investigation of radically different solutions and may also require consultation with the client concerning the possibility of modification of some of the imposed functional criteria if the problem has been too tightly defined. […] The term technology is being used here to represent that knowledge and those techniques which will be necessary in order to realize the concept; recognizing that a concept which has no appreciation of the technologies available for construction may require the development of new technologies in order that it may be realized. Civil engineering design continues through the realization of the project by the constructor or contractor. […] The process of design extends to the eventual assessment of the performance of the completed project as perceived by the client or user (who may not have been party to the original problem definition).”

“An arch or vault curved only in one direction transmits loads by means of forces developed within the thickness of the structure which then push outwards at the boundaries. A shell structure is a generalization of such a vault which is curved in more than one direction. An intact eggshell is very stiff under any loading applied orthogonally (at right angles) to the shell. If the eggshell is broken it becomes very flexible and to stiffen it again restraint is required along the free edge to replace the missing shell. The techniques of prestressing concrete permit the creation of very exciting and daring shell structures with extraordinarily small thickness but the curvatures of the shells and the shapes of the edges dictate the support requirements.”

“In the 19th century it was quicker to travel from Rome to Ancona by sea round the southern tip of the boot of Italy (a distance of at least 2000 km) than to travel overland, a distance of some 200 km as the crow flies. Land-based means of transport require infrastructure that must be planned and constructed and then maintained. Even today water transport is used on a large scale for bulky or heavy items for which speed is not necessary.”

“High speed rail works well (economically) in areas such as Europe and Japan where there is adequate infrastructure in the destination cities for access to and from the railway stations. In parts of the world – such as much of the USA – where the distances are much greater, population densities lower, railway networks much less developed, and local transport in cities much less coordinated (and the motor car has dominated for far longer) the economic case for high speed rail is harder to make. The most successful schemes for high speed rail have involved construction of new routes with dedicated track for the high speed trains with straighter alignments, smoother curves, and gentler gradients than conventional railways – and consequent reduced scope for delays resulting from mixing of high speed and low speed trains on the same track”.

“The Millennium Bridge is a suspension bridge with a very low sag-to-span ratio which lends itself very readily to sideways oscillation. There are plenty of rather bouncy suspension footbridges around the world but the modes of vibration are predominantly those in the plane of the bridge, involving vertical movements. Modes which involve lateral movement and twisting of the deck are always there but being out-of-plane may be overlooked. The more flexible the bridge in any mode of deformation, the more movement there is when people walk across. There is a tendency for people to vary their pace to match the movements of the bridge. Such an involuntary feedback mechanism is guaranteed to lead to resonance of the structure and continued build-up of movements. There will usually be some structural limitation on the magnitude of the oscillations – as the geometry of the bridge changes so the natural frequency will change subtly – but it can still be a bit alarming for the user. […] The Millennium Bridge was stabilized (retrofitted) by the addition of restraining members and additional damping mechanisms to prevent growth of oscillation and to move the natural frequency of this mode of vibration away from the likely frequencies of human footfall. The revised design […] ensured that dynamic response would be acceptable for crowd loading up to two people per square metre. At this density walking becomes difficult so it is seen as a conservative criterion.”
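
The retrofit strategy described above — add damping and shift the natural frequency away from the footfall frequency — can be illustrated with the dynamic amplification factor of a driven damped oscillator. A minimal sketch; the frequencies and damping ratios below are hypothetical, chosen only to show the qualitative effect:

```python
import math

def amplification(f_drive, f_nat, zeta):
    """Steady-state amplitude of a damped oscillator driven at f_drive,
    relative to the static deflection: 1 / sqrt((1 - r^2)^2 + (2*zeta*r)^2),
    where r = f_drive / f_nat and zeta is the damping ratio."""
    r = f_drive / f_nat
    return 1.0 / math.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)

# Footfall driving the bridge at its lateral natural frequency (~1 Hz, assumed):
print(amplification(1.0, 1.0, 0.005))  # very light damping: huge resonant response
print(amplification(1.0, 1.0, 0.20))   # added dampers: response tamed
print(amplification(1.0, 1.5, 0.20))   # natural frequency also shifted away: smaller still
```

At resonance the amplification is 1/(2ζ), so raising the damping ratio from 0.5% to 20% cuts the resonant response by a factor of 40; moving the natural frequency off the footfall frequency reduces it further, which is exactly the two-pronged fix applied to the bridge.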

“The development of appropriately safe systems requires that […] parallel control systems should be truly independent so that they are not likely to fail simultaneously. Robustness is thus about ensuring that safety can be maintained even when some elements of the system cease to operate. […] There is a human element in all systems, providing some overall control and an ability to react in critical circumstances. The human intervention is particularly important where all electronic or computer control systems are eliminated and the clock is ticking inexorably towards disaster. Although ultimately whenever a structural failure occurs there is some purely mechanical explanation – some element of the structure was overloaded because some mode of response had been overlooked – there is often a significant human factor which must be considered. We may think that we fully understand the mechanical operation, but may neglect to ensure that the human elements are properly controlled. A requirement for robustness implies both that the damage consequent on the removal of a single element of the structure or system should not be disproportionate (mechanical or structural robustness) but also that the project should not be jeopardized by human failure (organizational robustness). […] A successful civil engineering project is likely to have evident robustness in concept, technology, and realization. A concept which is unclear, a technology in its infancy, and components of realization which lack coherence will all contribute to potential disaster.”

“Tunnelling inevitably requires removal of ground from the face with a tendency for the ground above and ahead of the tunnel to fall into the gap. The success of the tunnelling operation can be expressed in terms of the volume loss: the proportion of the volume of the tunnel which is unintentionally excavated causing settlement at the ground surface – the smaller this figure the better. […] How can failure of the tunnel be avoided? One route to assurance will be to perform numerical analysis of the tunnel construction process with close simulations of all the stages of excavation and loading of the new structure. Computer analyses are popular because they appear simple to perform, even in three dimensions. However, such analyses can be no more reliable than the models of soil behaviour on which they are based and on the way in which the rugged detail of construction is translated into numerical instructions. […] Whatever one’s confidence in the numerical analysis it will obviously not be a bad idea to observe the tunnel while it is being constructed. Obvious things to observe include tunnel convergence – the change in the cross-section of the tunnel in different directions – and movements at the ground surface and existing buildings over the tunnel. […] observation is not of itself sufficient unless there is some structured strategy for dealing with the observations. At Heathrow […] the data were not interpreted until after the failure had occurred. It was then clear that significant and undesirable movements had been occurring and could have been detected at least two months before the failure.”

“Fatigue is a term used to describe a failure which develops as a result of repeated loading – possibly over many thousands or millions of cycles. […] Fatigue cannot be avoided, and the rate of development of damage may not be easy to predict. It often requires careful techniques of inspection to identify the presence of incipient cracks which may eventually prove structurally devastating.”

“Some projects would clearly be regarded as failures – a dam bursts, a flood protection dyke is overtopped, a building or bridge collapses. In each case there is the possibility of a technical description of the processes leading to the failure – in the end the strength of the material in some location has been exceeded by the demands of the applied loads or the load carrying paths have been disrupted. But failure can also be financial or economic. Such failures are less evident: a project that costs considerably more than the original estimate has in some way failed to meet its expectations. A project that, once built, is quite unable to generate the revenue that was expected in order to justify the original capital outlay has also failed.”

1999 Jiji earthquake.
Taipei 101. Tuned mass damper.
Tacoma Narrows Bridge (1940). Brooklyn Bridge. Golden Gate Bridge.
Sydney Opera House. Jørn Utzon. Ove Arup. Christiani & Nielsen.
Bell Rock Lighthouse. Northern Lighthouse Board. Richard Henry Brunton.
Panama Canal. Culebra Cut. Gatun Lake. Panamax.
Great Western Railway.
Shinkansen. TGV.
Ronan Point.
New Austrian tunnelling method.
Crossrail.
Fukushima Daiichi nuclear disaster.
Turnkey project. Unit price contract.
Colin Buchanan.
Dongtan.

December 21, 2017 Posted by | Books, Economics, Engineering, Geology | Leave a comment

Civil engineering (I)

I have included some quotes from the first half of the book below, and some links related to the book’s coverage:

“Today, the term ‘civil engineering’ distinguishes the engineering of the provision of infrastructure from […] many other branches of engineering that have come into existence. It thus has a somewhat narrower scope now than it had in the 18th and early 19th centuries. There is a tendency to define it by exclusion: civil engineering is not mechanical engineering, not electrical engineering, not aeronautical engineering, not chemical engineering… […] Civil engineering today is seen as encompassing much of the infrastructure of modern society provided it does not move – roads, buildings, dams, tunnels, drains, airports (but not aeroplanes or air traffic control), railways (but not railway engines or signalling), power stations (but not turbines). The fuzzy definition of civil engineering as the engineering of infrastructure […] should make us recognize that there are no precise boundaries and that any practising engineer is likely to have to communicate across whatever boundaries appear to have been created. […] The boundary with science is also fuzzy. Engineering is concerned with the solution of problems now, and cannot necessarily wait for the underlying science to catch up. […] All engineering is concerned with finding solutions to problems for which there is rarely a single answer. Presented with an appropriate ‘solution-neutral problem definition’ the engineer needs to find ways of applying existing or emergent technologies to the solution of the problem.”

“[T]he behaviour of the soil or other materials that make up the ground in its natural state is rather important to engineers. However, although it can be guessed from exploratory probings and from knowledge of the local geological history, the exact nature of the ground can never be discovered before construction begins. By contrast, road embankments are formed of carefully prepared soils; and water-retaining dams may also be constructed from selected soils and rocks – these can be seen as ‘designer soils’. […] Soils are formed of mineral particles packed together with surrounding voids – the particles can never pack perfectly. […] The voids around the soil particles are filled with either air or water or a mixture of the two. In northern climes the ground is saturated with water for much of the time. For deformation of the soil to occur, any change in volume must be accompanied by movement of water through and out of the voids. Clay particles are small, the surrounding voids are small, and movement of water through these voids is slow – the permeability is said to be low. If a new load, such as a bridge deck or a tall building, is to be constructed, the ground will want to react to the new loads. A clayey soil will be unable to react instantly because of the low permeability and, as a result, there will be delayed deformations as the water is squeezed out of the clay ground and the clay slowly consolidates. The consolidation of a thick clay layer may take centuries to approach completion.”

“Rock (or stone) is a good construction material. Evidently there are different types of rock with different strengths and different abilities to resist the decay that is encouraged by sunshine, moisture, and frost, but rocks are generally strong, dimensionally stable materials: they do not shrink or twist with time. We might measure the strength of a type of rock in terms of the height of a column of that rock that will just cause the lowest layer of the rock to crush: on such a scale sandstone would have a strength of about 2 kilometres, good limestone about 4 kilometres. A solid pyramid 150 m high uses quite a small proportion of this available strength. […] Iron has been used for several millennia for elements such as bars and chain links which might be used in conjunction with other structural materials, particularly stone. Stone is very strong when compressed, or pushed, but not so strong in tension: when it is pulled cracks may open up. The provision of iron links between adjacent stone blocks can help to provide some tensile strength. […] Cast iron can be formed into many different shapes and is resistant to rust but is brittle – when it breaks it loses all its strength very suddenly. Wrought iron, a mixture of iron with a low proportion of carbon, is more ductile – it can be stretched without losing all its strength – and can be beaten or rolled (wrought) into simple shapes. Steel is a mixture of iron with a higher proportion of carbon than wrought iron and with other elements […] which provide particular mechanical benefits. Mild steel has a remarkable ductility – a tolerance of being stretched – which results from its chemical composition and which allows it to be rolled into sheets or extruded into chosen shapes without losing its strength and stiffness. There are limits on the ratio of the quantities of carbon and other elements to that of the iron itself in order to maintain these desirable properties for the mixture. 
[…] Steel is very strong and stiff in tension or pulling: steel wire and steel cables are obviously very well suited for hanging loads.”
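
The "column height" measure of strength in the quote is just the relation stress = density × g × height, rearranged for height. A small sketch with typical textbook strength and density values (assumptions, not the book's figures):

```python
# The 'self-crushing column height' measure of rock strength: the height h
# at which the weight of the column crushes its base, h = sigma_c / (rho * g).
# Strength and density figures below are typical assumed values.

G = 9.81  # m/s^2

def crushing_height_km(compressive_strength_pa, density_kg_m3):
    return compressive_strength_pa / (density_kg_m3 * G) / 1000.0

h_sandstone = crushing_height_km(50e6, 2500.0)   # ~2 km, as in the quote
h_limestone = crushing_height_km(100e6, 2600.0)  # ~4 km for good limestone
# A solid 150 m pyramid uses well under a tenth of sandstone's strength:
utilisation = 0.150 / h_sandstone
```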

“As concrete sets, the chemical reactions that turn a sloppy mixture of cement and water and stones into a rock-like solid produce a lot of heat. If a large volume of concrete is poured without any special precautions then, as it cools down, having solidified, it will shrink and crack. The Hoover Dam was built as a series of separate concrete columns of limited dimension through which pipes carrying cooling water were passed in order to control the temperature rise. […] Concrete is mixed as a heavy fluid with no strength until it starts to set. Embedding bars of a material such as steel, which is strong in tension, in the fluid concrete gives some tensile strength. Reinforced concrete is used today for huge amounts of construction throughout the world. When the amount of steel present in the concrete is substantial, additives are used to encourage the fresh concrete to flow through intricate spaces and form a good bond with the steel. For the steel to start to resist tensile loads it has to stretch a little; if the concrete around the steel also stretches it may crack. The concrete has little reliable tensile strength and is intended to protect the steel. The concrete can be used more efficiently if the steel reinforcement, in the form of cables or rods, is tensioned, either before the concrete has set or after the concrete has set but before it starts to carry its eventual live loads. The concrete is forced into compression by the stretched steel. […] Such prestressed concrete gives amazing possibilities for very slender and daring structures […] the concrete must be able to withstand the tension in the steel, whether or not the full working loads are being applied. For an arch bridge made from prestressed concrete, the prestress from the steel cables tries to lift up the concrete and reduce the span whereas the traffic loads on the bridge are trying to push it down and increase the span. 
The location and amount of the prestress has to be chosen to provide the optimum use of the available strength under all possible load combinations. The pressure vessels used to contain the central reactor of a nuclear power station provide a typical example of the application of prestressed concrete.”

“There are many civil engineering contributions required in the several elements of [a] power station […]. The electricity generation side of a nuclear power station is subject to exactly the same design constraints as any other power station. Pipework leading the steam and water through the plant has to be able to cope with severe temperature variations, rotating machinery requires foundations which not only have to be precisely aligned but also have to be able to tolerate the high frequency vibrations arising from the rotations. Residual small out-of-balance forces, transmitted to the foundation continuously over long periods, could degrade the stiffness of the ground. Every system has its resonant frequency at which applied cyclic loads will tend to be amplified, possibly uncontrollably, unless prevented by the damping properties of the foundation materials. Even if the rotating machinery is being operated well away from any resonant frequency under normal conditions, there will be start-up periods in which the frequency sweeps up from stationary, zero frequency, and so an undesirable resonance may be triggered on the way”.
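
The start-up resonance problem can be sketched with the simplest possible model: a single mass on a foundation of stiffness k has natural frequency √(k/m)/2π, and any machine running above that frequency must sweep through resonance on its way up from rest. All numbers below are illustrative assumptions.

```python
import math

# One-degree-of-freedom sketch of the start-up resonance issue described
# above. Stiffness and mass values are illustrative assumptions.

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

k = 2.0e9      # N/m, stiff machine foundation (assumed)
m = 200_000.0  # kg, turbine plus foundation block (assumed)
f_n = natural_frequency_hz(k, m)   # ~16 Hz
running_speed_hz = 50.0            # a 3,000 rpm turbine

# Because the running speed is above f_n, every start-up sweeps the
# excitation frequency through resonance on the way up from zero:
crosses_resonance = f_n < running_speed_hz
```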

“The material which we see so often on modern road surfaces, […] asphalt […], was introduced in the early 20th century. Binding together the surface layers of stones with bitumen or tar gave the running surface a better strength. Tar is a viscous material which deforms with time under load; ruts may form, particularly in hot weather. Special treatments can be used for the asphalt to reduce the surface noise made by tyres; porous asphalt can encourage drainage. On the other hand, a running surface that is more resistant to traffic loading can be provided with a concrete slab reinforced with a crisscross steel mesh to maintain its integrity between deliberately inserted construction joints, so that any cracking resulting from seasonal thermal contraction occurs at locations chosen by the engineer rather than randomly across the concrete slab. The initial costs of concrete road surfaces are higher than the asphalt alternatives but the full-life costs may be lower.”

“A good supply of fresh water is one essential element of civilized infrastructure; some control of the waste water from houses and industries is another. The two are, of course, not completely independent since one of the desirable requirements of a source of fresh water is that it should not have been contaminated with waste before it reaches its destination of consumption: hence the preference for long aqueducts or pipelines starting from natural springs, rather than taking water from rivers which were probably already contaminated by upstream conurbations. It is curious how often in history this lesson has had to be relearnt.”

“The object of controlled disposal is the same as for nuclear waste: to contain it and prevent any of the toxic constituents from finding their way into the food chain or into water supplies. Simply to remove everything that could possibly be contaminated and dump it to landfill seems the easy option, particularly if use can be made of abandoned quarries or other holes in the ground. But the quantities involved make this an unsustainable long-term proposition. Cities become surrounded with artificial hills of uncertain composition which are challenging to develop for industrial or residential purposes because decomposing waste often releases gases which may be combustible (and useful) or poisonous; because waste often contains toxic substances which have to be prevented from finding pathways to man either upwards to the air or sideways towards water supplies; because the properties of waste (whether or not decomposed or decomposing) are not easy to determine and probably not particularly desirable from an engineering point of view; and because developers much prefer greenfield sites to sites of uncertainty and contamination.”

“There are regularly more or less serious floods in different parts of the world. Some of these are simply the result of unusually high quantities of rainfall which overload the natural river channels, often exacerbated by changes in land use (such as the felling of areas of forest) which encourage more rapid runoff or impose a man-made canalization of the river (by building on flood plains into which the rising river would previously have been able to spill) […]. Some of the incidents are the result of unusual encroachments by the sea, a consequence of a combination of high tide and adverse wind and weather conditions. The potential for disastrous consequences is of course enhanced when both on-shore and off-shore circumstances combine. […] Folk memory for natural disasters tends to be quite short. If the interval between events is typically greater than, say, 5–10 years people may assume that such events are extraordinary and rare. They may suppose that building on the recently flooded plains will be safe for the foreseeable future.”

Links:

Civil engineering.
École Nationale des Ponts et Chaussées.
Institution of Civil Engineers.
Christopher Wren. John Smeaton. Thomas Telford. William Rankine.
Leaning Tower of Pisa.
Cruck. Trabeated system. Corbel. Voussoir. Flange. I-beam.
Hardwick Hall. Blackfriars Bridge. Forth Bridge. Sydney Harbour Bridge.
Gothic architecture.
Buckling.
Pozzolana. Concrete. Grout.
Gravity dam. Arch dam. Hoover Dam. Malpasset Dam.
Torness Nuclear Power Station.
Plastic. Carbon fiber reinforced polymer.
Roman roads. Via Appia.
Sanitation.
Aqueduct. Pont du Gard.
Charles Yelverton O’Connor. Goldfields Water Supply Scheme.
1854 Broad Street cholera outbreak. John Snow. Great Stink of 1858. Joseph Bazalgette.
Brent Spar.
Clywedog Reservoir.
Acqua alta.
North Sea flood of 1953. Hurricane Katrina.
Delta Works. Oosterscheldekering. Thames Barrier.
Groyne. Breakwater.

December 20, 2017 Posted by | Books, Economics, Engineering, Geology | Leave a comment

Nuclear Power (II)

This is my second and last post about the book. Some more links and quotes below.

“Many of the currently operating reactors were built in the late 1960s and 1970s. With a global hiatus on nuclear reactor construction following the Three Mile Island incident and the Chernobyl disaster, there is a dearth of nuclear power replacement capacity as the present fleet faces decommissioning. Nuclear power stations, like coal-, gas-, and oil-fired stations, produce heat to generate electricity and all require water for cooling. The US Geological Survey estimates that this use of water for cooling power stations accounts for over 3% of all water consumption. Most nuclear power plants are built close to the sea so that the ocean can be used as a heat dump. […] The need for such large quantities of water inhibits the use of nuclear power in arid regions of the world. […] The higher the operating temperature, the greater the water usage. […] [L]arge coal, gas and nuclear plants […] can consume millions of litres per hour”.

“A nuclear reactor is utilizing the strength of the force between nucleons while hydrocarbon burning is relying on the chemical bonding between molecules. Since the nuclear bonding is of the order of a million times stronger than the chemical bonding, the mass of hydrocarbon fuel necessary to produce a given amount of energy is about a million times greater than the equivalent mass of nuclear fuel. Thus, while a coal station might burn millions of tonnes of coal per year, a nuclear station with the same power output might consume a few tonnes.”
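
The "millions of tonnes versus a few tonnes" comparison is easy to reproduce as an order-of-magnitude check. The energy densities below are approximate textbook figures (assumptions): coal roughly 29 MJ/kg, complete fission of uranium-235 roughly 8.2 × 10⁷ MJ/kg (about 200 MeV per fission).

```python
# Order-of-magnitude check of the 'million times' claim in the quote above.
# Energy densities are approximate textbook figures, not the book's numbers.

E_COAL_MJ_PER_KG = 29.0
E_FISSION_MJ_PER_KG = 8.2e7

ratio = E_FISSION_MJ_PER_KG / E_COAL_MJ_PER_KG   # a few million

# Heat needed per year for a 1 GW(e) station at ~35% thermal efficiency:
heat_needed_mj = 1.0e9 * 3.15e7 / 0.35 / 1e6     # ~9e10 MJ
coal_tonnes = heat_needed_mj / E_COAL_MJ_PER_KG / 1000  # millions of tonnes
uranium_kg = heat_needed_mj / E_FISSION_MJ_PER_KG       # ~1 tonne fissioned
```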

“There are a number of reasons why one might wish to reprocess the spent nuclear fuel. These include: to produce plutonium either for nuclear weapons or, increasingly, as a fuel-component for fast reactors; the recycling of all actinides for fast-breeder reactors, closing the nuclear fuel cycle, greatly increasing the energy extracted from natural uranium; the recycling of plutonium in order to produce mixed oxide fuels for thermal reactors; recovering enriched uranium from spent fuel to be recycled through thermal reactors; to extract expensive isotopes which are of value to medicine, agriculture, and industry. An integral part of this process is the management of the radioactive waste. Currently 40% of all nuclear fuel is obtained by reprocessing. […] The La Hague site is the largest reprocessing site in the world, with over half the global capacity at 1,700 tonnes of spent fuel per year. […] The world’s largest user of nuclear power, the USA, currently does not reprocess its fuel and hence produces [large] quantities of radioactive waste. […] The principal reprocessors of radioactive waste are France and the UK. Both countries receive material from other countries and after reprocessing return the raffinate to the country of origin for final disposition.”

“Nearly 45,000 tonnes of uranium are mined annually. More than half comes from the three largest producers, Canada, Kazakhstan, and Australia.”

“The designs of nuclear installations are required to be passed by national nuclear licensing agencies. These include strict safety and security features. The international standard for the integrity of a nuclear power plant is that it would withstand the crash of a Boeing 747 Jumbo Jet without the release of hazardous radiation beyond the site boundary. […] At Fukushima, the design was to current safety standards, taking into account the possibility of a severe earthquake; what had not been allowed for was the simultaneous tsunami strike.”

“The costing of nuclear power is notoriously controversial. Opponents point to the past large investments made in nuclear research and would like to factor this into the cost. There are always arguments about whether or not decommissioning costs and waste-management costs have been properly accounted for. […] which electricity source is most economical will vary from country to country […]. As with all industrial processes, there can be economies of scale. In the USA, and particularly in the UK, these economies of scale were never fully realized. In the UK, while several Magnox and AGR reactors were built, no two were of exactly the same design, resulting in no economies in construction costs, component manufacture, or staff training programmes. The issue is compounded by the high cost of licensing new designs. […] in France, the Regulatory Commission agreed a standard design for all plants and used a safety engineering process similar to that used for licensing aircraft. Public debate was thereafter restricted to local site issues. Economies of scale were achieved.”

“[C]onstruction costs […] are the largest single factor in the cost of nuclear electricity generation. […] Because the raw fuel is such a small fraction of the cost of nuclear power generation, the cost of electricity is not very sensitive to the cost of uranium, unlike the fossil fuels, for which fuel can represent up to 70% of the cost. Operating costs for nuclear plants have fallen dramatically as the French practice of standardization of design has spread. […] Generation III+ reactors are claimed to be half the size and capable of being built in much shorter times than the traditional PWRs. The 2008 contracted capital cost of building new plants containing two AP1000 reactors in the USA is around $10–$14 billion, […] There is considerable experience of decommissioning of nuclear plants. In the USA, the cost of decommissioning a power plant is approximately $350 million. […] In France and Sweden, decommissioning costs are estimated to be 10–15% of construction costs and are included in the price charged for electricity. […] The UK has by far the highest estimates for decommissioning which are set at £1 billion per reactor. This exceptionally high figure is in part due to the much larger reactor core associated with graphite moderated piles. […] It is clear that in many countries nuclear-generated electricity is commercially competitive with fossil fuels despite the need to include the cost of capital and all waste disposal and decommissioning (factors that are not normally included for other fuels). […] At the present time, without the distortion of taxes and grants, electricity generated from renewable sources is generally more expensive than that from nuclear power or fossil fuels. This leaves the question: if nuclear power is so competitive, why is there not a global rush to build new nuclear power stations? The answer lies in the time taken to recoup investments. Investors in a new gas-fired power station can expect to recover their investment within 15 years. 
Because of the high capital start-up costs, nuclear power stations yield a slower rate of return, even though over the lifetime of the plant the return may be greater.”
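
The investment argument in the quote can be made concrete with a toy payback calculation. All cost figures below are invented round numbers for illustration only, not figures from the book.

```python
# Toy payback-time comparison: capital-light/fuel-heavy gas versus
# capital-heavy/fuel-light nuclear. All figures are made-up round numbers.

def payback_years(capital_cost, annual_revenue, annual_running_cost):
    net = annual_revenue - annual_running_cost
    return capital_cost / net

# Per GW of capacity (illustrative):
gas_payback = payback_years(1.0e9, 4.0e8, 3.0e8)      # cheap to build
nuclear_payback = payback_years(6.0e9, 4.0e8, 1.0e8)  # cheap to run

# Over a long plant life the nuclear plant's total return can still be
# larger, even though the investment is recovered more slowly.
```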

“Throughout the 20th century, the population and GDP growth combined to drive the [global] demand for energy to increase at a rate of 4% per annum […]. The most conservative estimate is that the demand for energy will see global energy requirements double between 2000 and 2050. […] The demand for electricity is growing at twice the rate of the demand for energy. […] More than two-thirds of all electricity is generated by burning fossil fuels. […] The most rapidly growing renewable source of electricity generation is wind power […] wind is an intermittent source of electricity. […] The intermittency of wind power leads to [a] problem. The grid management has to supply a steady flow of electricity. Intermittency requires a heavy overhead on grid management, and there are serious concerns about the ability of national grids to cope with more than a 20% contribution from wind power. […] As for the other renewables, solar and geothermal power, significant electricity generation will be restricted to latitudes 40°S to 40°N and regions of suitable geological structures, respectively. Solar power and geothermal power are expected to increase but will remain a small fraction of the total electricity supply. […] In most industrialized nations, the current electricity supply is via a regional, national, or international grid. The electricity is generated in large (~1GW) power stations. This is a highly efficient means of electricity generation and distribution. If the renewable sources of electricity generation are to become significant, then a major restructuring of the distribution infrastructure will be necessary. While local ‘microgeneration’ can have significant benefits for small communities, it is not practical for the large-scale needs of big industrial cities in which most of the world’s population live.”
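
The growth figures in the quote translate into doubling times via ordinary compound-growth arithmetic:

```python
import math

# Compound-growth arithmetic behind the demand figures quoted above.

def doubling_time_years(annual_growth_rate):
    return math.log(2) / math.log(1 + annual_growth_rate)

t_4pct = doubling_time_years(0.04)   # ~18 years at the 20th-century 4%/yr

# A doubling between 2000 and 2050 corresponds to about 1.4% per year:
implied_rate = 2 ** (1 / 50) - 1
```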

“Electricity cannot be stored in large quantities. If the installed generating capacity is designed to meet peak demand, there will be periods when the full capacity is not required. In most industrial countries, the average demand is only about one-third of peak consumption.”

Links:

Nuclear reprocessing. La Hague site. Radioactive waste. Yucca Mountain nuclear waste repository.
Bismuth phosphate process.
Nuclear decommissioning.
Uranium mining. Open-pit mining.
Wigner effect (Wigner heating). Windscale fire. Three Mile Island accident. Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Fail-safe (engineering).
Treaty on the Non-Proliferation of Nuclear Weapons.
Economics of nuclear power plants.
Fusion power. Tokamak. ITER. High Power laser Energy Research facility (HiPER).
Properties of plasma.
Klystron.
World energy consumption by fuel source. Renewable energy.


December 16, 2017 Posted by | Books, Chemistry, Economics, Engineering, Physics | Leave a comment

Nuclear power (I)

I originally gave the book 2 stars, but after finishing this post I changed that rating to 3 stars (not all that surprising; already when I wrote my goodreads review shortly after reading the book, I was conflicted about whether or not it deserved the third star). One thing that kept me from giving the book a higher rating was that I thought the author did not spend enough time on ‘the basic concepts’, a problem I also highlighted in my goodreads review. Fortunately I had recently covered some of those concepts in other books in the series, so it wasn’t too hard for me to follow what was going on, but as sometimes happens for authors of books in this series, I think the author was simply trying to cover too much stuff. Even so, this is a nice introductory text on the topic.

I have added some links and quotes related to the first half or so of the book below. I prepared the link list before I started gathering quotes, so there may be more overlap than usual between the topics covered in the quotes and in the links (I normally reserve the links for topics and concepts covered in these books which I don’t find it necessary to discuss in detail in the text – the links are meant to indicate which sorts of topics the book covers aside from those included in the quoted passages).

“According to Einstein’s mass–energy equation, the mass of any composite stable object has to be less than the sum of the masses of the parts; the difference is the binding energy of the object. […] The general features of the binding energies are simply understood as follows. We have seen that the measured radii of nuclei [increase] with the cube root of the mass number A. This is consistent with a structure of close packed nucleons. If each nucleon could only interact with its closest neighbours, the total binding energy would then itself be proportional to the number of nucleons. However, this would be an overestimate because nucleons at the surface of the nucleus would not have a complete set of nearest neighbours with which to interact […]. The binding energy would be reduced by the number of surface nucleons and this would be proportional to the surface area, itself proportional to A^(2/3). So far we have considered only the attractive short-range nuclear binding. However, the protons carry an electric charge and hence experience an electrical repulsion between each other. The electrical force between two protons is much weaker than the nuclear force at short distances but dominates at larger distances. Furthermore, the total electrical contribution increases with the number of pairs of protons.”

“The main characteristics of the empirical binding energy of nuclei […] can now be explained. For the very light nuclei, all the nucleons are in the surface, the electrical repulsion is negligible, and the binding energy increases as the volume and number of nucleons increases. Next, the surface effects start to slow the rate of growth of the binding energy yielding a region of most stable nuclei near charge number Z = 26 (iron). Finally, the electrical repulsion steadily increases until we reach the most massive stable nucleus (lead-208). Between iron and lead, not only does the binding energy decrease, so also do the proton to neutron ratios since the neutrons do not experience the electrical repulsion. […] as the nuclei get heavier the Coulomb repulsion term requires an increasing number of neutrons for stability […] For an explanation of [the] peaks, we must turn to the quantum nature of the problem. […] Filled shells corresponded to particularly stable electronic structures […] In the nuclear case, a shell structure also exists separately for both the neutrons and the protons. […] Closed-shell nuclei are referred to as ‘magic number’ nuclei. […] there is a particular stability for nuclei with equal numbers of protons and neutrons.”
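
The volume, surface, Coulomb, asymmetry, and pairing effects described in the two quotes above are the terms of the semi-empirical mass formula. A minimal sketch; the coefficient values (in MeV) are standard fitted figures from the literature, not taken from this book.

```python
# Semi-empirical mass formula sketch: binding energy in MeV for a nucleus
# with Z protons and mass number A. Coefficients are standard fitted values.

def binding_energy_mev(Z, A):
    N = A - Z
    volume    = 15.75 * A                            # bulk nucleon bonding
    surface   = 17.8  * A ** (2 / 3)                 # fewer neighbours at surface
    coulomb   = 0.711 * Z * (Z - 1) / A ** (1 / 3)   # proton-proton repulsion
    asymmetry = 23.7  * (N - Z) ** 2 / A             # penalty for unequal N, Z
    if A % 2 == 1:
        pairing = 0.0
    elif Z % 2 == 0:
        pairing = +11.18 / A ** 0.5   # even-even nuclei are extra stable
    else:
        pairing = -11.18 / A ** 0.5   # odd-odd nuclei are less stable
    return volume - surface - coulomb - asymmetry + pairing

# Binding energy per nucleon peaks near iron and falls off towards lead:
b_iron = binding_energy_mev(26, 56) / 56     # ~8.8 MeV per nucleon
b_lead = binding_energy_mev(82, 208) / 208   # ~7.9 MeV per nucleon
```

The decrease from iron to lead is exactly the Coulomb effect described in the quote: the repulsion term grows with the number of proton pairs while the short-range attraction does not.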

“As we move off the line of stable nuclei, by adding or subtracting neutrons, the isotopes become increasingly less stable indicated by increasing levels of beta radioactivity. Nuclei with a surfeit of neutrons emit an electron, hence converting one of the neutrons into a proton, while isotopes with a neutron deficiency can emit a positron with the conversion of a proton into a neutron. For the heavier nuclei, the neutron to proton ratio can be reduced by emitting an alpha particle. All nuclei heavier than lead are unstable and hence radioactive alpha emitters. […] The fact that almost all the radioactive isotopes heavier than lead follow [a] kind of decay chain and end up as stable isotopes of lead explains this element’s anomalously high natural abundance.”

“When two particles collide, they transfer energy and momentum between themselves. […] If the target is much lighter than the projectile, the projectile sweeps it aside with little loss of energy and momentum. If the target is much heavier than the projectile, the projectile simply bounces off the target with little loss of energy. The maximum transfer of energy occurs when the target and the projectile have the same mass. In trying to slow down the neutrons, we need to pass them through a moderator containing scattering centres of a similar mass. The obvious candidate is hydrogen, in which the single proton of the nucleus is the particle closest in mass to the neutron. At first glance, it would appear that water, with its low cost and high hydrogen content, would be the ideal moderator. There is a problem, however. Slow neutrons can combine with protons to form an isotope of hydrogen, deuterium. This removes neutrons from the chain reaction. To overcome this, the uranium fuel has to be enriched by increasing the proportion of uranium-235; this is expensive and technically difficult. An alternative is to use heavy water, that is, water in which the hydrogen is replaced by deuterium. It is not quite as effective as a moderator but it does not absorb neutrons. Heavy water is more expensive and its production more technically demanding than natural water. Finally, graphite (carbon) has a mass of 12 and hence is less efficient requiring a larger reactor core, but it is inexpensive and easily available.”
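
The billiard-ball argument in the quote can be made quantitative. For a neutron (mass ≈ 1 u) hitting a nucleus of mass number A, the maximum fraction of energy transferred in a head-on elastic collision is 4A/(1+A)²; the standard "mean logarithmic energy decrement" then gives the average number of collisions needed to slow a fission neutron to thermal energy. The start and end energies below are typical assumed values.

```python
import math

# Elastic-collision arithmetic behind the choice of moderator.

def max_energy_transfer_fraction(A):
    """Max energy fraction a neutron loses in one head-on collision."""
    return 4 * A / (1 + A) ** 2

def xi(A):
    """Mean logarithmic energy loss per collision (standard formula)."""
    if A == 1:
        return 1.0
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1 + alpha * math.log(alpha) / (1 - alpha)

def collisions_to_thermalize(A, e_start_ev=2e6, e_end_ev=0.025):
    """Average collisions to slow a ~2 MeV fission neutron to thermal."""
    return math.log(e_start_ev / e_end_ev) / xi(A)

hydrogen_transfer = max_energy_transfer_fraction(1)   # 1.0: equal masses
carbon_transfer = max_energy_transfer_fraction(12)    # ~0.28

n_hydrogen = collisions_to_thermalize(1)   # ~18 collisions
n_carbon = collisions_to_thermalize(12)    # ~115: hence the larger core
```

The factor-of-six difference in collision count is why a graphite-moderated reactor needs a much larger core than a water-moderated one, as the quote notes.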

“[During the Manhattan Project,] Oak Ridge, Tennessee, was chosen as the facility to develop techniques for uranium enrichment (increasing the relative abundance of uranium-235) […] a giant gaseous diffusion facility was developed. Gaseous uranium hexafluoride was forced through a semi-permeable membrane. The lighter isotopes passed through faster and at each pass through the membrane the uranium hexafluoride became more and more enriched. The technology is very energy-consuming […]. At its peak, Oak Ridge consumed more electricity than New York and Washington DC combined. Almost one-third of all enriched uranium is still produced by this now obsolete technology. The bulk of enriched uranium today is produced in high-speed centrifuges which require much less energy.”
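
The enormous energy appetite of gaseous diffusion follows from the physics: the ideal single-stage separation factor is only the square root of the mass ratio of the two uranium hexafluoride molecules (²³⁸UF₆ at 352 u versus ²³⁵UF₆ at 349 u), so hundreds of stages are needed even in the ideal case. A sketch of that arithmetic (an idealized cascade, not the book's figures):

```python
import math

# Why gaseous diffusion needs so many stages: the ideal per-stage
# separation factor for UF6 is sqrt(352/349), barely above 1.

ALPHA = math.sqrt(352 / 349)   # ~1.0043 enrichment per stage, at best

def ideal_stages(start_fraction, target_fraction):
    """Stages needed in an ideal cascade, working with abundance ratios."""
    r0 = start_fraction / (1 - start_fraction)
    rt = target_fraction / (1 - target_fraction)
    return math.log(rt / r0) / math.log(ALPHA)

# Natural uranium (0.711% U-235) to typical reactor fuel (3.5%):
n_stages = ideal_stages(0.00711, 0.035)   # several hundred stages, minimum
```

Real cascades fall short of the ideal factor, so actual plants needed even more stages, each one pumping corrosive gas against a membrane; centrifuges achieve a much larger factor per stage, which is why they displaced diffusion.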

“In order to sustain a nuclear chain reaction, it is essential to have a critical mass of fissile material. This mass depends upon the fissile fuel being used and the topology of the structure containing it. […] The chain reaction is maintained by the neutrons and many of these leave the surface without contributing to the reaction chain. Surrounding the fissile material with a blanket of neutron reflecting material, such as beryllium metal, will keep the neutrons in play and reduce the critical mass. Partially enriched uranium will have an increased critical mass and natural uranium (0.7% uranium-235) will not go critical at any mass without a moderator to increase the number of slow neutrons which are the dominant fission triggers. The critical mass can also be decreased by compressing the fissile material.”

“It is now more than 50 years since operations of the first civil nuclear reactor began. In the intervening years, several hundred reactors have been operating, in total amounting to nearly 50 million hours of experience. This cumulative experience has led to significant advances in reactor design. Different reactor types are defined by their choice of fuel, moderator, control rods, and coolant systems. The major advances leading to greater efficiency, increased economy, and improved safety are referred to as ‘generations’. […] [F]irst generation reactors […] had the dual purpose to make electricity for public consumption and plutonium for the Cold War stockpiles of nuclear weapons. Many of the features of the design were incorporated to meet the need for plutonium production. These impacted on the electricity-generating cost and efficiency. The most important of these was the use of unenriched uranium due to the lack of large-scale enrichment plants in the UK, and the high uranium-238 content was helpful in the plutonium production but made the electricity generation less efficient.”

“PWRs, BWRs, and VVERs are known as LWRs (Light Water Reactors). LWRs dominate the world’s nuclear power programme, with the USA operating 69 PWRs and 35 BWRs; Japan operates 63 LWRs, the bulk of which are BWRs; and France has 59 PWRs. Between them, these three countries generate 56% of the world’s nuclear power. […] In the 1990s, a series of advanced versions of the Generation II and III reactors began to receive certification. These included the ACR (Advanced CANDU Reactor), the EPR (European Pressurized Reactor), and Westinghouse AP1000 and APR1400 reactors (all developments of the PWR) and ESBWR (a development of the BWR). […] The ACR uses slightly enriched uranium and a light water coolant, allowing the core to be halved in size for the same power output. […] It would appear that two of the Generation III+ reactors, the EPR […] and AP1000, are set to dominate the world market for the next 20 years. […] the EPR is considerably safer than current reactor designs. […] A major advance is that the generation 3+ reactors produce only about 10% of waste compared with earlier versions of LWRs. […] China has officially adopted the AP1000 design as a standard for future nuclear plants and has indicated a wish to see 100 nuclear plants under construction or in operation by 2020.”

“All thermal electricity-generating systems are examples of heat engines. A heat engine takes energy from a high-temperature environment to a low-temperature environment and in the process converts some of the energy into mechanical work. […] In general, the efficiency of the thermal cycle increases as the temperature difference between the low-temperature environment and the high-temperature environment increases. In PWRs, and nearly all thermal electricity-generating plants, the efficiency of the thermal cycle is 30–35%. At the much higher operating temperatures of Generation IV reactors, typically 850–1,000°C, it is hoped to increase this to 45–50%.
During the operation of a thermal nuclear reactor, there can be a build-up of fission products known as reactor poisons. These are materials with a large capacity to absorb neutrons and this can slow down the chain reaction; in extremes, it can lead to a complete close-down. Two important poisons are xenon-135 and samarium-149. […] During steady state operation, […] xenon builds up to an equilibrium level in 40–50 hours when a balance is reached between […] production […] and the burn-up of xenon by neutron capture. If the power of the reactor is increased, the amount of xenon increases to a higher equilibrium and the process is reversed if the power is reduced. If the reactor is shut down the burn-up of xenon ceases, but the build-up of xenon continues from the decay of iodine. Restarting the reactor is impeded by the higher level of xenon poisoning. Hence it is desirable to keep reactors running at full capacity as long as possible and to have the capacity to reload fuel while the reactor is on line. […] Nuclear plants operate at highest efficiency when operated continually close to maximum generating capacity. They are thus ideal for provision of base load. If their output is significantly reduced, then the build-up of reactor poisons can impact on their efficiency.”
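
The efficiency figures in the quote above sit below the thermodynamic ceiling set by the Carnot bound, η = 1 − T_cold/T_hot (temperatures in kelvin); real steam cycles reach roughly half to two-thirds of that bound. The operating temperatures below are typical assumed values, not the book's.

```python
# Carnot bound behind the quoted efficiency figures. Temperatures are
# typical assumed values for each reactor generation.

def carnot_efficiency(t_hot_c, t_cold_c):
    return 1 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

eta_pwr = carnot_efficiency(320, 30)    # ~0.49 bound; PWRs achieve 30-35%
eta_gen4 = carnot_efficiency(900, 30)   # ~0.74 bound; Gen IV hopes for 45-50%
```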

Links:

Radioactivity. Alpha decay. Beta decay. Gamma decay. Free neutron decay.
Periodic table.
Rutherford scattering.
Isotope.
Neutrino. Positron. Antineutrino.
Binding energy.
Mass–energy equivalence.
Electron shell.
Decay chain.
Heisenberg uncertainty principle.
Otto Hahn. Lise Meitner. Fritz Strassman. Enrico Fermi. Leo Szilárd. Otto Frisch. Rudolf Peierls.
Uranium 238. Uranium 235. Plutonium.
Nuclear fission.
Chicago Pile 1.
Manhattan Project.
Uranium hexafluoride.
Heavy water.
Nuclear reactor coolant. Control rod.
Critical mass. Nuclear chain reaction.
Magnox reactor. UNGG reactor. CANDU reactor.
ZEEP.
Nuclear reactor classifications (a lot of the distinctions included in this article are also included in the book and described in some detail. The topics included here are also covered extensively).
USS Nautilus.
Nuclear fuel cycle.
Thorium-based nuclear power.
Heat engine. Thermodynamic cycle. Thermal efficiency.
Reactor poisoning. Xenon 135. Samarium 149.
Base load.

December 7, 2017 Posted by | Books, Chemistry, Engineering, Physics | Leave a comment

Radioactivity

A few quotes from the book and some related links below. Here’s my very short goodreads review of the book.

Quotes:

“The main naturally occurring radionuclides of primordial origin are uranium-235, uranium-238, thorium-232, their decay products, and potassium-40. The average abundance of uranium, thorium, and potassium in the terrestrial crust is 2.6 parts per million, 10 parts per million, and 1% respectively. Uranium and thorium produce other radionuclides via neutron- and alpha-induced reactions, particularly deeply underground, where uranium and thorium have a high concentration. […] A weak source of natural radioactivity derives from nuclear reactions of primary and secondary cosmic rays with the atmosphere and the lithosphere, respectively. […] Accretion of extraterrestrial material, intensively exposed to cosmic rays in space, represents a minute contribution to the total inventory of radionuclides in the terrestrial environment. […] Natural radioactivity is [thus] mainly produced by uranium, thorium, and potassium. The total heat content of the Earth, which derives from this radioactivity, is 12.6 × 10²⁴ MJ (one megajoule = 1 million joules), with the crust’s heat content standing at 5.4 × 10²¹ MJ. For comparison, this is significantly more than the 6.4 × 10¹³ MJ globally consumed for electricity generation during 2011. This energy is dissipated, either gradually or abruptly, towards the external layers of the planet, but only a small fraction can be utilized. The amount of energy available depends on the Earth’s geological dynamics, which regulates the transfer of heat to the surface of our planet. The total power dissipated by the Earth is 42 TW (one TW = 1 trillion watts): 8 TW from the crust, 32.3 TW from the mantle, 1.7 TW from the core. This amount of power is small compared to the 174,000 TW arriving to the Earth from the Sun.”
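
The comparisons in the quote are worth reproducing as arithmetic (the figures below are the ones quoted from the book):

```python
# Arithmetic behind the comparisons in the quote above; all input
# figures are the book's, as quoted.

CRUST_HEAT_MJ = 5.4e21
ELECTRICITY_2011_MJ = 6.4e13

# The crust's heat content alone dwarfs a year of global electricity use:
ratio_crust = CRUST_HEAT_MJ / ELECTRICITY_2011_MJ   # tens of millions

# Geothermal power flow versus incoming solar power:
GEOTHERMAL_TW = 42.0
SOLAR_TW = 174_000.0
solar_ratio = SOLAR_TW / GEOTHERMAL_TW              # ~4,000x
```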

“Charged particles such as protons, beta and alpha particles, or heavier ions that bombard human tissue dissipate their energy locally, interacting with the atoms via the electromagnetic force. This interaction ejects electrons from the atoms, creating a track of electron–ion pairs, or ionization track. The energy that ions lose per unit path, as they move through matter, increases with the square of their charge and decreases linearly with their energy […] The energy deposited in the tissues and organs of your body by ionizing radiation is defined absorbed dose and is measured in gray. The dose of one gray corresponds to the energy of one joule deposited in one kilogram of tissue. The biological damage wrought by a given amount of energy deposited depends on the kind of ionizing radiation involved. The equivalent dose, measured in sievert, is the product of the dose and a factor w related to the effective damage induced into the living matter by the deposit of energy by specific rays or particles. For X-rays, gamma rays, and beta particles, a gray corresponds to a sievert; for neutrons, a dose of one gray corresponds to an equivalent dose of 5 to 20 sievert, and the factor w is equal to 5–20 (depending on the neutron energy). For protons and alpha particles, w is equal to 5 and 20, respectively. There is also another weighting factor taking into account the radiosensitivity of different organs and tissues of the body, to evaluate the so-called effective dose. Sometimes the dose is still quoted in rem, the old unit, with 100 rem corresponding to one sievert.”
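The dose arithmetic in that passage can be sketched in a few lines. The weighting factors below follow the quote (1 for photons and betas, 5 for protons, 20 for alphas; neutrons span 5–20 depending on energy):

```python
# Equivalent dose (sievert) = absorbed dose (gray) x weighting factor w,
# with w values as given in the quote above.

WEIGHT = {"gamma": 1, "x-ray": 1, "beta": 1, "proton": 5, "alpha": 20}

def equivalent_dose_sv(absorbed_gy: float, radiation: str) -> float:
    """Equivalent dose in Sv from an absorbed dose in Gy."""
    return absorbed_gy * WEIGHT[radiation]

# One gray of gamma rays is one sievert...
print(equivalent_dose_sv(1.0, "gamma"))   # 1.0
# ...but one gray of alpha particles counts as 20 sievert.
print(equivalent_dose_sv(1.0, "alpha"))   # 20.0
# Old unit: 100 rem = 1 Sv, so 1 rem = 0.01 Sv.
print(100 * 0.01)
```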

“Neutrons emitted during fission reactions have a relatively high velocity. When still in Rome, Fermi had discovered that fast neutrons needed to be slowed down to increase the probability of their reaction with uranium. The fission reaction occurs with uranium-235. Uranium-238, the most common isotope of the element, merely absorbs the slow neutrons. Neutrons slow down when they are scattered by nuclei with a similar mass. The process is analogous to the interaction between two billiard balls in a head-on collision, in which the incoming ball stops and transfers all its kinetic energy to the second one. ‘Moderators’, such as graphite and water, can be used to slow neutrons down. […] When Fermi calculated whether a chain reaction could be sustained in a homogeneous mixture of uranium and graphite, he got a negative answer. That was because most neutrons produced by the fission of uranium-235 were absorbed by uranium-238 before inducing further fissions. The right approach, as suggested by Szilárd, was to use separated blocks of uranium and graphite. Fast neutrons produced by the splitting of uranium-235 in the uranium block would slow down, in the graphite block, and then produce fission again in the next uranium block. […] A minimum mass – the critical mass – is required to sustain the chain reaction; furthermore, the material must have a certain geometry. The fissile nuclides, capable of sustaining a chain reaction of nuclear fission with low-energy neutrons, are uranium-235 […], uranium-233, and plutonium-239. The last two don’t occur in nature but can be produced artificially by irradiating with neutrons thorium-232 and uranium-238, respectively – via a reaction called neutron capture. Uranium-238 (99.27%) is fissionable, but not fissile. In a nuclear weapon, the chain reaction occurs very rapidly, releasing the energy in a burst.”
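The billiard-ball analogy can be made quantitative. In an elastic head-on collision, a neutron transfers a fraction 4A/(1+A)² of its kinetic energy to a nucleus of mass number A — a standard result that is not quoted in the book, added here only to illustrate why nuclei of similar mass moderate best:

```python
# Fraction of a neutron's kinetic energy transferred to a nucleus of
# mass number A in an elastic head-on collision: 4A / (1 + A)^2.
# (Standard kinematics, not a formula from the book.)

def max_energy_transfer_fraction(A: float) -> float:
    return 4 * A / (1 + A) ** 2

print(max_energy_transfer_fraction(1))              # hydrogen: 1.0, the equal billiard balls
print(round(max_energy_transfer_fraction(12), 3))   # carbon (graphite): ~0.284
print(round(max_energy_transfer_fraction(238), 4))  # uranium-238: ~0.0167
```

Equal masses (hydrogen) give full energy transfer, graphite still takes a sizeable bite, while a heavy uranium nucleus barely slows the neutron at all.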

“The basic components of nuclear power reactors, fuel, moderator, and control rods, are the same as in the first system built by Fermi, but the design of today’s reactors includes additional components such as a pressure vessel, containing the reactor core and the moderator, a containment vessel, and redundant and diverse safety systems. Recent technological advances in material developments, electronics, and information technology have further improved their reliability and performance. […] The moderator to slow down fast neutrons is sometimes still the graphite used by Fermi, but water, including ‘heavy water’ – in which the water molecule has a deuterium atom instead of a hydrogen atom – is more widely used. Control rods contain a neutron-absorbing material, such as boron or a combination of indium, silver, and cadmium. To remove the heat generated in the reactor core, a coolant – either a liquid or a gas – is circulating through the reactor core, transferring the heat to a heat exchanger or directly to a turbine. Water can be used as both coolant and moderator. In the case of boiling water reactors (BWRs), the steam is produced in the pressure vessel. In the case of pressurized water reactors (PWRs), the steam generator, which is the secondary side of the heat exchanger, uses the heat produced by the nuclear reactor to make steam for the turbines. The containment vessel is a one-metre-thick concrete and steel structure that shields the reactor.”

“Nuclear energy contributed 2,518 TWh of the world’s electricity in 2011, about 14% of the global supply. As of February 2012, there are 435 nuclear power plants operating in 31 countries worldwide, corresponding to a total installed capacity of 368,267 MW (electrical). There are 63 power plants under construction in 13 countries, with a capacity of 61,032 MW (electrical).”

“Since the first nuclear fusion, more than 60 years ago, many have argued that we need at least 30 years to develop a working fusion reactor, and this figure has stayed the same throughout those years.”

“[I]onizing radiation is […] used to improve many properties of food and other agricultural products. For example, gamma rays and electron beams are used to sterilize seeds, flour, and spices. They can also inhibit sprouting and destroy pathogenic bacteria in meat and fish, increasing the shelf life of food. […] More than 60 countries allow the irradiation of more than 50 kinds of foodstuffs, with 500,000 tons of food irradiated every year. About 200 cobalt-60 sources and more than 10 electron accelerators are dedicated to food irradiation worldwide. […] With the help of radiation, breeders can increase genetic diversity to make the selection process faster. The spontaneous mutation rate (number of mutations per gene, for each generation) is in the range 10⁻⁸–10⁻⁵. Radiation can increase this mutation rate to 10⁻⁵–10⁻². […] Long-lived cosmogenic radionuclides provide unique methods to evaluate the ‘age’ of groundwaters, defined as the mean subsurface residence time after the isolation of the water from the atmosphere. […] Scientists can date groundwater more than a million years old, through chlorine-36, produced in the atmosphere by cosmic-ray reactions with argon.”
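The groundwater-dating idea is simple exponential decay: t = (t½ / ln 2) · ln(N₀/N). A minimal sketch — the chlorine-36 half-life of roughly 301,000 years is an assumption I've added, not a figure from the book:

```python
import math

# Radiometric 'age' from the decay of a cosmogenic nuclide:
#   t = (t_half / ln 2) * ln(N0 / N)
# The Cl-36 half-life (~301,000 years) is an added assumption.

def decay_age(ratio_remaining: float, half_life_years: float) -> float:
    return half_life_years / math.log(2) * math.log(1 / ratio_remaining)

CL36_HALF_LIFE = 3.01e5

# Water retaining 10% of its initial chlorine-36 is ~1 million years old,
# consistent with the quote's 'more than a million years'.
print(f"{decay_age(0.10, CL36_HALF_LIFE):,.0f} years")
```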

“Radionuclide imaging was developed in the 1950s using special systems to detect the emitted gamma rays. The gamma-ray detectors, called gamma cameras, use flat crystal planes, coupled to photomultiplier tubes, which send the digitized signals to a computer for image reconstruction. Images show the distribution of the radioactive tracer in the organs and tissues of interest. This method is based on the introduction of low-level radioactive chemicals into the body. […] More than 100 diagnostic tests based on radiopharmaceuticals are used to examine bones and organs such as lungs, intestines, thyroids, kidneys, the liver, and gallbladder. They exploit the fact that our organs preferentially absorb different chemical compounds. […] Many radiopharmaceuticals are based on technetium-99m (an excited state of technetium-99 – the ‘m’ stands for ‘metastable’ […]). This radionuclide is used for the imaging and functional examination of the heart, brain, thyroid, liver, and other organs. Technetium-99m is extracted from molybdenum-99, which has a much longer half-life and is therefore more transportable. It is used in 80% of the procedures, amounting to about 40,000 per day, carried out in nuclear medicine. Other radiopharmaceuticals include short-lived gamma-emitters such as cobalt-57, cobalt-58, gallium-67, indium-111, iodine-123, and thallium-201. […] Methods routinely used in medicine, such as X-ray radiography and CAT, are increasingly used in industrial applications, particularly in non-destructive testing of containers, pipes, and walls, to locate defects in welds and other critical parts of the structure.”

“Today, cancer treatment with radiation is generally based on the use of external radiation beams that can target the tumour in the body. Cancer cells are particularly sensitive to damage by ionizing radiation and their growth can be controlled or, in some cases, stopped. High-energy X-rays produced by a linear accelerator […] are used in most cancer therapy centres, replacing the gamma rays produced from cobalt-60. The LINAC produces photons of variable energy bombarding a target with a beam of electrons accelerated by microwaves. The beam of photons can be modified to conform to the shape of the tumour, which is irradiated from different angles. The main problem with X-rays and gamma rays is that the dose they deposit in the human tissue decreases exponentially with depth. A considerable fraction of the dose is delivered to the surrounding tissues before the radiation hits the tumour, increasing the risk of secondary tumours. Hence, deep-seated tumours must be bombarded from many directions to receive the right dose, while minimizing the unwanted dose to the healthy tissues. […] The problem of delivering the needed dose to a deep tumour with high precision can be solved using collimated beams of high-energy ions, such as protons and carbon. […] Contrary to X-rays and gamma rays, all ions of a given energy have a certain range, delivering most of the dose after they have slowed down, just before stopping. The ion energy can be tuned to deliver most of the dose to the tumour, minimizing the impact on healthy tissues. The ion beam, which does not broaden during the penetration, can follow the shape of the tumour with millimetre precision. Ions with higher atomic number, such as carbon, have a stronger biological effect on the tumour cells, so the dose can be reduced. Ion therapy facilities are [however] still very expensive – in the range of hundreds of millions of pounds – and difficult to operate.”
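The exponential fall-off of photon dose with depth can be sketched as I(x) = I₀·e^(−μx). The attenuation coefficient below (~0.05 per cm, roughly right for megavoltage photons in tissue) is an illustrative assumption, not a figure from the book:

```python
import math

# Fraction of the surface photon intensity surviving to a given depth,
# using a simple exponential attenuation model I(x) = I0 * exp(-mu * x).
# mu = 0.05 per cm is an illustrative value for high-energy X-rays in tissue.

def photon_dose_fraction(depth_cm: float, mu_per_cm: float = 0.05) -> float:
    return math.exp(-mu_per_cm * depth_cm)

for depth in (0, 5, 10, 20):
    print(depth, round(photon_dose_fraction(depth), 3))
# A tumour 20 cm deep sees only ~37% of the surface intensity, which is
# why photon plans irradiate from several angles, while an ion beam can
# instead be tuned to deposit most of its dose at the tumour depth.
```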

“About 50 million years ago, a global cooling trend took our planet from the tropical conditions at the beginning of the Tertiary to the ice ages of the Quaternary, when the Arctic ice cap developed. The temperature decrease was accompanied by a decrease in atmospheric CO2 from 2,000 to 300 parts per million. The cooling was probably caused by a reduced greenhouse effect and also by changes in ocean circulation due to plate tectonics. The drop in temperature was not constant as there were some brief periods of sudden warming. Ocean deep-water temperatures dropped from 12°C, 50 million years ago, to 6°C, 30 million years ago, according to archives in deep-sea sediments (today, deep-sea waters are about 2°C). […] During the last 2 million years, the mean duration of the glacial periods was about 26,000 years, while that of the warm periods – interglacials – was about 27,000 years. Between 2.6 and 1.1 million years ago, a full cycle of glacial advance and retreat lasted about 41,000 years. During the past 1.2 million years, this cycle has lasted 100,000 years. Stable and radioactive isotopes play a crucial role in the reconstruction of the climatic history of our planet”.

Links:

CUORE (Cryogenic Underground Observatory for Rare Events).
Borexino.
Lawrence Livermore National Laboratory.
Marie Curie. Pierre Curie. Henri Becquerel. Wilhelm Röntgen. Joseph Thomson. Ernest Rutherford. Hans Geiger. Ernest Marsden. Niels Bohr.
Ruhmkorff coil.
Electroscope.
Pitchblende (uraninite).
Mache.
Polonium. Becquerel.
Radium.
Alpha decay. Beta decay. Gamma radiation.
Plum pudding model.
Spinthariscope.
Robert Boyle. John Dalton. Dmitri Mendeleev. Frederick Soddy. James Chadwick. Enrico Fermi. Lise Meitner. Otto Frisch.
Periodic Table.
Exponential decay. Decay chain.
Positron.
Particle accelerator. Cockcroft-Walton generator. Van de Graaff generator.
Barn (unit).
Nuclear fission.
Manhattan Project.
Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Electron volt.
Thermoluminescent dosimeter.
Silicon diode detector.
Enhanced geothermal system.
Chicago Pile Number 1. Experimental Breeder Reactor 1. Obninsk Nuclear Power Plant.
Natural nuclear fission reactor.
Gas-cooled reactor.
Generation I reactors. Generation II reactor. Generation III reactor. Generation IV reactor.
Nuclear fuel cycle.
Accelerator-driven subcritical reactor.
Thorium-based nuclear power.
Small, sealed, transportable, autonomous reactor.
Fusion power. P-p (proton-proton) chain reaction. CNO cycle. Tokamak. ITER (International Thermonuclear Experimental Reactor).
Sterile insect technique.
Phase-contrast X-ray imaging. Computed tomography (CT). SPECT (Single-photon emission computed tomography). PET (positron emission tomography).
Boron neutron capture therapy.
Radiocarbon dating. Bomb pulse.
Radioactive tracer.
Radithor. The Radiendocrinator.
Radioisotope heater unit. Radioisotope thermoelectric generator. Seebeck effect.
Accelerator mass spectrometry.
Atomic bombings of Hiroshima and Nagasaki. Treaty on the Non-Proliferation of Nuclear Weapons. IAEA.
Nuclear terrorism.
Swiss light source. Synchrotron.
Chronology of the universe. Stellar evolution. S-process. R-process. Red giant. Supernova. White dwarf.
Victor Hess. Domenico Pacini. Cosmic ray.
Allende meteorite.
Age of the Earth. History of Earth. Geomagnetic reversal. Uranium-lead dating. Clair Cameron Patterson.
Glacials and interglacials.
Taung child. Lucy. Ardi. Ardipithecus kadabba. Acheulean tools. Java Man. Ötzi.
Argon-argon dating. Fission track dating.

November 28, 2017 Posted by | Archaeology, Astronomy, Biology, Books, Cancer/oncology, Chemistry, Engineering, Geology, History, Medicine, Physics | Leave a comment

Materials… (II)

Some more quotes and links:

“Whether materials are stiff and strong, or hard or weak, is the territory of mechanics. […] the 19th century continuum theory of linear elasticity is still the basis of much of modern solid mechanics. A stiff material is one which does not deform much when a force acts on it. Stiffness is quite distinct from strength. A material may be stiff but weak, like a piece of dry spaghetti. If you pull it, it stretches only slightly […], but as you ramp up the force it soon breaks. To put this on a more scientific footing, so that we can compare different materials, we might devise a test in which we apply a force to stretch a bar of material and measure the increase in length. The fractional change in length is the strain; and the applied force divided by the cross-sectional area of the bar is the stress. To check that it is Hookean, we double the force and confirm that the strain has also doubled. To check that it is truly elastic, we remove the force and check that the bar returns to the same length that it started with. […] then we calculate the ratio of the stress to the strain. This ratio is the Young’s modulus of the material, a quantity which measures its stiffness. […] While we are measuring the change in length of the bar, we might also see if there is a change in its width. It is not unreasonable to think that as the bar stretches it also becomes narrower. The Poisson’s ratio of the material is defined as the ratio of the transverse strain to the longitudinal strain (without the minus sign).”
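The stiffness test described above reduces to two ratios: stress = F/A, strain = ΔL/L, and E = stress/strain. A sketch with illustrative numbers (roughly those of a mild-steel bar):

```python
# Young's modulus from the tension test described above.
# Input values are illustrative, not from the book.

def youngs_modulus(force_N, area_m2, length_m, extension_m):
    stress = force_N / area_m2       # Pa
    strain = extension_m / length_m  # dimensionless
    return stress / strain           # Pa

# 10 kN pulling on a 1 cm^2 bar, 1 m long, stretching it by 0.5 mm:
E = youngs_modulus(10_000, 1e-4, 1.0, 0.5e-3)
print(f"E = {E:.1e} Pa")  # 2.0e+11 Pa, i.e. ~200 GPa, steel-like

# Poisson's ratio: transverse strain over longitudinal strain.
poisson = (0.15e-3 / 1.0) / (0.5e-3 / 1.0)
print(round(poisson, 3))  # 0.3, typical of metals
```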

“There was much argument between Cauchy and Lamé and others about whether there are two stiffness moduli or one. […] In fact, there are two stiffness moduli. One describes the resistance of a material to shearing and the other to compression. The shear modulus is the stiffness in distortion, for example in twisting. It captures the resistance of a material to changes of shape, with no accompanying change of volume. The compression modulus (usually called the bulk modulus) expresses the resistance to changes of volume (but not shape). This is what occurs as a cube of material is lowered deep into the sea, and is squeezed on all faces by the water pressure. The Young’s modulus [is] a combination of the more fundamental shear and bulk moduli, since stretching in one direction produces changes in both shape and volume. […] A factor of about 10,000 covers the useful range of Young’s modulus in engineering materials. The stiffness can be traced back to the forces acting between atoms and molecules in the solid state […]. Materials like diamond or tungsten with strong bonds are stiff in the bulk, while polymer materials with weak intermolecular forces have low stiffness.”
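The claim that Young's modulus is a combination of the shear and bulk moduli can be written down directly. For an isotropic material the standard elasticity relations (not quoted in the book) are E = 9KG/(3K + G) and ν = (3K − 2G)/(2(3K + G)):

```python
# Young's modulus and Poisson's ratio from the bulk modulus K and
# shear modulus G, via the standard isotropic-elasticity relations.

def youngs_from_KG(K, G):
    return 9 * K * G / (3 * K + G)

def poisson_from_KG(K, G):
    return (3 * K - 2 * G) / (2 * (3 * K + G))

# Steel, roughly: K ~ 160 GPa, G ~ 80 GPa (illustrative handbook values).
K, G = 160e9, 80e9
print(f"E  ≈ {youngs_from_KG(K, G) / 1e9:.0f} GPa")  # ~206 GPa
print(f"nu ≈ {poisson_from_KG(K, G):.2f}")           # ~0.29
```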

“In pure compression, the concept of ‘strength’ has no meaning, since the material cannot fail or rupture. But materials can and do fail in tension or in shear. To judge how strong a material is we can go back for example to the simple tension arrangement we used for measuring stiffness, but this time make it into a torture test in which the specimen is put on the rack. […] We find […] that we reach a strain at which the material stops being elastic and is permanently stretched. We have reached the yield point, and beyond this we have damaged the material but it has not failed. After further yielding, the bar may fail by fracture […]. On the other hand, with a bar of cast iron, there comes a point where the bar breaks, noisily and without warning, and without yield. This is a failure by brittle fracture. The stress at which it breaks is the tensile strength of the material. For the ductile material, the stress at which plastic deformation starts is the tensile yield stress. Both are measures of strength. It is in metals that yield and plasticity are of the greatest significance and value. In working components, yield provides a safety margin between small-strain elasticity and catastrophic rupture. […] plastic deformation is [also] exploited in making things from metals like steel and aluminium. […] A useful feature of plastic deformation in metals is that plastic straining raises the yield stress, particularly at lower temperatures.”

“Brittle failure is not only noisy but often scary. Engineers keep well away from it. An elaborate theory of fracture mechanics has been built up to help them avoid it, and there are tough materials to hand which do not easily crack. […] Since small cracks and flaws are present in almost any engineering component […], the trick is not to avoid cracks but to avoid long cracks which exceed [a] critical length. […] In materials which can yield, the tip stress can be relieved by plastic deformation, and this is a potent toughening mechanism in some materials. […] The trick of compressing a material to suppress cracking is a powerful way to toughen materials.”
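The 'critical length' can be sketched with the standard fracture-mechanics relation (an addition of mine, not a formula quoted in the book): a crack becomes critical when the stress intensity K = σ√(πa) reaches the material's fracture toughness K_Ic, so a_critical = (K_Ic/σ)²/π:

```python
import math

# Critical crack length from fracture toughness K_Ic and applied stress.
# Material values below are rough, illustrative handbook figures.

def critical_crack_length_m(K_Ic_Pa_sqrt_m: float, stress_Pa: float) -> float:
    return (K_Ic_Pa_sqrt_m / stress_Pa) ** 2 / math.pi

# A tough steel (K_Ic ~ 50 MPa*m^0.5) at 200 MPa tolerates
# centimetre-scale cracks; glass (K_Ic ~ 0.7) at the same stress
# fails from cracks only a few microns long.
print(f"{critical_crack_length_m(50e6, 200e6) * 1000:.1f} mm")
print(f"{critical_crack_length_m(0.7e6, 200e6) * 1e6:.1f} um")
```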

“Hardness is a property which materials scientists think of in a particular and practical way. It tells us how well a material resists being damaged or deformed by a sharp object. That is useful information and it can be obtained easily. […] Soft is sometimes the opposite of hard […] But a different kind of soft is squidgy. […] In the soft box, we find many everyday materials […]. Some soft materials such as adhesives and lubricants are of great importance in engineering. For all of them, the model of a stiff crystal lattice provides no guidance. There is usually no crystal. The units are polymer chains, or small droplets of liquids, or small solid particles, with weak forces acting between them, and little structural organization. Structures when they exist are fragile. Soft materials deform easily when forces act on them […]. They sit as a rule somewhere between rigid solids and simple liquids. Their mechanical behaviour is dominated by various kinds of plasticity.”

“In pure metals, the resistivity is extremely low […] and a factor of ten covers all of them. […] the low resistivity (or, put another way, the high conductivity) arises from the existence of a conduction band in the solid which is only partly filled. Electrons in the conduction band are mobile and drift in an applied electric field. This is the electric current. The electrons are subject to some scattering from lattice vibrations which impedes their motion and generates an intrinsic resistance. Scattering becomes more severe as the temperature rises and the amplitude of the lattice vibrations becomes greater, so that the resistivity of metals increases with temperature. Scattering is further increased by microstructural heterogeneities, such as grain boundaries, lattice distortions, and other defects, and by phases of different composition. So alloys have appreciably higher resistivities than their pure parent metals. Adding 5 per cent nickel to iron doubles the resistivity, although the resistivities of the two pure metals are similar. […] Resistivity depends fundamentally on band structure. […] Plastics and rubbers […] are usually insulators. […] Electronically conducting plastics would have many uses, and some materials [e.g. this one] are now known. […] The electrical resistivity of many metals falls to exactly zero as they are cooled to very low temperatures. The critical temperature at which this happens varies, but for pure metallic elements it always lies below 10 K. For a few alloys, it is a little higher. […] Superconducting windings provide stable and powerful magnetic fields for magnetic resonance imaging, and many industrial and scientific uses.”
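The two resistivity effects described — scattering from lattice vibrations growing with temperature, plus a fixed contribution from defects and alloying — can be sketched with a simple linear model (Matthiessen's rule, a standard idealization not named in the book; the coefficients are illustrative):

```python
# Resistivity as a temperature-dependent lattice term plus a constant
# residual term from impurities and defects. Illustrative coefficients.

def resistivity(T_kelvin, rho_293=1.0e-7, alpha=0.004, rho_residual=0.0):
    """Ohm-metre; linear temperature model about room temperature (293 K)."""
    return rho_293 * (1 + alpha * (T_kelvin - 293)) + rho_residual

pure = resistivity(293)                      # pure metal at room temperature
hot = resistivity(393)                       # same metal, 100 K hotter
alloy = resistivity(293, rho_residual=1e-7)  # alloying roughly doubles it
print(pure, hot, alloy)
```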

“A permanent magnet requires no power. Its magnetization has its origin in the motion of electrons in atoms and ions in the solid, but only a few materials have the favourable combination of quantum properties to give rise to useful ferromagnetism. […] Ferromagnetism disappears completely above the so-called Curie temperature. […] Below the Curie temperature, ferromagnetic alignment throughout the material can be established by imposing an external polarizing field to create a net magnetization. In this way a practical permanent magnet is made. The ideal permanent magnet has an intense magnetization (a strong field) which remains after the polarizing field is switched off. It can only be demagnetized by applying a strong polarizing field in the opposite direction: the size of this field is the coercivity of the magnet material. For a permanent magnet, it should be as high as possible. […] Permanent magnets are ubiquitous but more or less invisible components of umpteen devices. There are a hundred or so in every home […]. There are also important uses for ‘soft’ magnetic materials, in devices where we want the ferromagnetism to be temporary, not permanent. Soft magnets lose their magnetization after the polarizing field is removed […] They have low coercivity, approaching zero. When used in a transformer, such a soft ferromagnetic material links the input and output coils by magnetic induction. Ideally, the magnetization should reverse during every cycle of the alternating current to minimize energy losses and heating. […] Silicon transformer steels yielded large gains in efficiency in electrical power distribution when they were first introduced in the 1920s, and they remain pre-eminent.”

“At least 50 families of plastics are produced commercially today. […] These materials all consist of linear string molecules, most with simple carbon backbones, a few with carbon-oxygen backbones […] Plastics as a group are valuable because they are lightweight and work well in wet environments, and don’t go rusty. They are mostly unaffected by acids and salts. But they burn, and they don’t much like sunlight as the ultraviolet light can break the polymer backbone. Most commercial plastics are mixed with substances which make it harder for them to catch fire and which filter out the ultraviolet light. Above all, plastics are used because they can be formed and shaped so easily. The string molecule itself is held together by strong chemical bonds and is resilient, but the forces between the molecules are weak. So plastics melt at low temperatures to produce rather viscous liquids […]. And with modest heat and a little pressure, they can be injected into moulds to produce articles of almost any shape”.

“The downward cascade of high purity to adulterated materials in recycling is a kind of entropy effect: unmixing is thermodynamically hard work. But there is an energy-driven problem too. Most materials are thermodynamically unstable (or metastable) in their working environments and tend to revert to the substances from which they were made. This is well-known in the case of metals, and is the usual meaning of corrosion. The metals are more stable when combined with oxygen than uncombined. […] Broadly speaking, ceramic materials are more stable thermodynamically, since they already contain much oxygen in chemical combination. Even so, ceramics used in the open usually fall victim to some environmental predator. Often it is water that causes damage. Water steals sodium and potassium from glass surfaces by slow leaching. The surface shrinks and cracks, so the glass loses its transparency. […] Stones and bricks may succumb to the stresses of repeated freezing when wet; limestones decay also by the chemical action of sulfur and nitrogen gasses in polluted rainwater. Even buried archaeological pots slowly react with water in a remorseless process similar to that of rock weathering.”

Ashby plot.
Alan Arnold Griffith.
Creep (deformation).
Amontons’ laws of friction.
Viscoelasticity.
Internal friction.
Surfactant.
Dispersant.
Rheology.
Liquid helium.
Conductor. Insulator. Semiconductor. P-type semiconductor. N-type semiconductor.
Hall–Héroult process.
Cuprate.
Magnetostriction.
Snell’s law.
Chromatic aberration.
Dispersion (optics).
Dye.
Density functional theory.
Glass.
Pilkington float process.
Superalloy.
Ziegler–Natta catalyst.
Transistor.
Integrated circuit.
Negative-index metamaterial.
Auxetics.
Titanium dioxide.
Hyperfine structure (/-interactions).
Diamond anvil cell.
Synthetic rubber.
Simon–Ehrlich wager.
Sankey diagram.

November 16, 2017 Posted by | Books, Chemistry, Engineering, Physics | Leave a comment

Materials (I)…

“Useful matter is a good definition of materials. […] Materials are materials because inventive people find ingenious things to do with them. Or just because people use them. […] Materials science […] explains how materials are made and how they behave as we use them.”

I recently read this book, which I liked. Below I have added some quotes from the first half of the book, with some added hopefully helpful links, as well as a collection of links at the bottom of the post to other topics covered.

“We understand all materials by knowing about composition and microstructure. Despite their extraordinary minuteness, the atoms are the fundamental units, and they are real, with precise attributes, not least size. Solid materials tend towards crystallinity (for the good thermodynamic reason that it is the arrangement of lowest energy), and they usually achieve it, though often in granular, polycrystalline forms. Processing conditions greatly influence microstructures which may be mobile and dynamic, particularly at high temperatures. […] The idea that we can understand materials by looking at their internal structure in finer and finer detail goes back to the beginnings of microscopy […]. This microstructural view is more than just an important idea, it is the explanatory framework at the core of materials science. Many other concepts and theories exist in materials science, but this is the framework. It says that materials are intricately constructed on many length-scales, and if we don’t understand the internal structure we shall struggle to explain or to predict material behaviour.”

“Oxygen is the most abundant element in the earth’s crust and silicon the second. In nature, silicon occurs always in chemical combination with oxygen, the two forming the strong Si–O chemical bond. The simplest combination, involving no other elements, is silica; and most grains of sand are crystals of silica in the form known as quartz. […] The quartz crystal comes in right- and left-handed forms. Nothing like this happens in metals but arises frequently when materials are built from molecules and chemical bonds. The crystal structure of quartz has to incorporate two different atoms, silicon and oxygen, each in a repeating pattern and in the precise ratio 1:2. There is also the severe constraint imposed by the Si–O chemical bonds which require that each Si atom has four O neighbours arranged around it at the corners of a tetrahedron, every O bonded to two Si atoms. The crystal structure which quartz adopts (which of all possibilities is the one of lowest energy) is made up of triangular and hexagonal units. But within this there are buried helixes of Si and O atoms, and a helix must be either right- or left-handed. Once a quartz crystal starts to grow as right- or left-handed, its structure templates all the other helices with the same handedness. Equal numbers of right- and left-handed crystals occur in nature, but each is unambiguously one or the other.”

“In the living tree, and in the harvested wood that we use as a material, there is a hierarchy of structural levels, climbing all the way from the molecular to the scale of branch and trunk. The stiff cellulose chains are bundled into fibrils, which are themselves bonded by other organic molecules to build the walls of cells; which in turn form channels for the transport of water and nutrients, the whole having the necessary mechanical properties to support its weight and to resist the loads of wind and rain. In the living tree, the structure allows also for growth and repair. There are many things to be learned from biological materials, but the most universal is that biology builds its materials at many structural levels, and rarely makes a distinction between the material and the organism. Being able to build materials with hierarchical architectures is still more or less out of reach in materials engineering. Understanding how materials spontaneously self-assemble is the biggest challenge in contemporary nanotechnology.”

“The example of diamond shows two things about crystalline materials. First, anything we know about an atom and its immediate environment (neighbours, distances, angles) holds for every similar atom throughout a piece of material, however large; and second, everything we know about the unit cell (its size, its shape, and its symmetry) also applies throughout an entire crystal […] and by extension throughout a material made of a myriad of randomly oriented crystallites. These two general propositions provide the basis and justification for lattice theories of material behaviour which were developed from the 1920s onwards. We know that every solid material must be held together by internal cohesive forces. If it were not, it would fly apart and turn into a gas. A simple lattice theory says that if we can work out what forces act on the atoms in one unit cell, then this should be enough to understand the cohesion of the entire crystal. […] In lattice models which describe the cohesion and dynamics of the atoms, the role of the electrons is mainly in determining the interatomic bonding and the stiffness of the bond-spring. But in many materials, and especially in metals and semiconductors, some of the electrons are free to move about within the lattice. A lattice model of electron behaviour combines a geometrical description of the lattice with a more or less mechanical view of the atomic cores, and a fully quantum theoretical description of the electrons themselves. We need only to take account of the outer electrons of the atoms, as the inner electrons are bound tightly into the cores and are not itinerant. The outer electrons are the ones that form chemical bonds, so they are also called the valence electrons.”

“It is harder to push atoms closer together than to pull them further apart. While atoms are soft on the outside, they have harder cores, and pushed together the cores start to collide. […] when we bring a trillion atoms together to form a crystal, it is the valence electrons that are disturbed as the atoms approach each other. As the atomic cores come close to the equilibrium spacing of the crystal, the electron states of the isolated atoms morph into a set of collective states […]. These collective electron states have a continuous distribution of energies up to a top level, and form a ‘band’. But the separation of the valence electrons into distinct electron-pair states is preserved in the band structure, so that we find that the collective states available to the entire population of valence electrons in the entire crystal form a set of bands […]. Thus in silicon, there are two main bands.”

“The perfect crystal has atoms occupying all the positions prescribed by the geometry of its crystal lattice. But real crystalline materials fall short of perfection […] For instance, an individual site may be unoccupied (a vacancy). Or an extra atom may be squeezed into the crystal at a position which is not a lattice position (an interstitial). An atom may fall off its lattice site, creating a vacancy and an interstitial at the same time. Sometimes a site is occupied by the wrong kind of atom. Point defects of this kind distort the crystal in their immediate neighbourhood. Vacancies free up diffusional movement, allowing atoms to hop from site to site. Larger scale defects invariably exist too. A complete layer of atoms or unit cells may terminate abruptly within the crystal to produce a line defect (a dislocation). […] There are materials which try their best to crystallize, but find it hard to do so. Many polymer materials are like this. […] The best they can do is to form small crystalline regions in which the molecules lie side by side over limited distances. […] Often the crystalline domains comprise about half the material: it is a semicrystal. […] Crystals can be formed from the melt, from solution, and from the vapour. All three routes are used in industry and in the laboratory. As a rule, crystals that grow slowly are good crystals. Geological time can give wonderful results. Often, crystals are grown on a seed, a small crystal of the same material deliberately introduced into the crystallization medium. If this is a melt, the seed can gradually be pulled out, drawing behind it a long column of new crystal material. This is the Czochralski process, an important method for making semiconductors. […] However it is done, crystals invariably grow by adding material to the surface of a small particle to make it bigger.”

“As we go down the Periodic Table of elements, the atoms get heavier much more quickly than they get bigger. The mass of a single atom of uranium at the bottom of the Table is about 25 times greater than that of an atom of the lightest engineering metal, beryllium, at the top, but its radius is only 40 per cent greater. […] The density of solid materials of every kind is fixed mainly by where the constituent atoms are in the Periodic Table. The packing arrangement in the solid has only a small influence, although the crystalline form of a substance is usually a little denser than the amorphous form […] The range of solid densities available is therefore quite limited. At the upper end we hit an absolute barrier, with nothing denser than osmium (22,590 kg/m3). At the lower end we have some slack, as we can make lighter materials by the trick of incorporating holes to make foams and sponges and porous materials of all kinds. […] in the entire catalogue of available materials there is a factor of about a thousand for ingenious people to play with, from say 20 to 20,000 kg/m3.”
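A quick back-of-envelope check of the quoted claim: since density scales roughly as atomic mass divided by atomic radius cubed, a ~25× mass ratio and a ~40% larger radius imply a large but bounded density ratio. This sketch is mine, not the book's; the two input ratios are the book's figures:

```python
# Back-of-envelope: density scales roughly as atomic mass / (atomic radius)^3.
mass_ratio = 25.0     # U atom ~25x heavier than Be (the book's figure)
radius_ratio = 1.40   # U radius ~40% larger (the book's figure)

density_ratio = mass_ratio / radius_ratio ** 3
print(f"implied density ratio ~{density_ratio:.0f}")  # ~9

# Tabulated solid densities: uranium ~19,050 kg/m^3, beryllium ~1,850 kg/m^3,
# a ratio of ~10 -- close to the crude m/r^3 estimate.
```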

“The expansion of materials as we increase their temperature is a universal tendency. It occurs because as we raise the temperature the thermal energy of the atoms and molecules increases correspondingly, and this fights against the cohesive forces of attraction. The mean distance of separation between atoms in the solid (or the liquid) becomes larger. […] As a general rule, the materials with small thermal expansivities are metals and ceramics with high melting temperatures. […] Although thermal expansion is a smooth process which continues from the lowest temperatures to the melting point, it is sometimes interrupted by sudden jumps […]. Changes in crystal structure at precise temperatures are commonplace in materials of all kinds. […] There is a cluster of properties which describe the thermal behaviour of materials. Besides the expansivity, there is the specific heat, and also the thermal conductivity. These properties show us, for example, that it takes about four times as much energy to increase the temperature of 1 kilogram of aluminium by 1°C as 1 kilogram of silver; and that good conductors of heat are usually also good conductors of electricity. At everyday temperatures there is not a huge difference in specific heat between materials. […] In all crystalline materials, thermal conduction arises from the diffusion of phonons from hot to cold regions. As they travel, the phonons are subject to scattering both by collisions with other phonons, and with defects in the material. This picture explains why the thermal conductivity falls as temperature rises”.
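The "four times as much energy" figure for aluminium versus silver drops straight out of the Dulong–Petit law (linked below): the molar heat capacity of most simple solids is about 3R, so the per-kilogram specific heat scales inversely with molar mass. A minimal sketch:

```python
R = 8.314  # gas constant, J/(mol K)
molar_mass = {"aluminium": 0.0270, "silver": 0.1079}  # kg/mol

# Dulong-Petit: molar heat capacity ~ 3R, so per-kg capacity ~ 3R / molar mass.
c_per_kg = {el: 3 * R / M for el, M in molar_mass.items()}
for el, c in c_per_kg.items():
    print(f"{el}: ~{c:.0f} J/(kg K)")

ratio = c_per_kg["aluminium"] / c_per_kg["silver"]
print(f"Al needs ~{ratio:.1f}x the energy of Ag per kg per degree")  # ~4.0
```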

 

Some links:

Materials science.
Metals.
Inorganic compound.
Organic compound.
Solid solution.
Copper. Bronze. Brass. Alloy.
Electrical conductivity.
Steel. Bessemer converter. Gamma iron. Alpha iron. Cementite. Martensite.
Phase diagram.
Equation of state.
Calcite. Limestone.
Birefringence.
Portland cement.
Cellulose.
Wood.
Ceramic.
Mineralogy.
Crystallography.
Laue diffraction pattern.
Silver bromide. Latent image. Photographic film. Henry Fox Talbot.
Graphene. Graphite.
Thermal expansion.
Invar.
Dulong–Petit law.
Wiedemann–Franz law.

 

November 14, 2017 Posted by | Biology, Books, Chemistry, Engineering, Physics | Leave a comment

Light

I gave the book two stars. Some quotes and links below.

“Lenses are ubiquitous in image-forming devices […] Imaging instruments have two components: the lens itself, and a light detector, which converts the light into, typically, an electrical signal. […] In every case the location of the lens with respect to the detector is a key design parameter, as is the focal length of the lens which quantifies its ‘ray-bending’ power. The focal length is set by the curvature of the surfaces of the lens and its thickness. More strongly curved surfaces and thicker materials are used to make lenses with short focal lengths, and these are used usually in instruments where a high magnification is needed, such as a microscope. Because the refractive index of the lens material usually depends on the colour of light, rays of different colours are bent by different amounts at the surface, leading to a focus for each colour occurring in a different position. […] lenses with a big diameter and a short focal length will produce the tiniest images of point-like objects. […] about the best you can do in any lens system you could actually make is an image size of approximately one wavelength. This is the fundamental limit to the pixel size for lenses used in most optical instruments, such as cameras and binoculars. […] Much more sophisticated methods are required to see even smaller things. The reason is that the wave nature of light puts a lower limit on the size of a spot of light. […] At the other extreme, both ground- and space-based telescopes for astronomy are very large instruments with relatively simple optical imaging components […]. The distinctive feature of these imaging systems is their size. The most distant stars are very, very faint. Hardly any of their light makes it to the Earth. It is therefore very important to collect as much of it as possible. This requires a very big lens or mirror”.
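The book's "about one wavelength" rule of thumb corresponds to the standard Abbe diffraction limit, d ≈ λ/(2·NA), where NA is the lens's numerical aperture. The formula is standard optics rather than something the book spells out; the numbers below are illustrative:

```python
def diffraction_limited_spot(wavelength_nm, numerical_aperture):
    """Abbe limit: smallest focusable spot ~ wavelength / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (550 nm) through a reasonably fast lens (NA = 0.5):
print(diffraction_limited_spot(550, 0.5))  # 550.0 nm -- about one wavelength
```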

“[W]hat sort of wave is light? This was […] answered in the 19th century by James Clerk Maxwell, who showed that it is an oscillation of a new kind of entity: the electromagnetic field. This field is effectively a force that acts on electric charges and magnetic materials. […] In the early 19th century, Michael Faraday had shown the close connections between electric and magnetic fields. Maxwell brought them together, as the electromagnetic force field. […] in the wave model, light can be considered as very high frequency oscillations of the electromagnetic field. One consequence of this idea is that moving electric charges can generate light waves. […] When […] charges accelerate — that is, when they change their speed or their direction of motion — then a simple law of physics is that they emit light. Understanding this was one of the great achievements of the theory of electromagnetism.”

“It was the observation of interference effects in a famous experiment by Thomas Young in 1803 that really put the wave picture of light as the leading candidate as an explanation of the nature of light. […] It is interference of light waves that causes the colours in a thin film of oil floating on water. Interference transforms very small distances, on the order of the wavelength of light, into very big changes in light intensity — from no light to four times as bright as the individual constituent waves. Such changes in intensity are easy to detect or see, and thus interference is a very good way to measure small changes in displacement on the scale of the wavelength of light. Many optical sensors are based on interference effects.”
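The "no light to four times as bright" range follows from adding the wave *amplitudes* before squaring to get intensity: two equal waves in phase give |2A|² = 4A², and in antiphase they cancel. A minimal numerical sketch:

```python
import math

A = 1.0              # amplitude of each of the two equal waves
I_single = A ** 2    # intensity of one wave on its own

# Relative phase between the two waves, sampled around a full cycle:
phases = [0, math.pi / 2, math.pi, 3 * math.pi / 2]
I_total = [abs(A + A * complex(math.cos(p), math.sin(p))) ** 2 for p in phases]

# In phase: 4x a single wave; in antiphase: zero.
print([round(I / I_single, 2) for I in I_total])  # [4.0, 2.0, 0.0, 2.0]
```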

“[L]ight beams […] gradually diverge as they propagate. This is because a beam of light, which by definition has a limited spatial extent, must be made up of waves that propagate in more than one direction. […] This phenomenon is called diffraction. […] if you want to transmit light over long distances, then diffraction could be a problem. It will cause the energy in the light beam to spread out, so that you would need a bigger and bigger optical system and detector to capture all of it. This is important for telecommunications, since nearly all of the information transmitted over long-distance communications links is encoded on to light beams. […] The means to manage diffraction so that long-distance communication is possible is to use wave guides, such as optical fibres.”
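To see why diffraction forces the use of wave guides over long links, here is a rough estimate using the standard Gaussian-beam far-field divergence θ ≈ λ/(π·w0). The beam size and link length are illustrative choices of mine, not figures from the book:

```python
import math

wavelength = 1.55e-6   # m, a typical telecom wavelength
w0 = 5e-3              # m, initial beam radius (5 mm, illustrative)
distance = 10e3        # m, a 10 km free-space link

theta = wavelength / (math.pi * w0)   # far-field half-angle divergence
radius = w0 + theta * distance        # approximate beam radius at 10 km

print(f"divergence ~{theta * 1e6:.0f} microradians")
print(f"beam radius after 10 km ~{radius:.2f} m")
# A 5 mm beam spreads to ~1 m across -- hence optical fibres.
```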

“[O]ptical waves […] guided along a fibre or in a glass ‘chip’ […] underpins the long-distance telecommunications infrastructure that connects people across different continents and powers the Internet. The reason it is so effective is that light-based communications have much more capacity for carrying information than do electrical wires, or even microwave cellular networks. […] In optical communications, […] bits are represented by the intensity of the light beam — typically low intensity is a 0 and higher intensity a 1. The more of these that arrive per second, the faster the communication rate. […] Why is optics so good for communications? There are two reasons. First, light beams don’t easily influence each other, so that a single fibre can support many light pulses (usually of different colours) simultaneously without the messages getting scrambled up. The reason for this is that the glass of which the fibre is made does not absorb light (or only absorbs it in tiny amounts), and so does not heat up and disrupt other pulse trains. […] the ‘crosstalk’ between light beams is very weak in most materials, so that many beams can be present at once without causing a degradation of the signal. This is very different from electrons moving down a copper wire, which is the usual way in which local ‘wired’ communications links function. Electrons tend to heat up the wire, dissipating their energy. This makes the signals harder to receive, and thus the number of different signal channels has to be kept small enough to avoid this problem. Second, light waves oscillate at very high frequencies, and this allows very short pulses to be generated. This means that the pulses can be spaced very close together in time, making the transmission of more bits of information per second possible. […] Fibre-based optical networks can also support a very wide range of colours of light.”
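As an illustration of the 'many colours at once' point, one can count how many channels fit into the telecom C band at the standard 50 GHz dense-WDM spacing. The band edges and spacing are standard ITU figures, not from the book:

```python
c = 2.998e8                      # speed of light, m/s
lam_short, lam_long = 1530e-9, 1565e-9   # telecom C-band edges, m

# Convert the wavelength window to a frequency bandwidth:
bandwidth = c / lam_short - c / lam_long  # Hz, ~4.4 THz
spacing = 50e9                            # 50 GHz DWDM channel spacing

channels = int(bandwidth // spacing)
print(f"C band ~{bandwidth / 1e12:.1f} THz -> ~{channels} channels")
```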

“Waves can be defined by their wavelength, amplitude, and phase […]. Particles are defined by their position and direction of travel […], and a collection of particles by their density […] and range of directions. The media in which the light moves are characterized by their refractive indices. This can vary across space. […] Hamilton showed that what was important was how rapidly the refractive index changed in space compared with the length of an optical wave. That is, if the changes in index took place on a scale of close to a wavelength, then the wave character of light was evident. If it varied more smoothly and very slowly in space then the particle picture provided an adequate description. He showed how the simpler ray picture emerges from the more complex wave picture in certain commonly encountered situations. The appearance of wave-like phenomena, such as diffraction and interference, occurs when the size scales of the wavelength of light and the structures in which it propagates are similar. […] Particle-like behaviour — motion along a well-defined trajectory — is sufficient to describe the situation when all objects are much bigger than the wavelength of light, and have no sharp edges.”

“When things are heated up, they change colour. Take a lump of metal. As it gets hotter and hotter it first glows red, then orange, and then white. Why does this happen? This question stumped many of the great scientists [in the 19th century], including Maxwell himself. The problem was that Maxwell’s theory of light, when applied to this problem, indicated that the colour should get bluer and bluer as the temperature increased, without a limit, eventually moving out of the range of human vision into the ultraviolet—beyond blue—region of the spectrum. But this does not happen in practice. […] Max Planck […] came up with an idea to explain the spectrum emitted by hot objects — so-called ‘black bodies’. He conjectured that when light and matter interact, they do so only by exchanging discrete ‘packets’, or quanta, of energy. […] this conjecture was set to radically change physics.”
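The failure described here (the 'ultraviolet catastrophe') can be seen numerically: the classical Rayleigh–Jeans spectrum grows without bound at short wavelengths, while Planck's quantized formula rolls over and falls off. The formulas are standard physics, not spelled out in the book:

```python
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def rayleigh_jeans(lam, T):
    """Classical spectral radiance: diverges as lam -> 0."""
    return 2 * c * kB * T / lam ** 4

def planck(lam, T):
    """Planck's quantized spectral radiance: peaks, then falls off."""
    return (2 * h * c ** 2 / lam ** 5) / (math.expm1(h * c / (lam * kB * T)))

T = 3000.0  # a glowing lump of metal, K
for lam in (2e-6, 1e-6, 0.5e-6, 0.1e-6):  # wavelengths in metres
    print(f"{lam * 1e6:.1f} um: RJ {rayleigh_jeans(lam, T):.3g}, "
          f"Planck {planck(lam, T):.3g}")
# Rayleigh-Jeans keeps climbing toward the ultraviolet; Planck does not.
```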

“What Dirac did was to develop a quantum mechanical version of Maxwell’s theory of electromagnetic fields. […] It set the quantum field up as the fundamental entity on which the universe is built — neither particle nor wave, but both at once; complete wave–particle duality. It is a beautiful reconciliation of all the phenomena that light exhibits, and provides a framework in which to understand all optical effects, both those from the classical world of Newton, Maxwell, and Hamilton and those of the quantum world of Planck, Einstein, and Bohr. […] Light acts as a particle of more or less well-defined energy when it interacts with matter. Yet it retains its ability to exhibit wave-like phenomena at the same time. The resolution [was] a new concept: the quantum field. Light particles — photons — are excitations of this field, which propagates according to quantum versions of Maxwell’s equations for light waves. Quantum fields, of which light is perhaps the simplest example, are now regarded as being the fundamental entities of the universe, underpinning all types of material and non-material things. The only explanation is that the stuff of the world is neither particle nor wave but both. This is the nature of reality.”

Some links:

Light.
Optics.
Watt.
Irradiance.
Coherence (physics).
Electromagnetic spectrum.
Joseph von Fraunhofer.
Spectroscopy.
Wave.
Transverse wave.
Wavelength.
Spatial frequency.
Polarization (waves).
Specular reflection.
Negative-index metamaterial.
Birefringence.
Interference (wave propagation).
Diffraction.
Young’s interference experiment.
Holography.
Photoactivated localization microscopy.
Stimulated emission depletion (STED) microscopy.
Fourier’s theorem (I found it hard to find a good source on this one. According to the book, “Fourier’s theorem says in simple terms that the smaller you focus light, the broader the range of wave directions you need to achieve this spot”).
X-ray diffraction.
Brewster’s angle.
Liquid crystal.
Liquid crystal display.
Wave–particle duality.
Fermat’s principle.
Wavefront.
Maupertuis’ principle.
Johann Jakob Balmer.
Max Planck.
Photoelectric effect.
Niels Bohr.
Matter wave.
Quantum vacuum.
Lamb shift.
Light-emitting diode.
Fluorescent tube.
Synchrotron radiation.
Quantum state.
Quantum fluctuation.
Spontaneous emission/stimulated emission.
Photodetector.
Laser.
Optical cavity.
X-ray absorption spectroscopy.
Diamond Light Source.
Mode-locking.
Stroboscope.
Femtochemistry.
Spacetime.
Atomic clock.
Time dilation.
High harmonic generation.
Frequency comb.
Optical tweezers.
Bose–Einstein condensate.
Pump probe spectroscopy.
Vulcan laser.
Plasma (physics).
Nonclassical light.
Photon polarization.
Quantum entanglement.
Bell test experiments.
Quantum key distribution/Quantum cryptography/Quantum computing.

August 31, 2017 Posted by | Books, Chemistry, Computer science, Engineering, Physics | Leave a comment

Magnetism

This book was ‘okay…ish’, but I must admit I was a bit disappointed; the coverage was much too superficial, and I’m reasonably sure the lack of formalism made the coverage harder for me to follow than it could have been. I gave the book two stars on goodreads.

Some quotes and links below.

Quotes:

“In the 19th century, the principles were established on which the modern electromagnetic world could be built. The electrical turbine is the industrialized embodiment of Faraday’s idea of producing electricity by rotating magnets. The turbine can be driven by the wind or by falling water in hydroelectric power stations; it can be powered by steam which is itself produced by boiling water using the heat produced from nuclear fission or burning coal or gas. Whatever the method, rotating magnets inducing currents feed the appetite of the world’s cities for electricity, lighting our streets, powering our televisions and computers, and providing us with an abundant source of energy. […] rotating magnets are the engine of the modern world. […] Modern society is built on the widespread availability of cheap electrical power, and almost all of it comes from magnets whirling around in turbines, producing electric current by the laws discovered by Oersted, Ampère, and Faraday.”

“Maxwell was the first person to really understand that a beam of light consists of electric and magnetic oscillations propagating together. The electric oscillation is in one plane, at right angles to the magnetic oscillation. Both of them are in directions at right angles to the direction of propagation. […] The oscillations of electricity and magnetism in a beam of light are governed by Maxwell’s four beautiful equations […] Above all, Einstein’s work on relativity was motivated by a desire to preserve the integrity of Maxwell’s equations at all costs. The problem was this: Maxwell had derived a beautiful expression for the speed of light, but the speed of light with respect to whom? […] Einstein deduced that the way to fix this would be to say that all observers will measure the speed of any beam of light to be the same. […] Einstein showed that magnetism is a purely relativistic effect, something that wouldn’t even be there without relativity. Magnetism is an example of relativity in everyday life. […] Magnetic fields are what electric fields look like when you are moving with respect to the charges that ‘cause’ them. […] every time a magnetic field appears in nature, it is because a charge is moving with respect to the observer. Charge flows down a wire to make an electric current and this produces magnetic field. Electrons orbit an atom and this ‘orbital’ motion produces a magnetic field. […] the magnetism of the Earth is due to electrical currents deep inside the planet. Motion is the key in each and every case, and magnetic fields are the evidence that charge is on the move. […] Einstein’s theory of relativity casts magnetism in a new light. Magnetic fields are a relativistic correction which you observe when charges move relative to you.”
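Maxwell's "beautiful expression for the speed of light" is c = 1/√(μ0·ε0): the speed falls straight out of two measured electromagnetic constants. A two-line check:

```python
import math

mu0 = 4 * math.pi * 1e-7   # magnetic constant, H/m (pre-2019 defined value)
eps0 = 8.8541878128e-12    # electric constant, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"c ~ {c:.4e} m/s")  # ~2.998e8 m/s, the measured speed of light
```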

“[T]he Bohr–van Leeuwen theorem […] states that if you assume nothing more than classical physics, and then go on to model a material as a system of electrical charges, then you can show that the system can have no net magnetization; in other words, it will not be magnetic. Simply put, there are no lodestones in a purely classical Universe. This should have been a revolutionary and astonishing result, but it wasn’t, principally because it came about 20 years too late to knock everyone’s socks off. By 1921, the initial premise of the Bohr–van Leeuwen theorem, the correctness of classical physics, was known to be wrong […] But when you think about it now, the Bohr–van Leeuwen theorem gives an extraordinary demonstration of the failure of classical physics. Just by sticking a magnet to the door of your refrigerator, you have demonstrated that the Universe is not governed by classical physics.”

“[M]ost real substances are weakly diamagnetic, meaning that when placed in a magnetic field they become weakly magnetic in the opposite direction to the field. Water does this, and since animals are mostly water, it applies to them. This is the basis of Andre Geim’s levitating frog experiment: a live frog is placed in a strong magnetic field and because of its diamagnetism it becomes weakly magnetic. In the experiment, a non-uniformity of the magnetic field induces a force on the frog’s induced magnetism and, hey presto, the frog levitates in mid-air.”

“In a conventional hard disk technology, the disk needs to be spun very fast, around 7,000 revolutions per minute. […] The read head floats on a cushion of air about 15 nanometres […] above the surface of the rotating disk, reading bits off the disk at tens of megabytes per second. This is an extraordinary engineering achievement when you think about it. If you were to scale up a hard disk so that the disk is a few kilometres in diameter rather than a few centimetres, then the read head would be around the size of the White House and would be floating over the surface of the disk on a cushion of air one millimetre thick (the diameter of the head of a pin) while the disk rotated below it at a speed of several million miles per hour (fast enough to go round the equator a couple of dozen times in a second). On this scale, the bits would be spaced a few centimetres apart around each track. Hard disk drives are remarkable. […] Although hard disks store an astonishing amount of information and are cheap to manufacture, they are not fast information retrieval systems. To access a particular piece of information involves moving the head and rotating the disk to a particular spot, taking perhaps a few milliseconds. This sounds quite rapid, but with processors buzzing away and performing operations every nanosecond or so, a few milliseconds is glacial in comparison. For this reason, modern computers often use solid state memory to store temporary information, reserving the hard disk for longer-term bulk storage. However, there is a trade-off between cost and performance.”
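The scale-up analogy is easy to check with a few lines of arithmetic. My platter size and the 3 km target are illustrative round numbers consistent with the quote's "a few centimetres" and "a few kilometres":

```python
import math

# Real drive (approximate figures from the quote):
disk_diameter = 0.09   # m, ~9 cm platter
fly_height = 15e-9     # m, head-disk air gap
rpm = 7000

scale = 3000 / disk_diameter  # blow the disk up to 3 km across
print(f"scale factor ~{scale:.0f}x")
print(f"scaled fly height ~{fly_height * scale * 1e3:.1f} mm")  # ~0.5 mm

# Rim speed of the scaled-up disk at the same rpm:
rim_speed = math.pi * 3000 * rpm / 60   # m/s
print(f"rim speed ~{rim_speed / 1e3:.0f} km/s")  # ~1,100 km/s, ~2.5 million mph
```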

“In general, there is a strong economic drive to store more and more information in a smaller and smaller space, and hence a need to find a way to make smaller and smaller bits. […] [However] greater miniaturization comes at a price. The point is the following: when you try to store a bit of information in a magnetic medium, an important constraint on the usefulness of the technology is how long the information will last for. Almost always the information is being stored at room temperature and so needs to be robust to the ever present random jiggling effects produced by temperature […] It turns out that the crucial parameter controlling this robustness is the ratio of the energy needed to reverse the bit of information (in other words, the energy required to change the magnetization from one direction to the reverse direction) to a characteristic energy associated with room temperature (an energy which is, expressed in electrical units, approximately one-fortieth of a Volt). So if the energy to flip a magnetic bit is very large, the information can persist for thousands of years […] while if it is very small, the information might only last for a small fraction of a second […] This energy is proportional to the volume of the magnetic bit, and so one immediately sees a problem with making bits smaller and smaller: though you can store bits of information at higher density, there is a very real possibility that the information might be very rapidly scrambled by thermal fluctuations. This motivates the search for materials in which it is very hard to flip the magnetization from one state to the other.”
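The flip-energy argument is usually quantified with the Néel–Arrhenius relaxation time, τ = τ0·exp(KV/kBT): because the barrier KV sits in an exponential, a modest shrink in bit volume collapses the retention time. The formula is standard magnetics rather than the book's; my anisotropy constant, attempt time, and grain sizes are illustrative:

```python
import math

kB = 1.381e-23   # Boltzmann constant, J/K
T = 300.0        # room temperature, K
tau0 = 1e-9      # 'attempt time', s (typical order of magnitude)
K = 1e5          # anisotropy energy density, J/m^3 (illustrative)

for side_nm in (12, 10, 8):   # cubic magnetic grain, side length in nm
    V = (side_nm * 1e-9) ** 3
    barrier_over_kT = K * V / (kB * T)
    tau = tau0 * math.exp(barrier_over_kT)
    print(f"{side_nm} nm grain: barrier ~{barrier_over_kT:.0f} kT, "
          f"retention ~{tau:.2g} s")
# ~12 nm: decades of retention; ~8 nm: gone in a fraction of a second.
```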

“The change in the Earth’s magnetic field over time is a fairly noticeable phenomenon. Every decade or so, compass needles in Africa are shifting by a degree, and the magnetic field overall on planet Earth is about 10% weaker than it was in the 19th century.”

Below I have added some links to topics and people covered/mentioned in the book. Many of the links below have likely also been included in some of the other posts about books from the A Brief Introduction OUP physics series which I’ve posted this year – the main point of adding these links is to give some idea what kind of stuff’s covered in the book:

Magnetism.
Magnetite.
Lodestone.
William Gilbert/De Magnete.
Alessandro Volta.
Ampère’s circuital law.
Charles-Augustin de Coulomb.
Hans Christian Ørsted.
Leyden jar/voltaic cell/battery (electricity).
Solenoid.
Electromagnet.
Homopolar motor.
Michael Faraday.
Electromagnetic induction.
Dynamo.
Zeeman effect.
Alternating current/Direct current.
Nikola Tesla.
Thomas Edison.
Force field (physics).
Ole Rømer.
Centimetre–gram–second system of units.
James Clerk Maxwell.
Maxwell’s equations.
Permittivity.
Permeability (electromagnetism).
Gauss’ law.
Michelson–Morley experiment.
Special relativity.
Drift velocity.
Curie’s law.
Curie temperature.
Andre Geim.
Diamagnetism.
Paramagnetism.
Exchange interaction.
Magnetic domain.
Domain wall (magnetism).
Stern–Gerlach experiment.
Dirac equation.
Giant magnetoresistance.
Spin valve.
Racetrack memory.
Perpendicular recording.
Bubble memory (“an example of a brilliant idea which never quite made it”, as the author puts it).
Single-molecule magnet.
Spintronics.
Earth’s magnetic field.
Aurora.
Van Allen radiation belt.
South Atlantic Anomaly.
Geomagnetic storm.
Geomagnetic reversal.
Magnetar.
ITER (‘International Thermonuclear Experimental Reactor’).
Antiferromagnetism.
Spin glass.
Quantum spin liquid.
Multiferroics.
Spin ice.
Magnetic monopole.
Ice rules.

August 28, 2017 Posted by | Books, Computer science, Engineering, Geology, Physics | Leave a comment