The Computer

Below some quotes and links related to the book’s coverage:

“At the heart of every computer is one or more hardware units known as processors. A processor controls what the computer does. For example, it will process what you type in on your computer’s keyboard, display results on its screen, fetch web pages from the Internet, and carry out calculations such as adding two numbers together. It does this by ‘executing’ a computer program that details what the computer should do […] Data and programs are stored in two storage areas. The first is known as main memory and has the property that whatever is stored there can be retrieved very quickly. Main memory is used for transient data – for example, the result of a calculation which is an intermediate result in a much bigger calculation – and is also used to store computer programs while they are being executed. Data in main memory is transient – it will disappear when the computer is switched off. Hard disk memory, also known as file storage or backing storage, contains data that are required over a period of time. Typical entities that are stored in this memory include files of numerical data, word-processed documents, and spreadsheet tables. Computer programs are also stored here while they are not being executed. […] There are a number of differences between main memory and hard disk memory. The first is the retrieval time. With main memory, an item of data can be retrieved by the processor in fractions of microseconds. With file-based memory, the retrieval time is much greater: of the order of milliseconds. The reason for this is that main memory is silicon-based […] hard disk memory is usually mechanical and is stored on the metallic surface of a disk, with a mechanical arm retrieving the data. […] main memory is more expensive than file-based memory”.
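
As a rough illustration of the retrieval-time gap described above, here is a small Python sketch (my own, not from the book) that times a lookup in main memory against a read from a file on disk. The exact numbers vary by machine, and the operating system's own caching can hide much of the true disk latency, but the in-memory access will typically be orders of magnitude faster:

import os
import time

data = {i: i * i for i in range(1_000_000)}   # stands in for data held in main memory

with open("scratch.dat", "wb") as fh:         # stands in for file/backing storage
    fh.write(os.urandom(8_000_000))

t0 = time.perf_counter()
_ = data[123_456]                             # retrieval from RAM: fractions of a microsecond
t1 = time.perf_counter()

with open("scratch.dat", "rb") as fh:
    fh.seek(987_654)                          # retrieval from disk: a system call plus I/O,
    _ = fh.read(8)                            # typically thousands of times slower
t2 = time.perf_counter()

print(f"memory lookup: {(t1 - t0) * 1e6:.1f} microseconds")
print(f"file read:     {(t2 - t1) * 1e6:.1f} microseconds")
os.remove("scratch.dat")                      # tidy up the scratch file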

“The Internet is a network of computers – strictly, it is a network that joins up a number of networks. It carries out a number of functions. First, it transfers data from one computer to another computer […] The second function of the Internet is to enforce reliability. That is, to ensure that when errors occur then some form of recovery process happens; for example, if an intermediate computer fails then the software of the Internet will discover this and resend any malfunctioning data via other computers. A major component of the Internet is the World Wide Web […] The web […] uses the data-transmission facilities of the Internet in a specific way: to store and distribute web pages. The web consists of a number of computers known as web servers and a very large number of computers known as clients (your home PC is a client). Web servers are usually computers that are more powerful than the PCs that are normally found in homes or those used as office computers. They will be maintained by some enterprise and will contain individual web pages relevant to that enterprise; for example, an online book store such as Amazon will maintain web pages for each item it sells. The program that allows users to access the web is known as a browser. […] A part of the Internet known as the Domain Name System (usually referred to as DNS) will figure out where the page is held and route the request to the web server holding the page. The web server will then send the page back to your browser which will then display it on your computer. Whenever you want another page you would normally click on a link displayed on that page and the process is repeated. Conceptually, what happens is simple. However, it hides a huge amount of detail involving the web discovering where pages are stored, the pages being located, their being sent, the browser reading the pages and interpreting how they should be displayed, and eventually the browser displaying the pages. […] without one particular hardware advance the Internet would be a shadow of itself: this is broadband. This technology has provided communication speeds that we could not have dreamed of 15 years ago. […] Typical broadband speeds range from one megabit per second to 24 megabits per second, the lower rate being about 20 times faster than dial-up rates.”
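
The lookup-then-fetch sequence described above can be mimicked in a few lines of Python (a minimal sketch of my own; it assumes network access and uses example.com as a stand-in for a real site): the name is first resolved via DNS to the address of a web server, and the page is then requested from that server, much as a browser would before rendering it.

import socket
import urllib.request

host = "example.com"
ip = socket.gethostbyname(host)                    # DNS: domain name -> web server address
print(f"{host} resolves to {ip}")

with urllib.request.urlopen(f"http://{host}/") as response:
    page = response.read()                         # the web server sends the page back
print(f"received {len(page)} bytes of HTML for the browser to interpret and display")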

“A major idea I hope to convey […] is that regarding the computer as just the box that sits on your desk, or as a chunk of silicon that is embedded within some device such as a microwave, is only a partial view. The Internet – or rather broadband access to the Internet – has created a gigantic computer that has unlimited access to both computer power and storage to the point where even applications that we all thought would never migrate from the personal computer are doing just that. […] the Internet functions as a series of computers – or more accurately computer processors – carrying out some task […]. Conceptually, there is little difference between these computers and [a] supercomputer, the only difference is in the details: for a supercomputer the communication between processors is via some internal electronic circuit, while for a collection of computers working together on the Internet the communication is via external circuits used for that network.”

“A computer will consist of a number of electronic circuits. The most important is the processor: this carries out the instructions that are contained in a computer program. […] There are a number of individual circuit elements that make up the computer. Thousands of these elements are combined together to construct the computer processor and other circuits. One basic element is known as an And gate […]. This is an electrical circuit that has two binary inputs A and B and a single binary output X. The output will be one if both the inputs are one and zero otherwise. […] the And gate is only one example – when some action is required, for example adding two numbers together, [the different circuits] interact with each other to carry out that action. In the case of addition, the two binary numbers are processed bit by bit to carry out the addition. […] Whatever actions are taken by a program […] the cycle is the same; an instruction is read into the processor, the processor decodes the instruction, acts on it, and then brings in the next instruction. So, at the heart of a computer is a series of circuits and storage elements that fetch and execute instructions and store data and programs.”
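
As a small illustration of how simple gates combine to add numbers bit by bit, here is a Python sketch (my own, not the book's): an And gate and a couple of other basic gates are wired into a full adder, and the adder is applied to two binary numbers one bit at a time.

def AND(a, b): return a & b     # output is 1 only if both inputs are 1
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry; return (sum bit, outgoing carry)."""
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return sum_bit, carry_out

def add(x_bits, y_bits):
    """Bit-by-bit addition of two equal-length bit lists, least significant bit first."""
    carry, result = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

print(add([1, 0, 1], [1, 1, 0]))   # 5 + 3, least significant bit first -> [0, 0, 0, 1], i.e. 8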

“In essence, a hard disk unit consists of one or more circular metallic disks which can be magnetized. Each disk has a very large number of magnetizable areas which can either represent zero or one depending on the magnetization. The disks are rotated at speed. The unit also contains an arm or a number of arms that can move laterally and which can sense the magnetic patterns on the disk. […] When a processor requires some data that is stored on a hard disk […] then it issues an instruction to find the file. The operating system – the software that controls the computer – will know where the file starts and ends and will send a message to the hard disk to read the data. The arm will move laterally until it is over the start position of the file and when the revolving disk passes under the arm the magnetic pattern that represents the data held in the file is read by it. Accessing data on a hard disk is a mechanical process and usually takes a small number of milliseconds to carry out. Compared with the electronic speeds of the computer itself – normally measured in fractions of a microsecond – this is incredibly slow. Because disk access is slow, systems designers try to minimize the amount of access required to files. One technique that has been particularly effective is known as caching. It is, for example, used in web servers. Such servers store pages that are sent to browsers for display. […] Caching involves placing the frequently accessed pages in some fast storage medium such as flash memory and keeping the remainder on a hard disk.”
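
The caching idea is easy to sketch in code. In the toy Python example below (my own; the 5-millisecond sleep simply stands in for a mechanical disk access), frequently requested pages are kept in a fast in-memory store and only cache misses have to go to the slow disk.

import time

def read_page_from_disk(path):
    time.sleep(0.005)                        # pretend mechanical disk access: ~5 milliseconds
    return f"<html>contents of {path}</html>"

cache = {}                                   # fast storage for frequently accessed pages

def get_page(path):
    if path in cache:                        # cache hit: served without touching the disk
        return cache[path]
    page = read_page_from_disk(path)         # cache miss: pay the slow disk access once
    cache[path] = page
    return page

get_page("/index.html")                      # first request: ~5 ms
get_page("/index.html")                      # repeat request: microseconds, straight from the cache

A real web-server cache would also need a policy for evicting pages once its fast storage fills up, but the basic trade-off is the one sketched here.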

“The first computers had a single hardware processor that executed individual instructions. It was not too long before researchers started thinking about computers that had more than one processor. The simple theory here was that if a computer had n processors then it would be n times faster. […] it is worth debunking this notion. If you look at many classes of problems […], you see that a strictly linear increase in performance is not achieved. If a problem that is solved by a single computer is solved in 20 minutes, then you will find a dual processor computer solving it in perhaps 11 minutes. A 3-processor computer may solve it in 9 minutes, and a 4-processor computer in 8 minutes. There is a law of diminishing returns; often, there comes a point when adding a processor slows down the computation. What happens is that each processor needs to communicate with the others, for example passing on the result of a computation; this communicational overhead becomes bigger and bigger as you add processors to the point when it dominates the amount of useful work that is done. The sort of problems where they are effective is where a problem can be split up into sub-problems that can be solved almost independently by each processor with little communication.”
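
The diminishing-returns argument can be made concrete with a toy model (my own, not the book's; the numbers are purely illustrative): suppose the useful work divides evenly across n processors, but each extra processor adds a fixed amount of communication overhead.

work = 20.0         # minutes the job takes on a single processor
overhead = 0.2      # extra minutes of inter-processor communication per processor added

def total_time(n):
    return work / n + overhead * n

for n in (1, 2, 4, 8, 10, 16, 32):
    print(f"{n:2d} processors: {total_time(n):5.2f} minutes")

# The speed-up is clearly less than n-fold, is best at around n = (work / overhead) ** 0.5 = 10
# processors in this model, and beyond that point adding processors makes the job slower.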

“Symmetric encryption methods are very efficient and can be used to scramble large files or long messages being sent from one computer to another. Unfortunately, symmetric techniques suffer from a major problem: if there are a number of individuals involved in a data transfer or in reading a file, each has to know the same key. This makes it a security nightmare. […] public key cryptography removed a major problem associated with symmetric cryptography: that of a large number of keys in existence some of which may be stored in an insecure way. However, a major problem with asymmetric cryptography is the fact that it is very inefficient (about 10,000 times slower than symmetric cryptography): while it can be used for short messages such as email texts, it is far too inefficient for sending gigabytes of data. However, […] when it is combined with symmetric cryptography, asymmetric cryptography provides very strong security. […] One very popular security scheme is known as the Secure Sockets Layer – normally shortened to SSL. It is based on the concept of a one-time pad. […] SSL uses public key cryptography to communicate the randomly generated key between the sender and receiver of a message. This key is only used once for the data interchange that occurs and, hence, is an electronic analogue of a one-time pad. When each of the parties to the interchange has received the key, they encrypt and decrypt the data employing symmetric cryptography, with the generated key carrying out these processes. […] There is an impression amongst the public that the main threats to security and to privacy arise from technological attack. However, the threat from more mundane sources is equally high. Data thefts, damage to software and hardware, and unauthorized access to computer systems can occur in a variety of non-technical ways: by someone finding computer printouts in a waste bin; by a window cleaner using a mobile phone camera to take a picture of a display containing sensitive information; by an office cleaner stealing documents from a desk; by a visitor to a company noting down a password written on a white board; by a disgruntled employee putting a hammer through the main server and the backup server of a company; or by someone dropping an unencrypted memory stick in the street.”
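
The symmetric-plus-asymmetric combination described above can be sketched with the third-party pyca/cryptography package (an assumption on my part; the book names no particular library). A fresh symmetric key encrypts the bulk data efficiently, and the expensive asymmetric step is used only to move that small key to the receiver, roughly mirroring what SSL does with its one-time session key.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver's long-term asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: generate a one-time symmetric key and encrypt the (potentially huge) data with it.
session_key = Fernet.generate_key()
bulk_ciphertext = Fernet(session_key).encrypt(b"a very long message ... " * 10_000)

# Sender: use the receiver's public key only to protect the short session key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: recover the session key with the private key, then decrypt the bulk data symmetrically.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(bulk_ciphertext)
assert plaintext.startswith(b"a very long message")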

“The basic architecture of the computer has remained unchanged for six decades since IBM developed the first mainframe computers. It consists of a processor that reads software instructions one by one and executes them. Each instruction will result in data being processed, for example by being added together; and data being stored in the main memory of the computer or being stored on some file-storage medium; or being sent to the Internet or to another computer. This is what is known as the von Neumann architecture; it was named after John von Neumann […]. His key idea, which still holds sway today, is that in a computer the data and the program are both stored in the computer’s memory in the same address space. There have been few challenges to the von Neumann architecture.”
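
A toy interpreter makes the stored-program idea concrete (my own sketch, not from the book): instructions and data sit side by side in one memory, and the processor simply fetches whatever the program counter points at, decodes it, executes it, and moves on.

memory = [
    ("LOAD", 8),     # address 0: copy the value at address 8 into the accumulator
    ("ADD", 9),      # address 1: add the value at address 9
    ("STORE", 10),   # address 2: write the accumulator back to address 10
    ("HALT", None),  # address 3: stop
    None, None, None, None,
    2, 3, 0,         # addresses 8-10: data, living in the same address space as the program
]

accumulator, pc = 0, 0
while True:
    op, addr = memory[pc]       # fetch the next instruction and decode it
    pc += 1
    if op == "LOAD":
        accumulator = memory[addr]
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "HALT":
        break

print(memory[10])               # -> 5: the result of 2 + 3, stored back into memory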

“[A] ‘neural network’ […] consists of an input layer that can sense various signals from some environment […]. In the middle (hidden layer), there are a large number of processing elements (neurones) which are arranged into sub-layers. Finally, there is an output layer which provides a result […]. It is in the middle layer that the work is done in a neural computer. What happens is that the network is trained by giving it examples of the trend or item that is to be recognized. What the training does is to strengthen or weaken the connections between the processing elements in the middle layer until, when combined, they produce a strong signal when a new case is presented to them that matches the previously trained examples and a weak signal when an item that does not match the examples is encountered. Neural networks have been implemented in hardware, but most of the implementations have been via software where the middle layer has been implemented in chunks of code that carry out the learning process. […] although the initial impetus was to use ideas in neurobiology to develop neural architectures based on a consideration of processes in the brain, there is little resemblance between the internal data and software now used in commercial implementations and the human brain.”
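
Here is a deliberately tiny Python sketch of the training idea (my own; real networks have many layers and vastly more connections): a single processing element repeatedly sees labelled examples, and its connection weights are strengthened or weakened after each mistake until it gives a strong signal for the patterns it was trained to recognize.

def output(weights, bias, inputs):
    activation = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > 0 else 0      # strong vs weak signal

# Training examples: input pattern -> desired signal (1 = matches, 0 = does not match).
examples = [([1, 1, 0], 1), ([1, 0, 1], 1), ([0, 1, 1], 0), ([0, 0, 1], 0)]

weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(20):                         # present the training examples repeatedly
    for inputs, target in examples:
        error = target - output(weights, bias, inputs)
        # Strengthen or weaken each connection in proportion to its input and the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([output(weights, bias, x) for x, _ in examples])   # -> [1, 1, 0, 0] once trained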


Byte. Bit.
Moore’s law.
Computer program.
Programming language. High-level programming language. Low-level programming language.
Zombie (computer science).
Cloud computing.
Instructions per second.
Fetch-execute cycle.
Grace Hopper. Software Bug.
Transistor. Integrated circuit. Very-large-scale integration. Wafer (electronics). Photomask.
Read-only memory (ROM). Read-write memory (RWM). Bus (computing). Address bus. Programmable read-only memory (PROM). Erasable programmable read-only memory (EPROM). Electrically erasable programmable read-only memory (EEPROM). Flash memory. Dynamic random-access memory (DRAM). Static random-access memory (static RAM/SRAM).
Hard disc.
Wireless communication.
Radio-frequency identification (RFID).
NP-hardness. Set partition problem. Bin packing problem.
Cray X-MP. Beowulf cluster.
Vector processor.
Denial-of-service attack. Melissa (computer virus). Malware. Firewall (computing). Logic bomb. Fork bomb/rabbit virus. Cryptography. Caesar cipher. Social engineering (information security).
Application programming interface.
Data mining. Machine translation. Machine learning.
Functional programming.
Quantum computing.


March 19, 2018 Posted by | Books, Computer science, Engineering

Marine Biology (II)

Below some observations and links related to the second half of the book’s coverage:

“[C]oral reefs occupy a very small proportion of the planet’s surface – about 284,000 square kilometres – roughly equivalent to the size of Italy [yet they] are home to an incredible diversity of marine organisms – about a quarter of all marine species […]. Coral reef systems provide food for hundreds of millions of people, with about 10 per cent of all fish consumed globally caught on coral reefs. […] Reef-building corals thrive best at sea temperatures above about 23°C and few exist where sea temperatures fall below 18°C for significant periods of time. Thus coral reefs are absent at tropical latitudes where upwelling of cold seawater occurs, such as the west coasts of South America and Africa. […] they are generally restricted to areas of clear water less than about 50 metres deep. Reef-building corals are very intolerant of any freshening of seawater […] and so do not occur in areas exposed to intermittent influxes of freshwater, such as near the mouths of rivers, or in areas where there are high amounts of rainfall run-off. This is why coral reefs are absent along much of the tropical Atlantic coast of South America, which is exposed to freshwater discharge from the Amazon and Orinoco Rivers. Finally, reef-building corals flourish best in areas with moderate to high wave action, which keeps the seawater well aerated […]. Spectacular and productive coral reef systems have developed in those parts of the Global Ocean where this special combination of physical conditions converges […] Each colony consists of thousands of individual animals called polyps […] all reef-building corals have entered into an intimate relationship with plant cells. The tissues lining the inside of the tentacles and stomach cavity of the polyps are packed with photosynthetic cells called zooxanthellae, which are photosynthetic dinoflagellates […] Depending on the species, corals receive anything from about 50 per cent to 95 per cent of their food from their zooxanthellae. […] Healthy coral reefs are very productive marine systems. This is in stark contrast to the nutrient-poor and unproductive tropical waters adjacent to reefs. Coral reefs are, in general, roughly one hundred times more productive than the surrounding environment”.

“Overfishing constitutes a significant threat to coral reefs at this time. About an eighth of the world’s population – roughly 875 million people – live within 100 kilometres of a coral reef. Most of the people live in developing countries and island nations and depend greatly on fish obtained from coral reefs as a food source. […] Some of the fishing practices are very harmful. Once the large fish are removed from a coral reef, it becomes increasingly more difficult to make a living harvesting the more elusive and lower-value smaller fish that remain. Fishers thus resort to more destructive techniques such as dynamiting parts of the reef and scooping up the dead and stunned fish that float to the surface. People capturing fish for the tropical aquarium trade will often poison parts of the reef with sodium cyanide which paralyses the fish, making them easier to catch. An unfortunate side effect of this practice is that the poison kills corals. […] Coral reefs have only been seriously studied since the 1970s, which in most cases was well after human impacts had commenced. This makes it difficult to define what might actually constitute a ‘natural’ and healthy coral reef system, as would have existed prior to extensive human impacts.”

“Mangrove is a collective term applied to a diverse group of trees and shrubs that colonize protected muddy intertidal areas in tropical and subtropical regions, creating mangrove forests […] Mangroves are of great importance from a human perspective. The sheltered waters of a mangrove forest provide important nursery areas for juvenile fish, crabs, and shrimp. Many commercial fisheries depend on the existence of healthy mangrove forests, including blue crab, shrimp, spiny lobster, and mullet fisheries. Mangrove forests also stabilize the foreshore and protect the adjacent land from erosion, particularly from the effects of large storms and tsunamis. They also act as biological filters by removing excess nutrients and trapping sediment from land run-off before it enters the coastal environment, thereby protecting other habitats such as seagrass meadows and coral reefs. […] [However] mangrove forests are disappearing rapidly. In a twenty-year period between 1980 and 2000 the area of mangrove forest globally declined from around 20 million hectares to below 15 million hectares. In some specific regions the rate of mangrove loss is truly alarming. For example, Puerto Rico lost about 89 per cent of its mangrove forests between 1930 and 1985, while the southern part of India lost about 96 per cent of its mangroves between 1911 and 1989.”

“[A]bout 80 per cent of the entire volume of the Global Ocean, or roughly one billion cubic kilometres, consists of seawater with depths greater than 1,000 metres […] The deep ocean is a permanently dark environment devoid of sunlight, the last remnants of which cannot penetrate much beyond 200 metres in most parts of the Global Ocean, and no further than 800 metres or so in even the clearest oceanic waters. The only light present in the deep ocean is of biological origin […] Except in a few very isolated places, the deep ocean is a permanently cold environment, with sea temperatures ranging from about 2° to 4°C. […] Since there is no sunlight, there is no plant life, and thus no primary production of organic matter by photosynthesis. The base of the food chain in the deep ocean consists mostly of a ‘rain’ of small particles of organic material sinking down through the water column from the sunlit surface waters of the ocean. This reasonably constant rain of organic material is supplemented by the bodies of large fish and marine mammals that sink more rapidly to the bottom following death, and which provide sporadic feasts for deep-ocean bottom dwellers. […] Since food is a scarce commodity for deep-ocean fish, full advantage must be taken of every meal encountered. This has resulted in a number of interesting adaptations. Compared to fish in the shallow ocean, many deep-ocean fish have very large mouths capable of opening very wide, and often equipped with numerous long, sharp, inward-pointing teeth. […] These fish can capture and swallow whole prey larger than themselves so as not to pass up a rare meal simply because of its size. These fish also have greatly extensible stomachs to accommodate such meals.”

“In the pelagic environment of the deep ocean, animals must be able to keep themselves within an appropriate depth range without using up energy in their food-poor habitat. This is often achieved by reducing the overall density of the animal to that of seawater so that it is neutrally buoyant. Thus the tissues and bones of deep-sea fish are often rather soft and watery. […] There is evidence that deep-ocean organisms have developed biochemical adaptations to maintain the functionality of their cell membranes under pressure, including adjusting the kinds of lipid molecules present in membranes to retain membrane fluidity under high pressure. High pressures also affect protein molecules, often preventing them from folding up into the correct shapes for them to function as efficient metabolic enzymes. There is evidence that deep-ocean animals have evolved pressure-resistant variants of common enzymes that mitigate this problem. […] The pattern of species diversity of the deep-ocean benthos appears to differ from that of other marine communities, which are typically dominated by a small number of abundant and highly visible species which overshadow the presence of a large number of rarer and less obvious species which are also present. In the deep-ocean benthic community, in contrast, no one group of species tends to dominate, and the community consists of a high number of different species all occurring in low abundance. […] In general, species diversity increases with the size of a habitat – the larger the area of a habitat, the more species that have developed ways to successfully live in that habitat. Since the deep-ocean bottom is the largest single habitat on the planet, it follows that species diversity would be expected to be high.”

“Seamounts represent a special kind of biological hotspot in the deep ocean. […] In contrast to the surrounding flat, soft-bottomed abyssal plains, seamounts provide a complex rocky platform that supports an abundance of organisms that are distinct from the surrounding deep-ocean benthos. […] Seamounts support a great diversity of fish species […] This [has] triggered the creation of new deep-ocean fisheries focused on seamounts. […] [However these species are generally] very slow-growing and long-lived and mature at a late age, and thus have a low reproductive potential. […] Seamount fisheries have often been described as mining operations rather than sustainable fisheries. They typically collapse within a few years of the start of fishing and the trawlers then move on to other unexplored seamounts to maintain the fishery. The recovery of localized fisheries will inevitably be very slow, if achievable at all, because of the low reproductive potential of these deep-ocean fish species. […] Comparisons of ‘fished’ and ‘unfished’ seamounts have clearly shown the extent of habitat damage and loss of species diversity brought about by trawl fishing, with the dense coral habitats reduced to rubble over much of the area investigated. […] Unfortunately, most seamounts exist in areas beyond national jurisdiction, which makes it very difficult to regulate fishing activities on them, although some efforts are underway to establish international treaties to better manage and protect seamount ecosystems.”

“Hydrothermal vents are unstable and ephemeral features of the deep ocean. […] The lifespan of a typical vent is likely in the order of tens of years. Thus the rich communities surrounding vents have a very limited lifespan. Since many vent animals can live only near vents, and the distance between vent systems can be hundreds to thousands of kilometres, it is a puzzle as to how vent animals escape a dying vent and colonize other distant vents or newly created vents. […] Hydrothermal vents are [however] not the only source of chemical-laden fluids supporting unique chemosynthetic-based communities in the deep ocean. Hydrogen sulphide and methane also ooze from the ocean bottom at some locations at temperatures similar to the surrounding seawater. These so-called ‘cold seeps’ are often found along continental margins […] The communities associated with cold seeps are similar to hydrothermal vent communities […] Cold seeps appear to be more permanent sources of fluid compared to the ephemeral nature of hot water vents.”

“Seepage of crude oil into the marine environment occurs naturally from oil-containing geological formations below the seabed. It is estimated that around 600,000 tonnes of crude oil seeps into the marine environment each year, which represents almost half of all the crude oil entering the oceans. […] The human activities associated with exploring for and producing oil result in the release on average of an estimated 38,000 tonnes of crude oil into the oceans each year, which is about 6 per cent of the total anthropogenic input of oil into the oceans worldwide. Although small in comparison to natural seepage, crude oil pollution from this source can cause serious damage to coastal ecosystems because it is released near the coast and sometimes in very large, concentrated amounts. […] The transport of oil and oil products around the globe in tankers results in the release of about 150,000 tonnes of oil worldwide each year on average, or about 22 per cent of the total anthropogenic input. […] About 480,000 tonnes of oil make their way into the marine environment each year worldwide from leakage associated with the consumption of oil-derived products in cars and trucks, and to a lesser extent in boats. Oil lost from the operation of cars and trucks collects on paved urban areas from where it is washed off into streams and rivers, and from there into the oceans. Surprisingly, this represents the most significant source of human-derived oil pollution into the marine environment – about 72 per cent of the total. Because it is a very diffuse source of pollution, it is the most difficult to control.”
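
A quick check of the figures quoted above: the three human-related sources (38,000 + 150,000 + 480,000 tonnes) sum to roughly 670,000 tonnes of oil per year, which squares with the stated shares of about 6, 22, and 72 per cent of the anthropogenic total; adding the ~600,000 tonnes of natural seepage gives a global input of around 1.3 million tonnes per year, of which natural seepage is indeed almost half.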

“Today it has been estimated that virtually all of the marine food resources in the Mediterranean sea have been reduced to less than 50 per cent of their original abundance […] The greatest impact has been on the larger predatory fish, which were the first to be targeted by fishers. […] It is estimated that, collectively, the European fish stocks of today are just one-tenth of their size in 1900. […] In 1950 the total global catch of marine seafood was just less than twenty million tonnes fresh weight. This increased steadily and rapidly until by the late 1980s more than eighty million tonnes were being taken each year […] Starting in the early 1990s, however, yields began to show signs of levelling off. […] By far the most heavily exploited marine fishery in the world is the Peruvian anchoveta (Engraulis ringens) fishery, which can account for 10 per cent or more of the global marine catch of seafood in any particular year. […] The anchoveta is a very oily fish, which makes it less desirable for direct consumption by humans. However, the high oil content makes it ideal for the production of fish meal and fish oil […] the demand for fish meal and fish oil is huge and about a third of the entire global catch of fish is converted into these products rather than consumed directly by humans. Feeding so much fish protein to livestock comes with a considerable loss of potential food energy (around 25 per cent) compared to if it was eaten directly by humans. This could be viewed as a potential waste of available energy for a rapidly growing human population […] around 90 per cent of the fish used to produce fish meal and oil is presently unpalatable to most people and thus unmarketable in large quantities as a human food”.

“On heavily fished areas of the continental shelves, the same parts of the sea floor can be repeatedly trawled many times per year. Such intensive bottom trawling causes great cumulative damage to seabed habitats. The trawls scrape and pulverize rich and complex bottom habitats built up over centuries by living organisms such as tube worms, cold-water corals, and oysters. These habitats are eventually reduced to uniform stretches of rubble and sand. For all intents and purposes these areas are permanently altered and become occupied by a much changed and much less rich community adapted to frequent disturbance.”

“The eighty million tonnes or so of marine seafood caught each year globally equates to about eleven kilograms of wild-caught marine seafood per person on the planet. […] What is perfectly clear […] on the basis of theory backed up by real data on marine fish catches, is that marine fisheries are now fully exploited and that there is little if any headroom for increasing the amount of wild-caught fish humans can extract from the oceans to feed a burgeoning human population. […] This conclusion is solidly supported by the increasingly precarious state of global marine fishery resources. The most recent information from the Food and Agriculture Organization of the United Nations (The State of World Fisheries and Aquaculture 2010) shows that over half (53 per cent) of all fish stocks are fully exploited – their current catches are at or close to their maximum sustainable levels of production and there is no scope for further expansion. Another 32 per cent are overexploited and in decline. Of the remaining 15 per cent of stocks, 12 per cent are considered moderately exploited and only 3 per cent underexploited. […] in the mid 1970s 40 per cent of all fish stocks were in [the moderately exploited or unexploited] category as opposed to around 15 per cent now. […] the real question is not so much whether we can get more fish from the sea but whether we can sustain the amount of fish we are harvesting at present”.


Atoll. Fringing reef. Barrier reef.
Broadcast spawning.
Acanthaster planci.
Coral bleaching. Ocean acidification.
Avicennia germinans. Pneumatophores. Lenticel.
Photophore. Lanternfish. Anglerfish. Black swallower.
Deep scattering layer. Taylor column.
Hydrothermal vent. Black smokers and white smokers. Chemosynthesis. Siboglinidae.
Intertidal zone. Tides. Tidal range.
Barnacle. Mussel.
Clupeidae. Gadidae. Scombridae.

March 16, 2018 Posted by | Biology, Books, Chemistry, Ecology, Evolutionary biology, Geology


Almost all the words included in this post are words which I encountered while reading the books The Mauritius Command, Desolation Island and You Don’t Have to Be Evil to Work Here, But it Helps.

Aleatory. Tenesmus. Celerity. Pelisse. Collop. Clem. Aviso. Crapulous. Farinaceous. Parturient. Tormina. Scend. Fascine. Distich. Appetency/appetence. Calipash. Tergiversation. Polypody. Prodigious. Teredo.

Rapacity. Cappabar. Chronometer. Figgy-dowdy. Chamade. Hauteur. Futtock. Obnubilate. Offing. Cleat. Trephine. Promulgate. Hieratic. Cockle. Froward. Aponeurosis. Lixiviate. Cupellation. Plaice. Sharper.

Morosity. Mephitic. Glaucous. Libidinous. Grist. Tilbury. Surplice. Megrim. Cumbrous. Pule. Pintle. Fifer. Roadstead. Quadrumane. Peacoat. Burgher. Cuneate. Tundish. Bung. Fother.

Dégagé. Esculent. Genuflect. Lictor. Drogue. Oakum. Spume. Gudgeon. Firk. Mezzanine. Faff. Manky. Titchy. Sprocket. Conveyancing. Apportionment. Plonker. Flammulated. Cataract. Demersal.


March 15, 2018 Posted by | Books, Language

Marine Biology (I)

This book was ‘okay’.

Some quotes and links related to the first half of the book below.


“The Global Ocean has come to be divided into five regional oceans – the Pacific, Atlantic, Indian, Arctic, and Southern Oceans […] These oceans are large, seawater-filled basins that share characteristic structural features […] The edge of each basin consists of a shallow, gently sloping extension of the adjacent continental land mass and is termed the continental shelf or continental margin. Continental shelves typically extend off-shore to depths of a couple of hundred metres and vary from several kilometres to hundreds of kilometres in width. […] At the outer edge of the continental shelf, the seafloor drops off abruptly and steeply to form the continental slope, which extends down to depths of 2–3 kilometres. The continental slope then flattens out and gives way to a vast expanse of flat, soft, ocean bottom — the abyssal plain — which extends over depths of about 3–5 kilometres and accounts for about 76 per cent of the Global Ocean floor. The abyssal plains are transected by extensive mid-ocean ridges—underwater mountain chains […]. Mid-ocean ridges form a continuous chain of mountains that extend linearly for 65,000 kilometres across the floor of the Global Ocean basins […]. In some places along the edges of the abyssal plains the ocean bottom is cut by narrow, oceanic trenches or canyons which plunge to extraordinary depths — 3–4 kilometres below the surrounding seafloor — and are thousands of kilometres long but only tens of kilometres wide. […] Seamounts are another distinctive and dramatic feature of ocean basins. Seamounts are typically extinct volcanoes that rise 1,000 or more metres above the surrounding ocean but do not reach the surface of the ocean. […] Seamounts generally occur in chains or clusters in association with mid-ocean ridges […] The Global Ocean contains an estimated 100,000 or so seamounts that rise more than 1,000 metres above the surrounding deep-ocean floor. […] on a planetary scale, the surface of the Global Ocean is moving in a series of enormous, roughly circular, wind-driven current systems, or gyres […] These gyres transport enormous volumes of water and heat energy from one part of an ocean basin to another”

“We now know that the oceans are literally teeming with life. Viruses […] are astoundingly abundant – there are around ten million viruses per millilitre of seawater. Bacteria and other microorganisms occur at concentrations of around 1 million per millilitre”

“The water in the oceans is in the form of seawater, a dilute brew of dissolved ions, or salts […] Chloride and sodium ions are the predominant salts in seawater, along with smaller amounts of other ions such as sulphate, magnesium, calcium, and potassium […] The total amount of dissolved salts in seawater is termed its salinity. Seawater typically has a salinity of roughly 35 – equivalent to about 35 grams of salts in one kilogram of seawater. […] Most marine organisms are exposed to seawater that, compared to the temperature extremes characteristic of terrestrial environments, ranges within a reasonably moderate range. Surface waters in tropical parts of ocean basins are consistently warm throughout the year, ranging from about 20–27°C […]. On the other hand, surface seawater in polar parts of ocean basins can get as cold as −1.9°C. Sea temperatures typically decrease with depth, but not in a uniform fashion. A distinct zone of rapid temperature transition is often present that separates warm seawater at the surface from cooler deeper seawater. This zone is called the thermocline layer […]. In tropical ocean waters the thermocline layer is a strong, well-defined and permanent feature. It may start at around 100 metres and be a hundred or so metres thick. Sea temperatures above the thermocline can be a tropical 25°C or more, but only 6–7°C just below the thermocline. From there the temperature drops very gradually with increasing depth. Thermoclines in temperate ocean regions are a more seasonal phenomenon, becoming well established in the summer as the sun heats up the surface waters, and then breaking down in the autumn and winter. Thermoclines are generally absent in the polar regions of the Global Ocean. […] As a rule of thumb, in the clearest ocean waters some light will penetrate to depths of 150-200 metres, with red light being absorbed within the first few metres and green and blue light penetrating the deepest. At certain times of the year in temperate coastal seas light may penetrate only a few tens of metres […] In the oceans, pressure increases by an additional atmosphere every 10 metres […] Thus, an organism living at a depth of 100 metres on the continental shelf experiences a pressure ten times greater than an organism living at sea level; a creature living at 5 kilometres depth on an abyssal plain experiences pressures some 500 times greater than at the surface”.

“With very few exceptions, dissolved oxygen is reasonably abundant throughout all parts of the Global Ocean. However, the amount of oxygen in seawater is much less than in air — seawater at 20°C contains about 5.4 millilitres of oxygen per litre of seawater, whereas air at this temperature contains about 210 millilitres of oxygen per litre. The colder the seawater, the more oxygen it contains […]. Oxygen is not distributed evenly with depth in the oceans. Oxygen levels are typically high in a thin surface layer 10–20 metres deep. Here oxygen from the atmosphere can freely diffuse into the seawater […] Oxygen concentration then decreases rapidly with depth and reaches very low levels, sometimes close to zero, at depths of around 200–1,000 metres. This region is referred to as the oxygen minimum zone […] This zone is created by the low rates of replenishment of oxygen diffusing down from the surface layer of the ocean, combined with the high rates of depletion of oxygen by decaying particulate organic matter that sinks from the surface and accumulates at these depths. Beneath the oxygen minimum zone, oxygen content increases again with depth such that the deep oceans contain quite high levels of oxygen, though not generally as high as in the surface layer. […] In contrast to oxygen, carbon dioxide (CO₂) dissolves readily in seawater. Some of it is then converted into carbonic acid (H₂CO₃), bicarbonate ion (HCO₃⁻), and carbonate ion (CO₃²⁻), with all four compounds existing in equilibrium with one another […] The pH of seawater is inversely proportional to the amount of carbon dioxide dissolved in it. […] the warmer the seawater, the less carbon dioxide it can absorb. […] Seawater is naturally slightly alkaline, with a pH ranging from about 7.5 to 8.5, and marine organisms have become well adapted to life within this stable pH range. […] In the oceans, carbon is never a limiting factor to marine plant photosynthesis and growth, as it is for terrestrial plants.”
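
For reference, the equilibrium chain described in the passage can be written as CO₂ + H₂O ⇌ H₂CO₃ ⇌ H⁺ + HCO₃⁻ ⇌ 2H⁺ + CO₃²⁻; dissolving more CO₂ pushes the chain to the right and releases hydrogen ions, which is why added carbon dioxide lowers the pH of seawater.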

“Since the beginning of the industrial revolution, the average pH of the Global Ocean has dropped by about 0.1 pH unit, making it 30 per cent more acidic than in pre-industrial times. […] As a result, more and more parts of the oceans are falling below a pH of 7.5 for longer periods of time. This trend, termed ocean acidification, is having profound impacts on marine organisms and the overall functioning of the marine ecosystem. For example, many types of marine organisms such as corals, clams, oysters, sea urchins, and starfish manufacture external shells or internal skeletons containing calcium carbonate. When the pH of seawater drops below about 7.5, calcium carbonate starts to dissolve, and thus the shells and skeletons of these organisms begin to erode and weaken, with obvious impacts on the health of the animal. Also, these organisms produce their calcium carbonate structures by combining calcium dissolved in seawater with carbonate ion. As the pH decreases, more of the carbonate ions in seawater become bound up with the increasing numbers of hydrogen ions, making fewer carbonate ions available to the organisms for shell-forming purposes. It thus becomes more difficult for these organisms to secrete their calcium carbonate structures and grow.”
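
As a quick check of that figure: pH is the negative base-10 logarithm of the hydrogen-ion concentration, so a drop of about 0.1 pH units corresponds to a hydrogen-ion concentration higher by a factor of 10^0.1 ≈ 1.26, which is where the roughly 30 per cent increase in acidity quoted above comes from.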

“Roughly half of the planet’s primary production — the synthesis of organic compounds by chlorophyll-bearing organisms using energy from the sun — is produced within the Global Ocean. On land the primary producers are large, obvious, and comparatively long-lived — the trees, shrubs, and grasses characteristic of the terrestrial landscape. The situation is quite different in the oceans where, for the most part, the primary producers are minute, short-lived microorganisms suspended in the sunlit surface layer of the oceans. These energy-fixing microorganisms — the oceans’ invisible forest — are responsible for almost all of the primary production in the oceans. […] A large amount, perhaps 30-50 per cent, of marine primary production is produced by bacterioplankton comprising tiny marine photosynthetic bacteria ranging from about 0.5 to 2 μm in size. […] light availability and the strength of vertical mixing are important factors limiting primary production in the oceans. Nutrient availability is the other main factor limiting the growth of primary producers. One important nutrient is nitrogen […] nitrogen is a key component of amino acids, which are the building blocks of proteins. […] Photosynthetic marine organisms also need phosphorus, which is a requirement for many important biological functions, including the synthesis of nucleic acids, a key component of DNA. Phosphorus in the oceans comes naturally from the erosion of rocks and soils on land, and is transported into the oceans by rivers, much of it in the form of dissolved phosphate (PO₄³⁻), which can be readily absorbed by marine photosynthetic organisms. […] Inorganic nitrogen and phosphorus compounds are abundant in deep-ocean waters. […] In practice, inorganic nitrogen and phosphorus compounds are not used up at exactly the same rate. Thus one will be depleted before the other and becomes the limiting nutrient at the time, preventing further photosynthesis and growth of marine primary producers until it is replenished. Nitrogen is often considered to be the rate-limiting nutrient in most oceanic environments, particularly in the open ocean. However, in coastal waters phosphorus is often the rate-limiting nutrient.”

“The overall pattern of primary production in the Global Ocean depends greatly on latitude […] In polar oceans primary production is a boom-and-bust affair driven by light availability. Here the oceans are well mixed throughout the year so nutrients are rarely limiting. However, during the polar winter there is no light, and thus no primary production is taking place. […] Although limited to a short seasonal pulse, the total amount of primary production can be quite high, especially in the polar Southern Ocean […] In tropical open oceans, primary production occurs at a low level throughout the year. Here light is never limiting but the permanent tropical thermocline prevents the mixing of deep, nutrient-rich seawater with the surface waters. […] open-ocean tropical waters are often referred to as ‘marine deserts’, with productivity […] comparable to a terrestrial desert. In temperate open-ocean regions, primary productivity is linked closely to seasonal events. […] Although occurring in a number of pulses, primary productivity in temperate oceans [is] similar to [that of] a temperate forest or grassland. […] Some of the most productive marine environments occur in coastal ocean above the continental shelves. This is the result of a phenomenon known as coastal upwelling which brings deep, cold, nutrient-rich seawater to the ocean surface, creating ideal conditions for primary productivity […], comparable to a terrestrial rainforest or cultivated farmland. These hotspots of marine productivity are created by wind acting in concert with the planet’s rotation. […] Coastal upwelling can occur when prevailing winds move in a direction roughly parallel to the edge of a continent so as to create offshore Ekman transport. Coastal upwelling is particularly prevalent along the west coasts of continents. […] Since coastal upwelling is dependent on favourable winds, it tends to be a seasonal or intermittent phenomenon and the strength of upwelling will depend on the strength of the winds. […] Important coastal upwelling zones around the world include the coasts of California, Oregon, northwest Africa, and western India in the northern hemisphere; and the coasts of Chile, Peru, and southwest Africa in the southern hemisphere. These regions are amongst the most productive marine ecosystems on the planet.”

“Considering the Global Ocean as a whole, it is estimated that total marine primary production is about 50 billion tonnes of carbon per year. In comparison, the total production of land plants, which can also be estimated using satellite data, is estimated at around 52 billion tonnes per year. […] Primary production in the oceans is spread out over a much larger surface area and so the average productivity per unit of surface area is much smaller than on land. […] the energy of primary production in the oceans flows to higher trophic levels through several different pathways of various lengths […]. Some energy is lost along each step of the pathway — on average the efficiency of energy transfer from one trophic level to the next is about 10 per cent. Hence, shorter pathways are more efficient. Via these pathways, energy ultimately gets transferred to large marine consumers such as large fish, marine mammals, marine turtles, and seabirds.”
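
To put the 10 per cent figure in perspective: if roughly a tenth of the energy survives each transfer, a chain with two steps between primary producers and a top consumer delivers about 0.1² = 1 per cent of the original primary production to that consumer, whereas a five-step chain delivers only 0.1⁵ = 0.001 per cent; this is why the short diatoms-to-krill-to-whales chain described later is so unusually efficient.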

“…it has been estimated that in the 17th century, somewhere between fifty million and a hundred million green turtles inhabited the Caribbean Sea, but numbers are now down to about 300,000. Since their numbers are now so low, their impact on seagrass communities is currently small, but in the past, green turtles would have been extraordinarily abundant grazers of seagrasses. It appears that in the past, green turtles thinned out seagrass beds, thereby reducing direct competition among different species of seagrass and allowing several species of seagrass to coexist. Without green turtles in the system, seagrass beds are generally overgrown monocultures of one dominant species. […] Seagrasses are of considerable importance to human society. […] It is therefore of great concern that seagrass meadows are in serious decline globally. In 2003 it was estimated that 15 per cent of the planet’s existing seagrass beds had disappeared in the preceding ten years. Much of this is the result of increasing levels of coastal development and dredging of the seabed, activities which release excessive amounts of sediment into coastal waters which smother seagrasses. […] The number of marine dead zones in the Global Ocean has roughly doubled every decade since the 1960s”.

“Sea ice is habitable because, unlike solid freshwater ice, it is a very porous substance. As sea ice forms, tiny spaces between the ice crystals become filled with a highly saline brine solution resistant to freezing. Through this process a three-dimensional network of brine channels and spaces, ranging from microscopic to several centimetres in size, is created within the sea ice. These channels are physically connected to the seawater beneath the ice and become colonized by a great variety of marine organisms. A significant amount of the primary production in the Arctic Ocean, perhaps up to 50 per cent in those areas permanently covered by sea ice, takes place in the ice. […] Large numbers of zooplanktonic organisms […] swarm about on the under surface of the ice, grazing on the ice community at the ice-seawater interface, and sheltering in the brine channels. […] These under-ice organisms provide the link to higher trophic levels in the Arctic food web […] They are an important food source for fish such as Arctic cod and glacial cod that graze along the bottom of the ice. These fish are in turn fed on by squid, seals, and whales.”

“[T]he Antarctic marine system consists of a ring of ocean about 10° of latitude wide – roughly 1,000 km. […] The Arctic and Antarctic marine systems can be considered geographic opposites. In contrast to the largely landlocked Arctic Ocean, the Southern Ocean surrounds the Antarctic continental land mass and is in open contact with the Atlantic, Indian, and Pacific Oceans. Whereas the Arctic Ocean is strongly influenced by river inputs, the Antarctic continent has no rivers, and so hard-bottomed seabed is common in the Southern Ocean, and there is no low-saline surface layer, as in the Arctic Ocean. Also, in contrast to the Arctic Ocean with its shallow, broad continental shelves, the Antarctic continental shelf is very narrow and steep. […] Antarctic waters are extremely nutrient rich, fertilized by a permanent upwelling of seawater that has its origins at the other end of the planet. […] This continuous upwelling of cold, nutrient-rich seawater, in combination with the long Antarctic summer day length, creates ideal conditions for phytoplankton growth, which drives the productivity of the Antarctic marine system. As in the Arctic, a well-developed sea-ice community is present. Antarctic ice algae are even more abundant and productive than in the Arctic Ocean because the sea ice is thinner, and there is thus more available light for photosynthesis. […] Antarctica’s most important marine species [is] the Antarctic krill […] Krill are very adept at surviving many months under starvation conditions — in the laboratory they can endure more than 200 days without food. During the winter months they lower their metabolic rate, shrink in body size, and revert back to a juvenile state. When food once again becomes abundant in the spring, they grow rapidly […] As the sea ice breaks up they leave the ice and begin feeding directly on the huge blooms of free-living diatoms […]. With so much food available they grow and reproduce quickly, and start to swarm in large numbers, often at densities in excess of 10,000 individuals per cubic metre — dense enough to colour the seawater a reddish-brown. Krill swarms are patchy and vary greatly in size […] Because the Antarctic marine system covers a large area, krill numbers are enormous, estimated at about 600 billion animals on average, or 500 million tonnes of krill. This makes Antarctic krill one of the most abundant animal species on the planet […] Antarctic krill are the main food source for many of Antarctica’s large marine animals, and a key link in a very short and efficient food chain […]. Krill comprise the staple diet of icefish, squid, baleen whales, leopard seals, fur seals, crabeater seals, penguins, and seabirds, including albatross. Thus, a very simple and efficient three-step food chain is in operation — diatoms eaten by krill in turn eaten by a suite of large consumers — which supports the large numbers of large marine animals living in the Southern Ocean.”


Ocean gyre. North Atlantic Gyre. Thermohaline circulation. North Atlantic Deep Water. Antarctic bottom water.
Cyanobacteria. Diatom. Dinoflagellate. Coccolithophore.
Trophic level.
Nitrogen fixation.
High-nutrient, low-chlorophyll regions.
Light and dark bottle method of measuring primary productivity. Carbon-14 method for estimating primary productivity.
Ekman spiral.
Peruvian anchoveta.
El Niño. El Niño–Southern Oscillation.
Dissolved organic carbon. Particulate organic matter. Microbial loop.
Kelp forest. Macrocystis. Sea urchin. Urchin barren. Sea otter.
Green sea turtle.
Demersal fish.
Eutrophication. Harmful algal bloom.
Comb jelly. Asterias amurensis.
Great Pacific garbage patch.
Eelpout. Sculpin.
Crabeater seal.
Adélie penguin.
Anchor ice mortality.


March 13, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Geology, Zoology


The words included in this post are words which I encountered while reading Patrick O’Brian’s books Post Captain and HMS Surprise. As was also the case the last time I posted one of these posts, I had to include ~100 words, instead of the ~80 I have come to consider ‘the standard’ for these posts, in order to include all the words of interest which I encountered in the books.

Mésalliance. Mansuetude. Wen. Raffish. Stave. Gorse. Lurcher. Improvidence/improvident. Sough. Bowse. Mump. Jib. Tipstaff. Squalid. Strum. Hussif. Dowdy. Cognoscent. Footpad. Quire.

Vacillation. Wantonness. Escritoire/scrutoire. Mantua. Shindy. Vinous. Top-hamper. Holystone. Keelson. Bollard/bitts. Wicket. Paling. Brace (sailing). Coxcomb. Foin. Stern chaser. Galliot. Postillion. Coot. Fanfaronade.

Malversation. Arenaceous. Tope. Shebeen. Lithotomy. Quoin/coign. Mange. Curricle. Cockade. Spout. Bistoury. Embrasure. Acushla. Circumambulation. Glabrous. Impressment. Transpierce. Dilatoriness. Conglobate. Murrain.

Anfractuous/anfractuosity. Conversible. Tunny. Weevil. Posset. Sponging-house. Salmagundi. Hugger-mugger. Euphroe. Jobbery. Dun. Privity. Intension. Shaddock. Catharpin. Peccary. Tarpaulin. Frap. Bombinate. Spirketing.

Glacis. Gymnosophist. Fibula. Dreary. Barouche. Syce. Carmine. Lustration. Rood. Timoneer. Crosstrees. Luff. Mangosteen. Methitic. Superfetation. Pledget. Innominate. Jibboom. Pilau. Ataraxy.


February 27, 2018 Posted by | Books, Language

The Ice Age (I)

I’m currently reading this book. Some observations and links related to the first half of the book below:

“It is important to appreciate from the outset that the Quaternary ice age was not one long episode of unremitting cold climate. […] By exploring the landforms, sediments, and fossils of the Quaternary Period we can identify glacials: periods of severe cold climate when great ice sheets formed in the high middle latitudes of the northern hemisphere and glaciers and ice caps advanced in mountain regions around the world. We can also recognize periods of warm climate known as interglacials when mean air temperatures in the middle latitudes were comparable to, and sometimes higher than, those of the present. As the climate shifted from glacial to interglacial mode, the large ice sheets of Eurasia and North America retreated allowing forest biomes to re-colonize the ice free landscapes. It is also important to recognize that the ice age isn’t just about advancing and retreating ice sheets. Major environmental changes also took place in the Mediterranean region and in the tropics. The Sahara, for example, became drier, cooler, and dustier during glacial periods yet early in the present interglacial it was a mosaic of lakes and oases with tracts of lush vegetation. A defining feature of the Quaternary Period is the repeated fluctuation in climate as conditions shifted from glacial to interglacial, and back again, during the course of the last 2.5 million years or so. A key question in ice age research is why does the Earth’s climate system shift so dramatically and so frequently?”

“Today we have large ice masses in the Polar Regions, but a defining feature of the Quaternary is the build-up and decay of continental-scale ice sheets in the high middle latitudes of the northern hemisphere. […] the Laurentide and Cordilleran ice sheets […] covered most of Canada and large tracts of the northern USA during glacial stages. Around 22,000 years ago, when the Laurentide ice sheet reached its maximum extent during the most recent glacial stage, it was considerably larger in both surface area and volume (34.8 million km3) than the present-day East and West Antarctic ice sheets combined (27 million km3). With a major ice dome centred on Hudson Bay greater than 4 km thick, it formed the largest body of ice on Earth. This great mass of ice depressed the crust beneath its bed by many hundreds of metres. Now shed of this burden, the crust is still slowly recovering today at rates of up to 1 cm per year. Glacial ice extended out beyond the 38th parallel across the lowland regions of North America. Chicago, Boston, and New York all lie on thick glacial deposits left by the Laurentide ice sheet. […] With huge volumes of water locked up in the ice sheets, global sea level was about 120 m lower than present at the Last Glacial Maximum (LGM), exposing large expanses of continental shelf and creating land bridges that allowed humans, animals, and plants to move between continents. Migration from eastern Russia to Alaska, for example, was possible via the Bering land bridge.”

“Large ice sheets also developed in Europe. […] The British Isles lie in an especially sensitive location on the Atlantic fringe of Europe between latitudes 50 and 60° north. Because of this geography, the Quaternary deposits of Britain record especially dramatic shifts in environmental conditions. The most extensive glaciation saw ice sheets extend as far south as the Thames Valley with wide braided rivers charged with meltwater and sediment from the ice margin. Beyond the glacial ice much of southern Britain would have been a treeless, tundra steppe environment with tracts of permanently frozen ground […]. At the LGM […] [t]he Baltic and North Seas were dry land and Britain was connected to mainland Europe. Beyond the British and Scandinavian ice sheets, much of central and northern Europe was a treeless tundra steppe habitat. […] During warm interglacial stages […] [b]road-leaved deciduous woodland with grassland was the dominant vegetation […]. In the warmest parts of interglacials thermophilous […] insects from the Mediterranean were common in Britain whilst the large mammal fauna of the Last Interglacial (c.130,000 to 115,000 years ago) included even more exotic species such as the short tusked elephant, rhinoceros, and hippopotamus. In some interglacials, the rivers of southern Britain contained molluscs that now live in the Nile Valley. For much of the Quaternary, however, climate would have been in an intermediate state (either warming or cooling) between these glacial and interglacial extremes.”

“Glaciologists make a distinction between three main types of glacier (valley glaciers, ice caps, and ice sheets) on the basis of scale and topographic setting. A glacier is normally constrained by the surrounding topography such as a valley and has a clearly defined source area. An ice cap builds up as a dome-like form on a high plateau or mountain peak and may feed several outlet glaciers to valleys below. Ice sheets notionally exceed 50,000 km² and are not constrained by topography.”

“We live in unusual times. For more than 90 per cent of its 4.6-billion-year history, Earth has been too warm — even at the poles — for ice sheets to form. Ice ages are not the norm for our planet. Periods of sustained (over several million years) large-scale glaciation can be called glacial epochs. Tillites in the geological record tell us that the Quaternary ice age is just one of at least six great glacial epochs that have taken place over the last three billion years or so […]. The Quaternary itself is the culmination of a much longer glacial epoch that began around 35 million years ago (Ma) when glaciers and ice sheets first formed in Antarctica. This is known as the Cenozoic glacial epoch. There is still much to learn about these ancient glacial epochs, especially the so-called Snowball Earth states of the Precambrian (before 542 Ma) when the boundary conditions for the global climate system were so different to those of today. […] This book is concerned with the Quaternary ice age – it has the richest and most varied records of environmental change. Because its sediments are so recent they have not been subjected to millions of years of erosion or deep burial and metamorphism. […] in aquatic settings, such as lakes and peat bogs, organic materials such as insects, leaves, and seeds, as well as microfossils such as pollen and fungal spores can be exceptionally well preserved in the fossil record. This allows us to create very detailed pictures of past ecosystems under glacial and interglacial conditions. This field of research is known as Quaternary palaeoecology.”

“An erratic […] is a piece of rock that has been transported from its place of origin. […] Many erratics stand out because they lie on bedrock that is very different to their source. […] Erratics are normally associated with transport by glaciers or ice sheets, but in the early 19th century mechanisms such as the great deluge or rafting on icebergs were commonly invoked. […] Enormous erratic boulders […] were well known to 18th- and 19th-century geologists. […] Their origin was a source of lively and protracted debate […] Early observers of Alpine glaciers had noted the presence of large boulders on the surface of active glaciers or forming part of the debris pile at the glacier snout. These were readily explainable, but erratic boulders had long been noted in locations that defied rational explanations. The erratics found at elevations far above their known sources, and in places such as Britain where glaciers were absent, were especially problematic for early students of landscape history. […] A huge deluge […] was commonly invoked to explain the disposition of such boulders and many saw them as more hard evidence in support of the Biblical flood. […] At this time, the Church of England held a strong influence over much of higher education and especially so in Cambridge and Oxford.”

“Venetz [in the early 19th century] produced remarkably detailed topographic maps of lateral and terminal moraines that lay far down valley of the modern glaciers. He was able to show that many glaciers had advanced and retreated in the historical period. His was the first systematic analysis of climate-glacier-landscape interactions. […] In 1821, Venetz presented his findings to the Société Helvétique des Sciences Naturelles, setting out Perraudin’s ideas alongside his own. The paper had little impact, however, and would not see publication until 1833. […] Jean de Charpentier [in his work] paid particular attention to the disposition of large erratic blocks and the occurrence of polished and striated bedrock surfaces in the deep valleys of western Switzerland. A major step forward was Charpentier’s recognition of a clear relationship between the elevation of the erratic blocks in the Rhône Valley and the vertical extent of glacially smoothed rock walls. He noted that the bedrock valley sides above the erratic blocks were not worn smooth because they must have been above the level of the ancient glacier surface. The rock walls below the erratics always bore the hallmarks of contact with glacial ice. We call this boundary the trimline. It is often clearly marked in hard bedrock because the texture of the valley sides above the glacier surface is fractured due to attack by frost weathering. The detachment of rock particles above the trimline adds debris to lateral moraines and the glacier surface. These insights allowed Charpentier to reconstruct the vertical extent of former glaciers for the first time. Venetz and Perraudin had already shown how to demarcate the length and width of glaciers using the terminal and lateral moraines in these valleys. Charpentier described some of the most striking erratic boulders in the Alps […]. As Charpentier mapped the giant erratics, polished bedrock surfaces, and moraines in the Rhône Valley, it became clear to him that the valley must once have been occupied by a truly enormous glacier or ‘glacier-monstre’ as he called it. […] In 1836, Charpentier published a key paper setting out the main findings of their [his and Venetz’] glacial work”.

“Even before Charpentier was thinking about large ice masses in Switzerland, Jens Esmark (1763-1839) […] had suggested that northern European glaciers had been much more extensive in the past and were responsible for the transport of large erratic boulders and the formation of moraines. Esmark also recognized the key role of deep bedrock erosion by glacial ice in the formation of the spectacular Norwegian fjords. He worked out that glaciers in Norway had once extended down to sea level. Esmark’s ideas were […] translated into English and published […] in 1826, a decade in advance of Charpentier’s paper. Esmark discussed a large body of evidence pointing to an extensive glaciation of northern Europe. […] his thinking was far in advance of his contemporaries […] Unfortunately, even Esmark’s carefully argued paper held little sway in Britain and elsewhere […] it would be many decades before there was general acceptance within the geological community that glaciers could spread out across low gradient landscapes. […] in the lecture theatres and academic societies of Paris, Berlin, and London, the geological establishment was slow to take up these ideas, even though they were published in both English and French and were widely available. Much of the debate in the 1820s and early 1830s centred on the controversy over the evolution of valleys between the fluvialists (Hutton, Playfair, and others), who advocated slow river erosion, and the diluvialists (Buckland, De la Beche, and others) who argued that big valleys and large boulders needed huge deluges. The role of glaciers in valley and fjord formation was not considered. […] The key elements of a glacial theory were in place but nobody was listening. […] It would be decades before a majority accepted that vast tracts of Eurasia and North America had once been covered by mighty ice sheets.”

“Most geologists in 1840 saw Agassiz’s great ice sheet as a retrograde step. It was just too catastrophist — a blatant violation of hard-won uniformitarian principles. It was the antithesis of the new rational geology and was not underpinned by carefully assembled field data. So, for many, as an explanation for the superficial deposits of the Quaternary, it was no more convincing than the deluge. […] Ancient climates were [also] supposed to be warmer not colder. The suggestion of a freezing glacial epoch in the recent geological past, followed by the temperate climate of the present, still jarred with the conventional wisdom that Earth history, from its juvenile molten state to the present, was an uninterrupted record of long-term cooling without abrupt change. Lyell’s drift ice theory [according to which erratics (and till) had been transported by icebergs drifting in water, instead of glaciers transporting the material over land – US] also provided an attractive alternative to Agassiz’s ice age because it did not demand a period of cold glacial climate in areas that now enjoy temperate conditions. […] If anything, the 1840 sessions at the Geological Society had galvanized support for floating ice as a mechanism for drift deposition in the lowlands. Lyell’s model proved to be remarkably resilient—its popularity proved to be the major obstacle to the wider adoption of the land ice theory. […] many refused to believe that glacier ice could advance across gently sloping lowland terrain. This was a reasonable objection at this time since the ice sheets of Greenland and Antarctica had not yet been investigated from a glaciological point of view. It is not difficult to understand why many British geologists rejected the glacial theory when the proximity and potency of the sea was so obvious and nobody knew how large ice sheets behaved.”

Hitchcock […] was one of the first Americans to publicly embrace Agassiz’s ideas […] but he later stepped back from a full endorsement, leaving a role for floating ice. This hesitant beginning set the tone for the next few decades in North America as its geologists began to debate whether they could see the work of ice sheets or icebergs. There was a particularly strong tradition of scriptural geology in 19th-century North America. Its practitioners attempted to reconcile their field observations with the Bible and there were often close links with like-minded souls in Britain. […] If the standing of Lyell extended the useful lifespan of the iceberg theory, it was gradually worn down by a growing body of field evidence from Europe and North America that pointed to the action of glacier ice. […] The continental glacial theory prevailed in North America because it provided a much better explanation for the vast majority of the features recorded in the landscape. The striking regularity and fixed alignment of many features could not be the work of icebergs whose wanderings were governed by winds and ocean currents. The southern limit of the glacial deposits is often marked by pronounced ridges in an otherwise low-relief landscape. These end moraines mark the edge of the former ice sheet and they cannot be formed by floating ice. It took a long time to put all the pieces of evidence together in North America because of the vast scale of the territory to be mapped. Once the patterns of erratic dispersal, large-scale scratching of bedrock, terminal moraines, drumlin fields, and other features were mapped, their systematic arrangement argued strongly against the agency of drifting ice. Unlike their counterparts in Britain, who were never very far from the sea, geologists working deep in the continental interior of North America found it much easier to dismiss the idea of a great marine submergence. Furthermore, icebergs just did not transport enough sediment to account for the enormous extent and great thickness of the Quaternary deposits. It was also realized that icebergs were just not capable of planing off hard bedrock to create plateau surfaces. Neither were they able to polish, scratch, or cut deep grooves into ancient bedrock. All these features pointed to the action of land-based glacial ice. Slowly, but surely, the reality of vast expanses of glacier ice covering much of Canada and the northern states of the USA became apparent.”


The Parallel Roads of Glen Roy.
William Boyd Dawkins.
Adams mammoth.
Georges Cuvier.
Cirque (geology). Arête. Tarn. Moraine. Drumlin. Till/Tillite. Glacier morphology.
James Hutton.
William Buckland.
Charles Lyell.
Giétro Glacier.
Cwm Idwal.
Timothy Abbott Conrad. Charles Whittlesey. James Dwight Dana.


February 23, 2018 Posted by | Books, Ecology, Geography, Geology, History, Paleontology | Leave a comment

Endocrinology (part 5 – calcium and bone metabolism)

Some observations from chapter 6:

“*Osteoclasts – derived from the monocytic cells; resorb bone. *Osteoblasts – derived from the fibroblast-like cells; make bone. *Osteocytes – buried osteoblasts; sense mechanical strain in bone. […] In order to ensure that bone can undertake its mechanical and metabolic functions, it is in a constant state of turnover […] Bone is laid down rapidly during skeletal growth at puberty. Following this, there is a period of stabilization of bone mass in early adult life. After the age of ~40, there is a gradual loss of bone in both sexes. This occurs at the rate of approximately 0.5% annually. However, in ♀ after the menopause, there is a period of rapid bone loss. The accelerated loss is maximal in the first 2-5 years after the cessation of ovarian function and then gradually declines until the previous gradual rate of loss is once again established. The excess bone loss associated with the menopause is of the order of 10% of skeletal mass. This menopause-associated loss, coupled with higher peak bone mass acquisition in ♂, largely explains why osteoporosis and its associated fractures are more common in ♀.”
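The figures quoted above lend themselves to a quick back-of-the-envelope calculation. The toy sketch below uses the book's rough numbers (about 0.5% loss per year after age 40, plus a menopause-associated loss of roughly 10% of skeletal mass, here crudely applied as a one-off) to compare the remaining fraction of peak bone mass with and without the menopausal component. The model and parameter names are my own illustrative assumptions, not the book's.

```python
# Toy illustration of the bone-loss figures quoted above.
# ~0.5% of bone mass lost per year after age 40, plus ~10% of skeletal
# mass lost around the menopause (modelled here as a one-off reduction,
# a deliberate simplification of the "rapid loss over 2-5 years").

def bone_mass_fraction(age, annual_loss=0.005, menopause_loss=0.10,
                       onset=40, menopause=50, include_menopause=True):
    """Fraction of peak bone mass remaining at a given age (toy model)."""
    years_of_loss = max(0, age - onset)
    fraction = (1 - annual_loss) ** years_of_loss
    if include_menopause and age >= menopause:
        fraction *= (1 - menopause_loss)
    return fraction

for age in (50, 60, 70, 80):
    with_m = bone_mass_fraction(age, include_menopause=True)
    without_m = bone_mass_fraction(age, include_menopause=False)
    print(f"age {age}: {with_m:.0%} of peak bone mass "
          f"(vs {without_m:.0%} without the menopausal loss)")
```

Even this crude version makes the point of the quote: by old age the menopausal component accounts for a substantial share of the difference in bone mass between the sexes, on top of the difference in peak bone mass.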

“The clinical utility of routine measurements of bone turnover markers is not yet established. […] Skeletal radiology[:] *Useful for: *Diagnosis of fracture. *Diagnosis of specific diseases (e.g. Paget’s disease and osteomalacia). *Identification of bone dysplasia. *Not useful for assessing bone density. […] Isotope bone scans are useful for identifying localized areas of bone disease, such as fracture, metastases, or Paget’s disease. […] Isotope bone scans are particularly useful in Paget’s disease to establish the extent and sites of skeletal involvement and the underlying disease activity. […] Bone biopsy is occasionally necessary for the diagnosis of patients with complex metabolic bone diseases. […] Bone biopsy is not indicated for the routine diagnosis of osteoporosis. It should only be undertaken in highly specialist centres with appropriate expertise. […] Measurement of 24h urinary excretion of calcium provides a measure of risk of renal stone formation or nephrocalcinosis in states of chronic hypercalcaemia. […] 25OH vitamin D […] is the main storage form of vitamin D, and the measurement of ‘total vitamin D’ is the most clinically useful measure of vitamin D status. Internationally, there remains controversy around a ‘normal’ or ‘optimal’ concentration of vitamin D. Levels over 50nmol/L are generally accepted as satisfactory, with values <25nmol/L representing deficiency. True osteomalacia occurs with vitamin D values <15 nmol/L. Low levels of 25OHD can result from a variety of causes […] Bone mass is quoted in terms of the number of standard deviations from an expected mean. […] A reduction of one SD in bone density will approximately double the risk of fracture.”

[I should perhaps add a cautionary note here that while this variable is very useful in general, it is more useful in some contexts than in others; and in some specific disease process contexts it is quite clear that it will tend to underestimate the fracture risk. Type 1 diabetes is a clear example. For more details, see this post.]
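Since the gradient mentioned above (roughly a doubling of fracture risk per SD reduction in bone density) is essentially an exponential rule of thumb, a small sketch makes it concrete. The factor of 2 per SD is the book's approximation, and, as per my note above, it will understate risk in some disease contexts; the T-score values below are just examples.

```python
# Illustration of the quoted rule of thumb: each 1 SD reduction in bone
# mineral density (the T-score) roughly doubles fracture risk. The
# risk-ratio-per-SD of 2.0 is the book's approximation, not a constant.

def relative_fracture_risk(t_score, risk_ratio_per_sd=2.0):
    """Relative risk compared with someone at the reference mean (T-score = 0)."""
    return risk_ratio_per_sd ** (-t_score)

# -2.5 is the usual densitometric threshold for osteoporosis
for t in (0.0, -1.0, -2.5, -3.5):
    print(f"T-score {t:+.1f}: ~{relative_fracture_risk(t):.1f}x the reference fracture risk")
```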

“Hypercalcaemia is found in 5% of hospital patients and in 0.5% of the general population. […] Many different disease states can lead to hypercalcaemia. […] In asymptomatic community-dwelling subjects, the vast majority of hypercalcaemia is the result of hyperparathyroidism. […] The clinical features of hypercalcaemia are well recognized […]; unfortunately, they are non-specific […] [They include:] *Polyuria. *Polydipsia. […] *Anorexia. *Vomiting. *Constipation. *Abdominal pain. […] *Confusion. *Lethargy. *Depression. […] Clinical signs of hypercalcaemia are rare. […] the presence of bone pain or fracture and renal stones […] indicate the presence of chronic hypercalcaemia. […] Hypercalcaemia is usually a late manifestation of malignant disease, and the primary lesion is usually evident by the time hypercalcaemia is expressed (50% of patients die within 30 days).”

“Primary hyperparathyroidism [is] [p]resent in up to 1 in 500 of the general population where it is predominantly a disease of post-menopausal ♀ […] The normal physiological response to hypocalcaemia is an increase in PTH secretion. This is termed 2° hyperparathyroidism and is not pathological in as much as the PTH secretion remains under feedback control. Continued stimulation of the parathyroid glands can lead to autonomous production of PTH. This, in turn, causes hypercalcaemia which is termed tertiary hyperparathyroidism. This is usually seen in the context of renal disease […] In the majority of patients [with hyperparathyroidism] without end-organ damage, disease is benign and stable. […] Investigation is, therefore, primarily aimed at determining the presence of end-organ damage from hypercalcaemia in order to determine whether operative intervention is indicated. […] It is generally accepted that all patients with symptomatic hyperparathyroidism or evidence of end-organ damage should be considered for parathyroidectomy. This would include: *Definite symptoms of hypercalcaemia. […] *Impaired renal function. *Renal stones […] *Parathyroid bone disease, especially osteitis fibrosa cystica. *Pancreatitis. […] Patients not managed with surgery require regular follow-up. […] <5% fail to become normocalcaemic [after surgery], and these should be considered for a second operation. […] Patients rendered permanently hypoparathyroid by surgery require lifelong supplements of active metabolites of vitamin D with calcium. This can lead to hypercalciuria, and the risk of stone formation may still be present in these patients. […] In hypoparathyroidism, the target serum calcium should be at the low end of the reference range. […] any attempt to raise the plasma calcium well into the normal range is likely to result in unacceptable hypercalciuria”.

“Although hypocalcaemia can result from failure of any of the mechanisms by which serum calcium concentration is maintained, it is usually the result of either failure of PTH secretion or because of the inability to release calcium from bone. […] The clinical features of hypocalcaemia are largely as a result of neuromuscular excitability. In order of  severity, these include: *Tingling – especially of fingers, toes, or lips. *Numbness – especially of fingers, toes, or lips. *Cramps. *Carpopedal spasm. *Stridor due to laryngospasm. *Seizures. […] symptoms of hypocalcaemia tend to reflect the severity and rapidity of onset of the metabolic abnormality. […] there may be clinical signs and symptoms associated with the underlying condition: *Vitamin D deficiency may be associated with generalized bone pain, fractures, or proximal myopathy […] *Hypoparathyroidism can be accompanied by mental slowing and personality disturbances […] *If hypocalcaemia is present during the development of permanent teeth, these may show areas of enamel hypoplasia. This can be a useful physical sign, indicating that the hypocalcaemia is long-standing. […] Acute symptomatic hypocalcaemia is a medical emergency and demands urgent treatment whatever the cause […] *Patients with tetany or seizures require urgent IV treatment with calcium gluconate […] Care must be taken […] as too rapid elevation of the plasma calcium can cause arrhythmias. […] *Treatment of chronic hypocalcaemia is more dependent on the cause. […] In patients with mild parathyroid dysfunction, it may be possible to achieve acceptable calcium concentrations by using calcium supplements alone. […] The majority of patients will not achieve adequate control with such treatment. In those cases, it is necessary to use vitamin D or its metabolites in pharmacological doses to maintain plasma calcium.”

“Pseudohypoparathyroidism[:] *Resistance to parathyroid hormone action. *Due to defective signalling of PTH action via cell membrane receptor. *Also affects TSH, LH, FSH, and GH signalling. […] Patients with the most common type of pseudohypoparathyroidism (type 1a) have a characteristic set of skeletal abnormalities, known as Albright’s hereditary osteodystrophy. This comprises: *Short stature. *Obesity. *Round face. *Short metacarpals. […] The principles underlying the treatment of pseudohypoparathyroidism are the same as those underlying hypoparathyroidism. *Patients with the most common form of pseudohypoparathyroidism may have resistance to the action of other hormones which rely on G protein signalling. They, therefore, need to be assessed for thyroid and gonadal dysfunction (because of defective TSH and gonadotrophin action). If these deficiencies are present, they need to be treated in the conventional manner.”

“Osteomalacia occurs when there is inadequate mineralization of mature bone. Rickets is a disorder of the growing skeleton where there is inadequate mineralization of bone as it is laid down at the epiphysis. In most instances, osteomalacia leads to build-up of excessive unmineralized osteoid within the skeleton. In rickets, there is build-up of unmineralized osteoid in the growth plate. […] These two related conditions may coexist. […] Clinical features [of osteomalacia:] *Bone pain. *Deformity. *Fracture. *Proximal myopathy. *Hypocalcaemia (in vitamin D deficiency). […] The majority of patients with osteomalacia will show no specific radiological abnormalities. *The most characteristic abnormality is the Looser’s zone or pseudofracture. If these are present, they are virtually pathognomonic of osteomalacia. […] Oncogenic osteomalacia[:] Certain tumours appear to be able to produce FGF23 which is phosphaturic. This is rare […] Clinically, such patients usually present with profound myopathy as well as bone pain and fracture. […] Complete removal of the tumour results in resolution of the biochemical and skeletal abnormalities. If this is not possible […], treatment with vitamin D metabolites and phosphate supplements […] may help the skeletal symptoms.”

“Hypophosphataemia[:] Phosphate is important for normal mineralization of bone. In the absence of sufficient phosphate, osteomalacia results. […] In addition, phosphate is important in its own right for neuromuscular function, and profound hypophosphataemia can be accompanied by encephalopathy, muscle weakness, and cardiomyopathy. It must be remembered that, as phosphate is primarily an intracellular anion, a low plasma phosphate does not necessarily represent actual phosphate depletion. […] Mainstay [of treatment] is phosphate replacement […] *Long-term administration of phosphate supplements stimulates parathyroid activity. This can lead to hypercalcaemia, a further fall in phosphate, with worsening of the bone disease […] To minimize parathyroid stimulation, it is usual to give one of the active metabolites of vitamin D in conjunction with phosphate.”

“Although the term osteoporosis refers to the reduction in the amount of bony tissue within the skeleton, this is generally associated with a loss of structural integrity of the internal architecture of the bone. The combination of both these changes means that osteoporotic bone is at high risk of fracture, even after trivial injury. […] Historically, there has been a primary reliance on bone mineral density as a threshold for treatment, whereas currently there is far greater emphasis on assessing individual patients’ risk of fracture that incorporates multiple clinical risk factors as well as bone mineral density. […] Osteoporosis may arise from a failure of the body to lay down sufficient bone during growth and maturation; an earlier than usual onset of bone loss following maturity; or an increased rate of that loss. […] Early menopause or late puberty (in ♂ or ♀) is associated with increased risk of osteoporosis. […] Lifestyle factors affecting bone mass [include:] *weight-bearing exercise [increases bone mass] […] *Smoking. *Excessive alcohol. *Nulliparity. *Poor calcium nutrition. [These all decrease bone mass] […] The risk of osteoporotic fracture increases with age. Fracture rates in ♂ are approximately half of those seen in ♀ of the same age. A ♀ aged 50 has approximately a 1:2 chance [risk, surely… – US] of sustaining an osteoporotic fracture in the rest of her life. The corresponding figure for a ♂ is 1:5. […] One-fifth of hip fracture victims will die within 6 months of the injury, and only 50% will return to their previous level of independence.”

“Any fracture, other than those affecting fingers, toes, or face, which is caused by a fall from standing height or less is called a fragility (low-trauma) fracture, and underlying osteoporosis should be considered. Patients suffering such a fracture should be considered for investigation and/or treatment for osteoporosis. […] [Osteoporosis is] [u]sually clinically silent until an acute fracture. *Two-thirds of vertebral fractures do not come to clinical attention. […] Osteoporotic vertebral fractures only rarely lead to neurological impairment. Any evidence of spinal cord compression should prompt a search for malignancy or other underlying cause. […] Osteoporosis does not cause generalized skeletal pain. […] Biochemical markers of bone turnover may be helpful in the calculation of fracture risk and in judging the response to drug therapies, but they have no role in the diagnosis of osteoporosis. […] An underlying cause for osteoporosis is present in approximately 10-30% of women and up to 50% of men with osteoporosis. […] 2° causes of osteoporosis are more common in ♂ and need to be excluded in all ♂ with osteoporotic fracture. […] Glucocorticoid treatment is one of the major 2° causes of osteoporosis.”


February 22, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology | Leave a comment


The words below are mostly words I encountered while reading Wolfe’s The Claw of the Conciliator and O’Brian’s Master and Commander. I wanted to finish off my ‘coverage’ of those books here, so I decided to include a few more words than usual (the post includes ~100 words, instead of the usual ~80).

Threnody. Noctilucent. Dell. Cariole. Rick. Campanile. Obeisance. Cerbotana. Caloyer. Mitre. Orpiment. Tribade/tribadism (NSFW words?). Thiasus. Argosy. Partridge. Cenotaph. Seneschal. Ossifrage. Faille. Calotte.

Meretrice. Bijou. Espalier. Gramary. Jennet. Algophilia/algophilist. Clerestory. Liquescent. Pawl. Lenitive. Bream. Bannister. Jacinth. Inimical. Grizzled. Trabacalo. Xebec. Suet. Stanchion. Beadle.

Philomath. Gaby. Purser. Tartan. Eparterial. Otiose. Cryptogam. Puncheon. Neume. Cully. Carronade. Becket. Belay. Capstan. Nacreous. Fug. Cosset. Roborative. Comminatory. Strake.

Douceur. Bowsprit. Orlop. Turbot. Luffing. Sempiternal. Tompion. Loblolly (boy). Felucca. Genet. Steeve. Gremial. Epicene. Quaere. Mumchance. Hance. Divertimento. Halliard. Gleet. Rapparee.

Prepotent. Tramontana. Hecatomb. Inveteracy. Davit. Vaticination/vaticinatory. Trundle. Antinomian. Scunner. Shay. Demulcent. Wherry. Cullion. Hemidemisemiquaver. Cathead. Cordage. Kedge. Clew. Semaphore. Tumblehome.


February 21, 2018 Posted by | Books, Language | Leave a comment

Prevention of Late-Life Depression (II)

Some more observations from the book:

In contrast to depression in childhood and youth when genetic and developmental vulnerabilities play a significant role in the development of depression, the development of late-life depression is largely attributed to its interactions with acquired factors, especially medical illness [17, 18]. An analysis of the WHO World Health Survey indicated that the prevalence of depression among medical patients ranged from 9.3 to 23.0 %, significantly higher than that in individuals without medical conditions [19]. Wells et al. [20] found in the Epidemiologic Catchment Area Study that the risk of developing lifetime psychiatric disorders among individuals with at least one medical condition was 27.9 % higher than among those without medical conditions. […] Depression and disability mutually reinforce the risk of each other, and adversely affect disease progression and prognosis [21, 25]. […] disability caused by medical conditions serves as a risk factor for depression [26]. When people lose their normal sensory, motor, cognitive, social, or executive functions, especially in a short period of time, they can become very frustrated or depressed. Inability to perform daily tasks as before decreases self-esteem, reduces independence, increases the level of psychological stress, and creates a sense of hopelessness. On the other hand, depression increases the risk for disability. Negative interpretation, attention bias, and learned hopelessness of depressed persons may increase risky health behaviors that exacerbate physical disorders or disability. Meanwhile, depression-related cognitive impairment also affects role performance and leads to functional disability [25]. For example, Egede [27] found in the 1999 National Health Interview Survey that the risk of having functional disability among patients with the comorbidity of diabetes and depression were approximately 2.5–5 times higher than those with either depression or diabetes alone. […]  A leading cause of disability among medical patients is pain and pain-related fears […] Although a large proportion of pain complaints can be attributed to physiological changes from physical disorders, psychological factors (e.g., attention, interpretation, and coping skills) play an important role in perception of pain […] Bair et al. [31] indicated in a literature review that the prevalence of pain was higher among depressed patients than non-depressed patients, and the prevalence of major depression was also higher among pain patients comparing to those without pain complaints.”

Alcohol use has more serious adverse health effects on older adults than other age groups, since aging-related physiological changes (e.g. reduced liver detoxification and renal clearance) affect alcohol metabolism, increase the blood concentration of alcohol, and magnify negative consequences. More importantly, alcohol interacts with a variety of frequently prescribed medications potentially influencing both treatment and adverse effects. […] Due to age-related changes in pharmacokinetics and pharmacodynamics, older adults are a vulnerable population to […] adverse drug effects. […] Adverse drug events are frequently due to failure to adjust dosage or to account for drug–drug interactions in older adults [64]. […] Loneliness […] is considered as an independent risk factor for depression [46, 47], and has been demonstrated to be associated with low physical activity, increased cardiovascular risks, hyperactivity of the hypothalamic-pituitary-adrenal axis, and activation of immune response [for details, see Cacioppo & Patrick’s book on these topics – US] […] Hopelessness is a key concept of major depression [54], and also an independent risk factor of suicidal ideation […] Hopelessness reduces expectations for the future, and negatively affects judgment for making medical and behavioral decisions, including non-adherence to medical regimens or engaging in unhealthy behaviors.”

“Co-occurring depression and medical conditions are associated with more functional impairment and mortality than expected from the severity of the medical condition alone. For example, depression accompanying diabetes confers increased functional impairment [27], complications of diabetes [65, 66], and mortality [67–71]. Frasure-Smith and colleagues highlighted the prognostic importance of depression among persons who had sustained a myocardial infarction (MI), finding that depression was a significant predictor of mortality at both 6 and 18 months post MI [72, 73]. Subsequent follow-up studies have borne out the increased risk conferred by depression on the mortality of patients with cardiovascular disease [10, 74, 75]. Over the course of a 2-year follow-up interval, depression contributed as much to mortality as did myocardial infarction or diabetes, with the population attributable fraction of mortality due to depression approximately 13 % (similar to the attributable risk associated with heart attack at 11 % and diabetes at 9 %) [76]. […] Although the bidirectional relationship between physical disorders and depression has been well known, there are still relatively few randomized controlled trials on preventing depression among medically ill patients. […] Rates of attrition [in post-stroke depression prevention trials have been observed to be] high […] Stroke, acute coronary syndrome, cancer, and other conditions impose a variety of treatment burdens on patients so that additional interventions without direct or immediate clinical effects may not be acceptable [95]. So even with good participation rates, lack of adherence to the intervention might limit effects.”
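The ~13% population attributable fraction mentioned above is the sort of number produced by the standard (Levin) attributable-fraction formula, PAF = p(RR − 1)/(1 + p(RR − 1)), where p is the prevalence of the exposure (here depression) and RR the associated relative risk of mortality. The sketch below just illustrates the arithmetic; the prevalence and relative risk plugged in are my own illustrative assumptions, not the values used in the study cited.

```python
# Levin's formula for the population attributable fraction (PAF):
#   PAF = p*(RR - 1) / (1 + p*(RR - 1))
# where p is the exposure prevalence and RR the relative risk.
# The inputs below are illustrative assumptions, not the cited study's values.

def population_attributable_fraction(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. ~20% prevalence of depression and a mortality relative risk of ~1.75
print(f"PAF ≈ {population_attributable_fraction(0.20, 1.75):.1%}")
# -> roughly 13%, the same order as the attributable fractions quoted for
#    heart attack (11%) and diabetes (9%)
```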

Late-life depression (LLD) is a heterogeneous disease, with multiple risk factors, etiologies, and clinical features. It has been recognized for many years that there is a significant relationship between the presence of depression and cerebrovascular disease in older adults [1, 2]. This subtype of LLD was eventually termed “vascular depression.” […] There have been a multitude of studies associating white matter abnormalities with depression in older adults using MRI technology to visualize lesions, or what appear as hyperintensities in the white matter on T2-weighted scans. A systematic review concluded that white matter hyperintensities (WMH) are more common and severe among older adults with depression compared to their non-depressed peers [9]. […] WMHs are associated with older age [13] and cerebrovascular risk factors, including diabetes, heart disease, and hypertension [14–17]. White matter severity and extent of WMH volume has been related to the severity of depression in late life [18, 19]. For example, among 639 older, community-dwelling adults, white matter lesion (WML) severity was found to predict depressive episodes and symptoms over a 3-year period [19]. […] Another way of investigating white matter integrity is with diffusion tensor imaging (DTI), which measures the diffusion of water in tissues and allows for indirect evidence of the microstructure of white matter, most commonly represented as fractional anisotropy (FA) and mean diffusivity (MD). DTI may be more sensitive to white matter pathology than is quantification of WMH […] A number of studies have found lower FA in widespread regions among individuals with LLD relative to controls [34, 36, 37]. […] lower FA has been associated with poorer performance on measures of cognitive functioning among patients with LLD [35, 38–40] and with measures of cerebrovascular risk severity. […] It is important to recognize that FA reflects the organization of fiber tracts, including fiber density, axonal diameter, or myelination in white matter. Thus, lower FA can result from multiple pathophysiological sources [42, 43]. […] Together, the aforementioned studies provide support for the vascular depression hypothesis. They demonstrate that white matter integrity is reduced in patients with LLD relative to controls, is somewhat specific to regions important for cognitive and emotional functioning, and is associated with cognitive functioning and depression severity. […] There is now a wealth of evidence to support the association between vascular pathology and depression in older age. While the etiology of depression in older age is multifactorial, from the epidemiological, neuroimaging, behavioral, and genetic evidence available, we can conclude that vascular depression represents one important subtype of LLD. The mechanisms underlying the relationship between vascular pathology and depression are likely multifactorial, and may include disrupted connections between key neural regions, reduced perfusion of blood to key brain regions integral to affective and cognitive processing, and inflammatory processes.”

Cognitive changes associated with depression have been the focus of research for decades. Results have been inconsistent, likely as a result of methodological differences in how depression is diagnosed and cognitive functioning measured, as well as the effects of potential subtypes and the severity of depression […], though deficits in executive functioning, learning and memory, and attention have been associated with depression in most studies [75]. In older adults, additional confounding factors include the potential presence of primary degenerative disorders, such as Alzheimer’s disease, which can pose a challenge to differential diagnosis in its early stages. […] LLD with cognitive dysfunction has been shown to result in greater disability than depressive symptoms alone [6], and MCI [mild cognitive impairment, US] with co-occurring LLD has been shown to double the risk of developing Alzheimer’s disease (AD) compared to MCI alone [86]. The conversion from MCI to AD also appears to occur earlier in patients with cooccurring depressive symptoms, as demonstrated by Modrego & Ferrandez [86] in their prospective cohort study of 114 outpatients diagnosed with amnestic MCI. […] Given accruing evidence for abnormal functioning of a number of cortical and subcortical networks in geriatric depression, of particular interest is whether these abnormalities are a reflection of the actively depressed state, or whether they may persist following successful resolution of symptoms. To date, studies have investigated this question through either longitudinal investigation of adults with geriatric depression, or comparison of depressed elders who are actively depressed versus those who have achieved symptom remission. Of encouragement, successful treatment has been reliably associated with normalization of some aspects of disrupted network functioning. For example, successful antidepressant treatment is associated with reduction of the elevated cerebral glucose metabolism observed during depressed states (e.g., [71–74]), with greater symptom reduction associated with greater metabolic change […] Taken together, these studies suggest that although a subset of the functional abnormalities observed during the LLD state may resolve with successful treatment, other abnormalities persist and may be tied to damage to the structural connectivity in important affective and cognitive networks. […] studies suggest a chronic decrement in cognitive functioning associated with LLD that is not adequately addressed through improvement of depressive symptoms alone.”

A review of the literature on evidence-based treatments for LLD found that about 50 % of patients improved on antidepressants, but that the number needed to treat (NNT) was quite high (NNT = 8, [139]) and placebo effects were significant [140]. Additionally, no difference was demonstrated in the effectiveness of one antidepressant drug class over another […], and in one-third of patients, depression was resistant to monotherapy [140]. The addition of medications or switching within or between drug classes appears to result in improved treatment response for these patients [140, 141]. A meta-analysis of patient-level variables demonstrated that duration of depressive symptoms and baseline depression severity significantly predicts response to antidepressant treatment in LLD, with chronically depressed older patients with moderate-to-severe symptoms at baseline experiencing more improvement in symptoms than mildly and acutely depressed patients [142]. Pharmacological treatment response appears to range from incomplete to poor in LLD with co-occurring cognitive impairment.”
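For readers not used to thinking in terms of numbers needed to treat: the NNT is simply the reciprocal of the absolute difference in response rates, so an NNT of 8 together with a ~50% response rate on antidepressants implies a placebo response in the region of 37-38%, which is one way to see why the review describes the placebo effects as significant. The figures in the snippet below are illustrative arithmetic, not numbers taken from the review itself.

```python
# Number needed to treat (NNT) = 1 / (absolute difference in response rates).
# With ~50% improving on antidepressants and NNT = 8, the implied placebo
# response rate is about 37.5%. Illustrative arithmetic only.

def nnt(rate_treated, rate_control):
    return 1.0 / (rate_treated - rate_control)

rate_treated = 0.50
implied_placebo_rate = rate_treated - 1.0 / 8   # from NNT = 8
print(f"implied placebo response ≈ {implied_placebo_rate:.1%}")
print(f"check: NNT = {nnt(rate_treated, implied_placebo_rate):.0f}")
```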

“[C]ompared to other formulations of prevention, such as primary, secondary, or tertiary — in which interventions are targeted at the level of disease/stage of disease — the IOM conceptual framework involves interventions that are targeted at the level of risk in the population [2]. […] [S]elective prevention studies have an important “numbers” advantage — similar to that of indicated prevention trials: the relatively high incidence of depression among persons with key risk markers enables investigator to test interventions with strong statistical power, even with somewhat modest sample sizes. This fact was illustrated by Schoevers and colleagues [3], in which the authors were able to account for nearly 50 % of total risk of late-life depression with consideration of only a handful of factors. Indeed, research, largely generated by groups in the Netherlands and the USA, has identified that selective prevention may be one of the most efficient approaches to late-life depression prevention, as they have estimated that targeting persons at high risk for depression — based on risk markers such as medical comorbidity, low social support, or physical/functional disability — can yield theoretical numbers needed to treat (NNTs) of approximately 5–7 in primary care settings [4–7]. […] compared to the findings from selective prevention trials targeting older persons with general health/medical problems, […] trials targeting older persons based on sociodemographic risk factors have been more mixed and did not reveal as consistent a pattern of benefits for selective prevention of depression.”

Few of the studies in the existing literature that involve interventions to prevent depression and/or reduce depressive symptoms in older populations have included economic evaluations [13]. The identification of cost-effective interventions to provide to groups at high risk for depression is an important public health goal, as such treatments may avert or reduce a significant amount of the disease burden. […] A study by Katon and colleagues [8] showed that elderly patients with either subsyndromal or major depression had significantly higher medical costs during the previous 6 months than those without depression; total healthcare costs were $1,045 to $1,700 greater, and total outpatient/ambulatory costs ranged from being $763 to $979 more, on average. Depressed patients had greater usage of health resources in every category of care examined, including those that are not mental health-related, such as emergency department visits. No difference in excess costs was found between patients with a DSM-IV depressive disorder and those with depressive symptoms only, however, as mean total costs were 51 % higher in the subthreshold depression group (95 % CI = 1.39–1.66) and 49 % higher in the MDD/dysthymia group (95 % CI = 1.28–1.72) than in the nondepressed group [8]. In a similar study, the usage of various types of health services by primary care patients in the Netherlands was assessed, and average costs were determined to be 1,403 more in depressed individuals versus control patients [21]. Study investigators once again observed that patients with depression had greater utilization of both non-mental and mental healthcare services than controls.”

“In order for routine depression screening in the elderly to be cost-effective […] appropriate follow-up measures must be taken with those who screen positive, including a diagnostic interview and/or referral to a mental health professional [this – the necessity/requirement of proper follow-up following screens in order for screening to be cost-effective – is incidentally a standard result in screening contexts, see also Juth & Munthe’s book – US] [23, 25]. For example, subsequent steps may include initiation of psychotherapy or antidepressant treatment. Thus, one reason that the USPSTF does not recommend screening for depression in settings where proper mental health resources do not exist is that the evidence suggests that outcomes are unlikely to improve without effective follow-up care […]  as per the USPSTF suggestion, Medicare will only cover the screening when the appropriate supports for proper diagnosis and treatment are available […] In order to determine which interventions to prevent and treat depression should be provided to those who screen positive for depressive symptoms and to high-risk populations in general, cost-effectiveness analyses must be completed for a variety of different treatments and preventive measures. […] questions remain regarding whether annual versus other intervals of screening are most cost-effective. With respect to preventive interventions, the evidence to date suggests that these are cost-effective in settings where those at the highest risk are targeted.”


February 19, 2018 Posted by | Books, Cardiology, Diabetes, Health Economics, Neurology, Pharmacology, Psychiatry, Psychology | Leave a comment

Systems Biology (III)

Some observations from chapter 4 below:

“The need to maintain a steady state ensuring homeostasis is an essential concern in nature, while the negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible due to the limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”
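To make the negative feedback idea described above a little more concrete, here is a minimal discrete-time toy model (my own sketch, not the book's) in which the product of a reaction inhibits its own synthesis. Despite constant degradation, the concentration settles at a steady state, which is the essence of the ligand/receptor/effector loop described in the quote.

```python
# Minimal discrete-time sketch (not from the book) of a negative feedback
# loop: the product inhibits its own synthesis, so the concentration
# settles at a steady state despite ongoing degradation.

def simulate(steps=60, v_max=10.0, k_inhibition=2.0, degradation=0.5,
             concentration=0.0):
    history = []
    for _ in range(steps):
        # production falls as the product accumulates (the "receptor" senses it)
        production = v_max * k_inhibition / (k_inhibition + concentration)
        concentration += production - degradation * concentration
        history.append(concentration)
    return history

trace = simulate()
print(f"steady-state concentration ≈ {trace[-1]:.2f}")
```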

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = 4/3 π r³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs. It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
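The volume argument above is easy to check numerically: because V = 4/3 π r³, even modest growth in the radius multiplies the volume, and with it the amount of product that must be synthesized to keep concentrations (and thus concentration-based signalling) unchanged. A quick illustration, with an arbitrary reference radius:

```python
# Scaling check for the argument above: V = (4/3)*pi*r^3, so a modest
# increase in radius multiplies the volume and hence the synthesis needed
# to keep concentrations constant.
from math import pi

def cell_volume(radius_um):
    return (4.0 / 3.0) * pi * radius_um ** 3

r0 = 10.0  # arbitrary reference radius in micrometres
for factor in (1.0, 1.2, 1.5, 2.0):
    ratio = cell_volume(r0 * factor) / cell_volume(r0)
    print(f"radius x{factor:.1f} -> volume (and required synthesis) x{ratio:.2f}")
```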

An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involves numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by organism’s signals and their dilution (compared to intracellular ones) calls for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originated on organism’s level travel with body fluids; however if a signal has to reach its destination very rapidly (for instance in muscle control) it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes, however instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop, however in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
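The cascade figures quoted above (per-stage kinase gains on the order of 10³ combining into an overall amplification on the order of 10¹⁰) amount to multiplying the per-stage gains. A toy illustration, with made-up per-stage values:

```python
# The overall gain of a cascade amplifier is (roughly) the product of the
# per-stage gains, which is how stages of order 10^3 can combine into an
# overall amplification of order 10^10. Stage gains below are made up.
from math import prod

stage_gains = [100, 1_000, 1_000, 100]   # hypothetical per-stage amplification factors
overall = prod(stage_gains)
print(f"overall amplification ≈ {overall:.1e}")   # -> 1.0e+10
```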

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two antagonistic hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry the risk of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the Vibrio cholerae toxin which causes prolonged activation of intestinal epithelial cells by locking the G protein in its active state (resulting in severe diarrhea which can dehydrate the organism).”
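As a toy illustration of the attenuation strategy just described (two loops acting on the same parameter in opposite directions), here is a minimal Python sketch in which a glucose-like variable is nudged back towards a set point by an insulin-like lowering loop and a glucagon-like raising loop; all constants are invented and carry no physiological meaning.

```python
# Toy two-loop attenuation model: all constants are invented for illustration.
SET_POINT = 5.0    # target glucose level (arbitrary units)
GAIN_DOWN = 0.3    # strength of the insulin-like (lowering) loop
GAIN_UP = 0.3      # strength of the glucagon-like (raising) loop

def step(glucose: float) -> float:
    """One time step: each loop acts in only one direction, pulling glucose
    back towards the set point without sustained overshoot."""
    error = glucose - SET_POINT
    lowering = GAIN_DOWN * error if error > 0 else 0.0   # insulin-like response
    raising = -GAIN_UP * error if error < 0 else 0.0     # glucagon-like response
    return glucose - lowering + raising

glucose = 9.0  # start well above the set point (e.g. after a meal)
for t in range(12):
    glucose = step(glucose)
    print(f"t={t:2d}  glucose={glucose:.2f}")
```

Because each loop only ever pulls the variable back towards the set point, the trajectory decays smoothly instead of oscillating, which is the stabilizing effect the quoted passage attributes to paired, opposing regulatory loops.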

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphocytes. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism; however, apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph with edge lengths corresponding to the relative duration of interactions would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), the interactome (complex-forming proteins), the kinome (proteins which belong to the kinase family) etc.”


February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine

Prevention of Late-Life Depression (I)

Late-life depression is a common and highly disabling condition and is also associated with higher health care utilization and overall costs. The presence of depression may complicate the course and treatment of comorbid major medical conditions that are also highly prevalent among older adults — including diabetes, hypertension, and heart disease. Furthermore, a considerable body of evidence has demonstrated that, for older persons, residual symptoms and functional impairment due to depression are common — even when appropriate depression therapies are being used. Finally, the worldwide phenomenon of a rapidly expanding older adult population means that unprecedented numbers of seniors — and the providers who care for them — will be facing the challenge of late-life depression. For these reasons, effective prevention of late-life depression will be a critical strategy to lower overall burden and cost from this disorder. […] This textbook will illustrate the imperative for preventing late-life depression, introduce a broad range of approaches and key elements involved in achieving effective prevention, and provide detailed examples of applications of late-life depression prevention strategies”.

I gave the book two stars on goodreads. There are 11 chapters in the book, written by 22 different contributors/authors, so of course there’s a lot of variation in the quality of the material included; the two star rating was an overall assessment of the quality of the material, and the last two chapters – but in particular chapter 10 – did a really good job convincing me that the book did not deserve a 3rd star (if you decide to read the book, I advise you to skip chapter 10). In general I think many of the authors are way too focused on statistical significance and much too hesitant to report actual effect sizes, which are much more interesting. Gender is mentioned repeatedly throughout the coverage as an important variable, to the extent that people who do not read the book carefully might think this is one of the most important variables at play; but when you look at actual effect sizes, you get reported ORs of ~1.4 for this variable, compared to e.g. ORs in the ~8-9 range for the bereavement variable (see below). You can quibble about population attributable fraction and so on here, but if the effect size is that small it’s unlikely to be all that useful in terms of directing prevention efforts/resource allocation (especially considering that women make up the majority of the total population in these older age groups anyway, as they have higher life expectancy than their male counterparts).

Anyway, below I’ve added some quotes and observations from the first few chapters of the book.

Meta-analyses of more than 30 randomized trials conducted in high-income countries show that the incidence of new depressive and anxiety disorders can be reduced by 25–50 % over 1–2 years, compared to usual care, through the use of learning-based psychotherapies (such as interpersonal psychotherapy, cognitive behavioral therapy, and problem-solving therapy) […] The case for depression prevention is compelling and represents the key rationale for this volume: (1) Major depression is both prevalent and disabling, typically running a relapsing or chronic course. […] (2) Major depression is often comorbid with other chronic conditions like diabetes, amplifying the disability associated with these conditions and worsening family caregiver burden. (3) Depression is associated with worse physical health outcomes, partly mediated through poor treatment adherence, and it is associated with excess mortality after myocardial infarction, stroke, and cancer. It is also the major risk factor for suicide across the life span and particularly in old age. (4) Available treatments are only partially effective in reducing symptom burden, sustaining remission, and averting years lived with disability.”

“[M]any people suffering from depression do not receive any care and approximately a third of those receiving care do not respond to current treatments. The risk of recurrence is high, also in older persons: half of those who have experienced a major depression will experience one or even more recurrences [4]. […] Depression increases the risk of death: among people suffering from depression the risk of dying is 1.65 times higher than among people without depression [7], with a dose-response relation between severity and duration of depression and the resulting excess mortality [8]. In adults, the average length of a depressive episode is 8 months but among 20 % of people the depression lasts longer than 2 years [9]. […] It has been estimated that in Australia […] 60 % of people with an affective disorder receive treatment, and using guidelines and standards only 34 % receive effective treatment [14]. This translates into preventing 15 % of Years Lived with Disability [15], a measure of disease burden [14] and stresses the need for prevention [16]. Primary health care providers frequently do not recognize depression, in particular among the elderly. Older people may present their depressive symptoms differently from younger adults, with more emphasis on physical complaints [17, 18]. Adequate diagnosis of late-life depression can also be hampered by comorbid conditions such as Parkinson’s disease and dementia that may have similar symptoms, or by the fact that elderly people as well as care workers may assume that “feeling down” is part of becoming older [17, 18]. […] Many people suffering from depression do not seek professional help or are not identified as depressed [21]. Almost 14 % of elderly people living in community-type settings suffer from a severe depression requiring clinical attention [22] and more than 50 % of those have a chronic course [4, 23]. Smit et al. reported an incidence of 6.1 % of chronic or recurrent depression among a sample of 2,200 elderly people (ages 55–85) [21].”

“Prevention differs from intervention and treatment as it is aimed at general population groups who vary in risk level for mental health problems such as late-life depression. The Institute of Medicine (IOM) has introduced a prevention framework, which provides a useful model for comprehending the different objectives of the interventions [29]. The overall goal of prevention programs is reducing risk factors and enhancing protective factors.
The IOM framework distinguishes three types of prevention interventions: (1) universal preventive interventions, (2) selective preventive interventions, and (3) indicated preventive interventions. Universal preventive interventions are targeted at the general audience, regardless of their risk status or the presence of symptoms. Selective preventive interventions serve those sub-populations who have a significantly higher than average risk of a disorder, either imminently or over a lifetime. Indicated preventive interventions target identified individuals with minimal but detectable signs or symptoms suggesting a disorder. This type of prevention consists of early recognition of and early intervention in disease in order to prevent deterioration [30]. For each of the three types of interventions, the goal is to reduce the number of new cases. The goal of treatment, on the other hand, is to reduce prevalence or the total number of cases. By reducing incidence you also reduce prevalence [5]. […] prevention research differs from treatment research in various ways. One of the most important differences is the fact that participants in treatment studies already meet the criteria for the illness being studied, such as depression. The intervention is targeted at achieving improvement or remission of the specific condition more quickly than if no intervention had taken place. In prevention research, the participants do not meet the specific criteria for the illness being studied, and the overall goal of the intervention is to ensure that clinical illness develops at a lower rate than in a comparison group [5].”
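The remark that reducing incidence also reduces prevalence can be made concrete with the standard steady-state approximation prevalence ≈ incidence × average duration; the sketch below uses stylized numbers (loosely echoing the ~6 % incidence and ~8-month episode length quoted earlier), not figures from the book.

```python
# Stylized numbers only (loosely echoing the ~6 % incidence and ~8-month episode
# length quoted earlier); not figures taken from the book.
incidence_per_year = 0.061     # fraction of the population newly affected per year
avg_duration_years = 8 / 12    # average episode length of about 8 months

# Steady-state approximation for a fairly rare condition:
# prevalence ≈ incidence rate × average duration.
prevalence = incidence_per_year * avg_duration_years
print(f"approximate point prevalence: {prevalence:.1%}")

# Prevention acts on incidence; halving incidence halves the steady-state prevalence.
print(f"after halving incidence:      {incidence_per_year / 2 * avg_duration_years:.1%}")
```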

A couple of risk factors [for depression] occur more frequently among the elderly than among young adults. The loss of a loved one or the loss of a social role (e.g., employment), decrease of social support and network, and the increasing chance of isolation occur more frequently among the elderly. Many elderly also suffer from physical diseases: 64 % of elderly aged 65–74 have a chronic disease [36] […]. It is important to note that depression often co-occurs with other disorders such as physical illness and other mental health problems (comorbidity). Losing a spouse can have significant mental health effects. Almost half of all widows and widowers during the first year after the loss meet the criteria for depression according to the DSM-IV [37]. Depression after loss of a loved one is normal in times of mourning. However, when depressive symptoms persist during a longer period of time it is possible that a depression is developing. Zisook and Shuchter found that a year after the loss of a spouse 16 % of widows and widowers met the criteria for depression compared to 4 % of those who did not lose their spouse [38]. […] People with a chronic physical disease are also at a higher risk of developing a depression. An estimated 12–36 % of those with a chronic physical illness also suffer from clinical depression [40]. […] around 25 % of cancer patients suffer from depression [40]. […] Depression is relatively common among elderly residing in hospitals and retirement- and nursing homes. An estimated 6–11 % of residents have a depressive illness and around 30 % have depressive symptoms [41]. […] Loneliness is common among the elderly. Among those of 60 years or older, 43 % reported being lonely in a study conducted by Perissinotto et al. […] Loneliness is often associated with physical and mental complaints; apart from depression it also increases the chance of developing dementia and excess mortality [43].”

From the public health perspective it is important to know what the potential health benefits would be if the harmful effect of certain risk factors could be removed. What health benefits would arise from this, and at what effort and cost? To measure this the population attributable fraction (PAF) can be used. The PAF is expressed as a percentage and indicates by how much the incidence (the number of new cases) would decrease if the harmful effects of the targeted risk factors were fully taken away. For public health it would be more effective to design an intervention targeted at a risk factor with a high PAF than a low PAF. […] An intervention needs to be efficacious in order to be implemented; this means that it has to show a statistically significant difference with placebo or another treatment. Secondly, it needs to be effective; it needs to prove its benefits also in real-life (“everyday care”) circumstances. Thirdly, it needs to be efficient. The measure to address this is the Number Needed to Treat (NNT). The NNT expresses how many people need to be treated to prevent the onset of one new case with the disorder; the lower the number, the more efficient the intervention [45]. To summarize, an indicated preventive intervention would ideally be targeted at a relatively small group of people with a high absolute chance of developing the disease, and a risk profile that is responsible for a high PAF. Furthermore, there needs to be an intervention that is both effective and efficient. […] a more detailed and specific description of the target group results in a higher absolute risk, a lower NNT, and also a lower PAF. This is helpful in determining the costs and benefits of interventions aiming at more specific or broader subgroups in the population. […] Unfortunately very large samples are required to demonstrate reductions [in incidence] for universal or selective interventions [46]. […] If the incidence rate is higher in the target population, which is usually the case in selective and even more so in indicated prevention, the number of participants needed to prove an effect is much smaller [5]. This shows that, even though universal interventions may be effective, their effect is harder to prove than that of indicated prevention. […] Indicated and selective prevention appear to be the most successful in preventing depression to date; however, more research needs to be conducted in larger samples to determine which prevention method is really most effective.”
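To make the PAF and NNT definitions concrete, here is a small Python sketch with made-up numbers; the odds ratios mentioned elsewhere in the coverage are treated as rough stand-ins for relative risks (only defensible for fairly rare outcomes), so this is an illustration of the formulas rather than a calculation from the book.

```python
def paf(p_e: float, rr: float) -> float:
    """Population attributable fraction (Levin's formula): p_e is the prevalence
    of exposure to the risk factor in the population, rr its relative risk."""
    return p_e * (rr - 1.0) / (1.0 + p_e * (rr - 1.0))

def nnt(risk_control: float, risk_intervention: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    return 1.0 / (risk_control - risk_intervention)

# A common but weak risk factor vs. a rare but strong one (illustrative values only;
# the quoted odds ratios are treated here as rough stand-ins for relative risks).
print(f"common, weak factor (p_e = 0.55, RR = 1.4): PAF = {paf(0.55, 1.4):.0%}")
print(f"rare, strong factor (p_e = 0.03, RR = 8.8): PAF = {paf(0.03, 8.8):.0%}")

# If an indicated intervention lowered 1-year depression risk from 20 % to 12 %:
print(f"NNT = {nnt(0.20, 0.12):.1f}")   # 12.5 people treated per case prevented
```

Note how, with these invented inputs, a weak but very common risk factor and a strong but rare one end up with comparable PAFs, which is exactly the kind of trade-off between risk-factor strength and prevalence that the chapter is getting at.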

Groffen et al. [6] recently conducted an investigation among a sample of 4,809 participants from the Reykjavik Study (aged 66–93 years). Similar to the findings presented by Vink and colleagues [3], education level was related to depression risk: participants with lower education levels were more likely to report depressed mood in late-life than those with a college education (odds ratio [OR] = 1.87, 95 % confidence interval [CI] = 1.35–2.58). […] Results from a meta-analysis by Lorant and colleagues [8] showed that lower SES individuals had greater odds of developing depression than those in the highest SES group (OR = 1.24, p = 0.004); however, the studies involved in this review did not focus on older populations. […] Cole and Dendukuri [10] performed a meta-analysis of studies involving middle-aged and older adult community residents, and determined that female gender was a risk factor for depression in this population (pooled OR = 1.4, 95 % CI = 1.2–1.8), but not old age. Blazer and colleagues [11] found a significant positive association between older age and depressive symptoms in a sample consisting of community-dwelling older adults; however, when potential confounders such as physical disability, cognitive impairment, and gender were included in the analysis, the relationship between chronological age and depressive symptoms was reversed (p < 0.01). A study by Schoevers and colleagues [14] had similar results […] these findings suggest that the higher incidence of depression observed among the oldest-old may be explained by other relevant factors. By contrast, the association of female gender with increased risk of late-life depression has been observed to be a highly consistent finding.”

In an examination of marital bereavement, Turvey et al. [16] analyzed data among 5,449 participants aged 70 years […] recently bereaved participants had nearly nine times the odds of developing syndromal depression as married participants (OR = 8.8, 95 % CI = 5.1–14.9, p < 0.0001), and they also had significantly higher risk of depressive symptoms 2 years after the spousal loss. […] Caregiving burden is well-recognized as a predisposing factor for depression among older adults [18]. Many older persons are coping with physically and emotionally challenging caregiving roles (e.g., caring for a spouse/partner with a serious illness or with cognitive or physical decline). Additionally, many caregivers experience elements of grief, as they mourn the loss of relationship with or the decline of valued attributes of their care recipients. […] Concepts of social isolation have also been examined with regard to late-life depression risk. For example, among 892 participants aged 65 years […], Gureje et al. [13] found that women with a poor social network and rural residential status were more likely to develop major depressive disorder […] Harlow and colleagues [21] assessed the association between social network and depressive symptoms in a study involving both married and recently widowed women between the ages of 65 and 75 years; they found that number of friends at baseline had an inverse association with CES-D (Centers for Epidemiologic Studies Depression Scale) score after 1 month (p < 0.05) and 12 months (p = 0.06) of follow-up. In a study that explicitly addressed the concept of loneliness, Jaremka et al. [22] conducted a study relating this factor to late-life depression; importantly, loneliness has been validated as a distinct construct, distinguishable among older adults from depression. Among 229 participants (mean age = 70 years) in a cohort of older adults caring for a spouse with dementia, loneliness (as measured by the NYU scale) significantly predicted incident depression (p < 0.001). Finally, social support has been identified as important to late-life depression risk. For example, Cui and colleagues [23] found that low perceived social support significantly predicted worsening depression status over a 2-year period among 392 primary care patients aged 65 years and above.”

“Saunders and colleagues [26] reported […] findings with alcohol drinking behavior as the predictor. Among 701 community-dwelling adults aged 65 years and above, the authors found a significant association between prior heavy alcohol consumption and late-life depression among men: compared to those who were not heavy drinkers, men with a history of heavy drinking had nearly fourfold higher odds of being diagnosed with depression (OR = 3.7, 95 % CI = 1.3–10.4, p < 0.05). […] Almeida et al. found that obese men were more likely than non-obese (body mass index [BMI] < 30) men to develop depression (HR = 1.31, 95 % CI = 1.05–1.64). Consistent with these results, presence of the metabolic syndrome was also found to increase risk of incident depression (HR = 2.37, 95 % CI = 1.60–3.51). Finally, leisure-time activities are also important to study with regard to late-life depression risk, as these too are readily modifiable behaviors. For example, Magnil et al. [30] examined such activities among a sample of 302 primary care patients aged 60 years. The authors observed that those who lacked leisure activities had an increased risk of developing depressive symptoms over the 2-year study period (OR = 12, 95 % CI = 1.1–136, p = 0.041). […] an important future direction in addressing social and behavioral risk factors in late-life depression is to make more progress in trials that aim to alter those risk factors that are actually modifiable.”


February 17, 2018 Posted by | Books, Epidemiology, Health Economics, Medicine, Psychiatry, Psychology, Statistics

Peripheral Neuropathy (II)

Chapter 3 included a great new (…new to me, that is…) chemical formula which I can’t not share here: (R)-(+)-[2,3-dihydro-5-methyl-3-(4-morpholinylmethyl)pyrrolo[1,2,3-de]-1,4-benzoxazin-6-yl]-1-naphthalenylmethanone mesylate. It’s a cannabinoid receptor agonist, the properties of which are briefly discussed in the book’s chapter 3.

Anyway, some more observations from the book below:

Injuries affecting either the peripheral or the central nervous system (PNS, CNS) lead to neuropathic pain characterized by spontaneous pain and distortion or exaggeration of pain sensation. Peripheral nerve pathologies are considered generally easier to treat compared to those affecting the CNS; however, peripheral neuropathies still remain a challenge to therapeutic treatment. […] Although first thought to be a disease of purely neuronal nature, several pre-clinical studies indicate that the mechanisms at the basis of the development and maintenance of neuropathic pain involve substantial contributions from the nonneuronal cells of both the PNS and CNS [22]. After peripheral nerve injury, microglia in the normal condition (usually defined as “resting” microglia) in the spinal dorsal horn proliferate and change their phenotype to an “activated” state through a series of cellular and molecular changes. Microglia shift their phenotype to the hypertrophic “activated” form following altered expression of several molecules including cell surface receptors, intracellular signalling molecules and diffusible factors. The activation process consists of distinct cellular functions aimed at repairing damaged neural cells and eliminating debris from the damaged area [23]. Damaged cells release chemo-attractant molecules that both increase the motility (i.e. chemokinesis) and stimulate the migration (i.e. chemotaxis) of microglia, the combination of which recruits the microglia much closer to the damaged cells […] Once microglia become activated, they can exert either proinflammatory or anti-inflammatory/neuroprotective functions depending on the combination of the stimulation of several receptors and the expression of specific genes [31]. Thus, the activation of microglia following a peripheral injury can be considered an adaptation to tissue stress and malfunction [32] that contributes to the development and subsequent maintenance of chronic pain [33, 34]. […] The signals responsible for neuron-microglia and/or astrocyte communication are being extensively investigated since they may represent new targets for chronic pain management.”

“In the past two decades a notable increase in the incidence of [upper extremity compression neuropathies] has occurred. […] it is mandatory to achieve a prompt diagnosis because they can produce important motor and sensory deficiencies that need to be treated before the development of complications, since, despite the capacity for regeneration bestowed on the peripheral nervous system, functions lost as a result of denervation are never fully restored. […] There are many different situations that may be a direct cause of nerve compression. Anatomically, nerves can be compressed when traversing fibro-osseous tunnels, passing between muscle layers, through traction as they cross joints or buckling during certain movements of the wrist and elbow. Other causes include trauma, direct pressure and space-occupying lesions at any level in the upper extremity. There are other situations that are not a direct cause of nerve compression, but may increase the risk and may predispose the nerve to being compressed, especially when the soft tissues are swollen, as in synovitis, pregnancy, hypothyroidism, diabetes or alcoholism [1]. […] When nerve fibers undergo compression, the response depends on the force applied at the site and the duration. Acute, brief compression results in a focal conduction block as a result of local ischemia, which is reversible if the duration of compression is transient. On the other hand, if the focal compression is prolonged, ischemic changes appear, followed by endoneurial edema and secondary perineurial thickening. These histological alterations will aggravate the changes in the microneural circulation and will increase the sensitivity of the nerve sheath to ischemia. If the compression continues, we will find focal demyelination, which typically results in a greater involvement of motor than sensory nerve fibers. […] As the duration of compression increases beyond several hours, more diffuse demyelination will appear […] This process begins at the distal end of compression or injury, a process termed Wallerian degeneration. These neural changes may not appear in a uniform fashion across the whole nerve sheath depending on the distribution of the compressive forces, causing mixed demyelinating and axonal injury resulting from a combination of mechanical distortion of the nerve, ischemic injury, and impaired axonal flow [2].”

Electrophysiologic testing is part of the evaluation [of compression neuropathies], but it never substitutes for a complete history and a thorough physical examination. These tests can detect physiologic abnormalities in the course of motor and sensory axons. There are two main electrophysiologic tests: needle electromyography and nerve conduction […] Electromyography detects voluntarily or spontaneously generated electrical activity. This activity is recorded through needle insertion, at rest and during muscular activity, to assess duration, amplitude, configuration and recruitment after injury. […] Nerve conduction studies assess both sensory and motor nerves. This study consists of applying a voltage stimulator to the skin over different points of the nerve in order to record the muscular action potential, analyzing the amplitude, duration, area, latency and conduction velocity. The amplitude indicates the number of available nerve fibers.”

There are three well-described entrapment syndromes involving the median nerve or its branches, namely pronator teres syndrome, anterior interosseous syndrome and carpal tunnel syndrome, according to the level of entrapment. Each one of these syndromes presents with different clinical signs and symptoms and electrophysiologic results, and requires different techniques for its release. […] [In pronator teres syndrome] [t]he onset is insidious and is suggested when the early sensory disturbances are greater on the thumb and index finger, mainly tingling, numbness and dysaesthesia in the median nerve distribution. Patients will also complain of increased pain in the proximal forearm and greater hand numbness with sustained power gripping or rotation […] Surgical decompression is the definitive treatment. […] [Anterior interosseous syndrome] presents principally as weakness of the index finger and thumb, and the patient may complain of diffuse pain in the proximal forearm, which may be exacerbated during exercise and diminished with rest. The vast majority of patients begin with pain in the upper arm, elbow and forearm, often preceding the motor symptoms. […] During physical exam, the patient will be unable to bend the tip of the thumb and the tip of the index finger. The typical symptom is the inability to form an “O” with the thumb and index finger. […] If the onset was spontaneous and there is no evident lesion on MRI, supportive care and corticosteroid injections with observation for 4 to 6 weeks is usually the accepted management. The degree of recovery is unpredictable.”

“[Carpal tunnel syndrome] is the most frequently encountered compression neuropathy in the upper limb. It is a mechanical compression of the median nerve within the fixed space of the rigid carpal tunnel. The incidence in the United States has been estimated at 1 to 3 cases per 1,000 subjects per year, with a prevalence of 50 cases per 1,000 subjects per year. [10] It is more common in women than in men (2:1), perhaps because the carpal tunnel itself may be smaller in women than in men. The dominant hand is usually affected first and produces the most severe pain. It usually occurs in adults […] Abnormalities on electrophysiologic testing, in association with specific symptoms and signs, are considered the criterion standard for carpal tunnel syndrome diagnosis. Electrophysiologic testing also can provide an accurate assessment of how severe the damage to the nerve is, thereby directing management and providing objective criteria for the determination of prognosis. Carpal tunnel syndrome is usually divided into mild, moderate and severe. In general, patients with mild carpal tunnel syndrome have sensory abnormalities alone on electrophysiologic testing, and patients with sensory plus motor abnormalities have moderate carpal tunnel syndrome. However, any evidence of axonal loss is classified as severe carpal tunnel syndrome. […] No imaging studies are considered routine in the diagnosis of carpal tunnel syndrome. […] nonoperative treatment is based on splinting of the wrist in a neutral position for three weeks and steroid injections. This therapy has variable results, with a success rate of up to 76% during one year, but with a recurrence rate as high as 94%. Non-operative treatment is indicated in patients with intermittent symptoms, initial stages and during pregnancy [17]. The only definitive treatment for carpal tunnel syndrome is surgical expansion of the carpal tunnel by transection of the transverse carpal ligament.”

Postural control can be defined as the control of the body’s position in space for the purposes of balance and orientation. Balance is the ability to maintain or return the body’s centre of gravity within the limits of stability that are determined by the base of support. Spatial orientation defines our natural ability to maintain our body orientation in relation to the surrounding environment, in static and dynamic conditions. The representation of the body’s static and dynamic geometry may be largely based on muscle proprioceptive inputs that continuously inform the central nervous system about the position of each part of the body in relation to the others. Posture is built up by the sum of several basic mechanisms. […] Postural balance is dependent upon integration of signals from the somatosensory, visual and vestibular systems, to generate motor responses, with cognitive demands that vary according to the task, the age of the individuals and their ability to balance. Descending postural commands are multivariate in nature, and the motion at each joint is affected uniquely by input from multiple sensors.
The proprioceptive system provides information on joint angles, changes in joint angles, joint position and muscle length and tension, while the tactile system is associated mainly with sensations of touch, pressure and vibration. Visual influence on postural control results from a complex synergy that receives multimodal inputs. Vestibular inputs tonically activate the anti-gravity leg muscles and, during dynamic tasks, vestibular information contributes to head stabilization to enable successful gaze control, providing a stable reference frame from which to generate postural responses. In order to assess instability or walking difficulty, it is essential to identify the affected movements and circumstances in which they occur (i.e. uneven surfaces, environmental light, activity) as well as any other associated clinical manifestation that could be related to balance, postural control, motor control, muscular force, movement limitations or sensory deficiency. The clinical evaluation should include neurological examination; special care should be taken to identify visual and vestibular disorders, and to assess static and dynamic postural control and gait.”

Polyneuropathy modifies the amount and the quality of the sensory information that is necessary for motor control, with increased instability during both upright stance and gait. Patients with peripheral neuropathy may have decreased stability while standing and when subjected to dynamic balance conditions. […] Balance and gait difficulties are the most frequently cited cause of falling […] Patients with polyneuropathy who have ankle weakness are more likely to experience multiple and injurious falls than are those without specific muscle weakness. […] During upright stance, compared to healthy subjects, recordings of the centre of pressure in patients with diabetic neuropathy have shown larger sway [95-96, 102], as well as increased oscillation […] Compared to healthy subjects, diabetic patients may have poorer balance during standing in diminished light compared to full light and no light conditions [105] […] compared to patients with diabetes but no peripheral neuropathy, patients with diabetic peripheral neuropathy are more likely to report an injury during walking or standing, which may be more frequent when walking on irregular surfaces [110]. Epidemiological surveys have established that a reduction of leg proprioception is a risk factor for falls in the elderly [111-112]. Symptoms and signs of peripheral neuropathy are frequently found during physical examination of older subjects. These clinical manifestations may be related to diabetes mellitus, alcoholism, nutritional deficiencies, autoimmune diseases, among other causes. In this group of patients, loss of plantar sensation may be an important contributor to the dynamic balance deficits and increased risk of falls [34, 109]. […] Apart from sensorimotor compromise, fear of falling may relate to restriction and avoidance of activities, which results in loss of strength especially in the lower extremities, and may also be predictive of future falls [117-119].”

“In patients with various forms of peripheral neuropathy, the use of a cane, ankle orthoses or touching a wall [has been shown to improve] spatial and temporal measures of gait regularity while walking under challenging conditions. Additional hand contact with external objects may reduce postural instability caused by a deficiency of one or more senses. […] Contact of the index finger with a stationary surface can greatly attenuate postural instability during upright stance, even when the level of force applied is far below that necessary to provide mechanical support [42]. […] haptic information about postural sway derived from contact with other parts of the body can also increase stability […] Studies evaluating preventive and treatment strategies through excercise [sic – US] that could improve balance in patients with polyneuropathy are scarce. However, evidence supports that physical activity interventions that increase activity probably do not increase the risk of falling in patients with diabetic peripheral neuropathy, and in this group of patients, specific training may improve gait speed, balance, muscle strength and joint mobility.”

“Postherpetic neuralgia (PHN) is a form of refractory chronic neuralgia that […] currently lacks any effective prophylaxis. […] PHN has a variety of symptoms and significantly affects patient quality of life [3-12]. Various studies have statistically analyzed predictive factors for PHN [13-23], but neither an obvious pathogenesis nor an effective treatment has been established. We designed and conducted a study on the premise that statistical identification of significant predictors for PHN would contribute to the establishment of an evidence-based medicine approach to the optimal treatment of PHN. […] Previous studies have shown that older age, female sex, presence of a prodrome, greater rash severity, and greater acute pain severity are predictors of increased PHN risk [14-18, 25]. Some other potential predictors (ophthalmic localization, presence of anxiety and depression, presence of allodynia, and serological/virological factors) have also been studied [14, 18]. […] The participants were 73 patients with herpes zoster who had been treated at the pain clinic of our hospital between January 2008 and June 2010. […] Multivariate ordered logistic regression analysis was performed to identify predictive factors for PHN. […] advanced age and deep pain at first visit were identified as predictive factors for PHN. DM [diabetes mellitus – US] and pain reduced by bathing should also be considered as potential predictors of PHN [24].”
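The modelling step mentioned in that quote can be illustrated with a small, purely hypothetical sketch: synthetic data with a few of the predictors named above (age, deep pain at first visit, diabetes), fit with an ordered (proportional-odds) logistic regression via statsmodels’ OrderedModel. None of the numbers or variable codings come from the paper; this only shows the general shape of such an analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200  # synthetic sample; the actual study had 73 patients

# Hypothetical predictors: age (years), deep pain at first visit (0/1), diabetes (0/1).
X = pd.DataFrame({
    "age": rng.normal(70, 8, n),
    "deep_pain": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
})

# Synthetic ordered outcome (0 = no PHN, 1 = mild, 2 = severe), constructed so that
# age and deep pain push towards worse outcomes.
latent = (0.08 * (X["age"] - 70) + 1.2 * X["deep_pain"]
          + 0.5 * X["diabetes"] + rng.logistic(size=n))
outcome = pd.cut(latent, bins=[-np.inf, 0.5, 2.0, np.inf], labels=[0, 1, 2])

model = OrderedModel(outcome, X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())            # log-odds coefficients per predictor
print(np.exp(result.params[:3]))   # corresponding odds ratios
```

Exponentiating the coefficients gives per-predictor odds ratios, which is the form in which results of this kind of analysis are typically reported.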


February 14, 2018 Posted by | Books, Diabetes, Infectious disease, Medicine, Neurology

Peripheral Neuropathy (I)

The objective of this book is to update health care professionals on recent advances in the pathogenesis, diagnosis and treatment of peripheral neuropathy. This work was written by a group of clinicians and scientists with extensive expertise in the field.

The book is not the first book about this topic I’ve read, so a lot of the stuff included was of course review – however it’s a quite decent text, and I decided to blog it in at least some detail anyway. It’s somewhat technical and it’s probably not a very good introduction to this topic if you know next to nothing about neurology – in that case I’m certain Said’s book (see the ‘not’-link above) is a better option.

I have added some observations from the first couple of chapters below. As InTech publications like these explicitly encourage people to share the ideas and observations included in these books, I shall probably cover the book in more detail than I otherwise would have.

“Within the developing world, infectious diseases [2-4] and trauma [5] are the most common sources of neuropathic pain syndromes. The developed world, in contrast, suffers more frequently from diabetic polyneuropathy (DPN) [6, 7], postherpetic neuralgia (PHN) from herpes zoster infections [8], and chemotherapy-induced peripheral neuropathy (CIPN) [9, 10]. There is relatively little epidemiological data regarding the prevalence of neuropathic pain within the general population, but a few estimates suggest it is around 7-8% [11, 12]. Despite the widespread occurrence of neuropathic pain, treatment options are limited and often ineffective […] Neuropathic pain can present as on-going or spontaneous discomfort that occurs in the absence of any observable stimulus or as a painful hypersensitivity to temperature and touch. […] people with chronic pain have increased incidence of anxiety and depression and reduced scores in quantitative measures of health related quality of life [15]. Despite significant progress in chronic and neuropathic pain research, which has led to the discovery of several efficacious treatments in rodent models, pain management in humans remains ineffective and insufficient [16]. The lack of translational efficiency may be due to inadequate animal models that do not faithfully recapitulate human disease or to biological differences between rodents and humans […] In an attempt to increase the efficacy of medical treatment for neuropathic pain, clinicians and researchers have been moving away from an etiology-based classification towards one that is mechanism-based. It is current practice to diagnose a person who presents with neuropathic pain according to the underlying etiology and lesion topography [17]. However, this does not translate to effective patient care as these classification criteria do not suggest efficacious treatment. A more apt diagnosis might include a description of symptoms and the underlying pathophysiology associated with those symptoms.”

Neuropathic pain has been defined […] as “pain arising as the direct consequence of a lesion or disease affecting the somatosensory system” [18]. This is distinct from nociceptive pain – which signals tissue damage through an intact nervous system – in underlying pathophysiology, severity, and associated psychological comorbidities [13]. Individuals who suffer from neuropathic pain syndromes report pain of higher intensity and duration than individuals with non-neuropathic chronic pain and have significantly increased incidence of depression, anxiety, and sleep disorders [13, 19]. […] individuals with seemingly identical diseases who both develop neuropathic pain may experience distinct abnormal sensory phenotypes. This may include a loss of sensory perception in some modalities and increased activity in others. Often a reduction in the perception of vibration and light touch is coupled with positive sensory symptoms such as paresthesia, dysesthesia, and pain [20]. Pain may manifest as either spontaneous, with a burning or shock-like quality, or as a hypersensitivity to mechanical or thermal stimuli [21]. This hypersensitivity takes two forms: allodynia, pain that is evoked from a normally non-painful stimulus, and hyperalgesia, an exaggerated pain response from a moderately painful stimulus. […] Noxious stimuli are perceived by small diameter peripheral neurons whose free nerve endings are distributed throughout the body. These neurons are distinct from, although anatomically proximal to, the low threshold mechanoreceptors responsible for the perception of vibration and light touch.”

In addition to hypersensitivity, individuals with neuropathic pain frequently experience ongoing spontaneous pain as a major source of discomfort and distress. […] In healthy individuals, a quiescent neuron will only generate an action potential when presented with a stimulus of sufficient magnitude to cause membrane depolarization. Following nerve injury, however, significant changes in ion channel expression, distribution, and kinetics lead to disruption of the homeostatic electric potential of the membrane resulting in oscillations and burst firing. This manifests as spontaneous pain that has a shooting or burning quality […] There is reasonable evidence to suggest that individual ion channels contribute to specific neuropathic pain symptoms […] [this observation] provides an intriguing therapeutic possibility: unambiguous pharmacologic ion channel blockers to relieve individual sensory symptoms with minimal unintended effects allowing pain relief without global numbness. […] Central sensitization leads to painful hypersensitivity […] Functional and structural changes of dorsal horn circuitry lead to pain hypersensitivity that is maintained independent of peripheral sensitization [38]. This central sensitization provides a mechanistic explanation for the sensory abnormalities that occur in both acute and chronic pain states, such as the expansion of hypersensitivity beyond the innervation territory of a lesion site, repeated stimulation of a constant magnitude leading to an increasing pain response, and pain outlasting a peripheral stimulus [39-41]. In healthy individuals, acute pain triggers central sensitization, but homeostatic sensitivity returns following clearance of the initial insult. In some individuals who develop neuropathic pain, genotype and environmental factors contribute to maintenance of central sensitization leading to spontaneous pain, hyperalgesia, and allodynia. […] Similarly, facilitation also results in a lowered activation threshold in second order neurons”.

“Chronic pain conditions are associated with vast functional and structural changes of the brain, when compared to healthy controls, but it is currently unclear which comes first: does chronic pain cause distortions of brain circuitry and anatomy or do cerebral abnormalities trigger and/or maintain the perception of chronic pain? […] Brain abnormalities in chronic pain states include modification of brain activity patterns, localized decreases in gray matter volume, and circuitry rerouting [53]. […] Chronic pain conditions are associated with localized reduction in gray matter volume, and the topography of gray matter volume reduction is dictated, at least in part, by the particular pathology. […] These changes appear to represent a form of plasticity as they are reversible when pain is effectively managed [63, 67, 68].”

“By definition, neuropathic pain indicates direct pathology of the nervous system while nociceptive pain is an indication of real or potential tissue damage. Due to the distinction in pathophysiology, conventional treatments prescribed for nociceptive pain are not very effective in treating neuropathic pain and vice versa [78]. Therefore the first step towards meaningful pain relief is an accurate diagnosis. […] Treating neuropathic pain requires a multifaceted approach that aims to eliminate the underlying etiology, when possible, and manage the associated discomforts and emotional distress. Although in some cases it is possible to directly treat the cause of neuropathic pain, for example surgery to alleviate a constricted nerve, it is more likely that the primary cause is untreatable, as is the case with singular traumatic events such as stroke and spinal cord injury and diseases like diabetes. When this is the case, symptom management and pain reduction become the primary focus. Unfortunately, in most cases complete elimination of pain is not a feasible endpoint; a pain reduction of 30% is considered to be efficacious [21]. Additionally, many pharmacological treatments require careful titration and tapering to prevent adverse effects and toxicity. This process may take several weeks to months, and ultimately the drug may be ineffective, necessitating another trial with a different medication. It is therefore necessary that both doctor and patient begin treatment with realistic expectations and goals.”

First-line medications for the treatment of neuropathic pain are those that have proven efficacy in randomized clinical trials (RCTs) and are consistent with pooled clinical observations [81]. These include antidepressants, calcium channel ligands, and topical lidocaine [15]. Tricyclic antidepressants (TCAs) have demonstrated efficacy in treating neuropathic pain with positive results in RCTs for central post-stroke pain, PHN, painful diabetic and non-diabetic polyneuropathy, and post-mastectomy pain syndrome [82]. However they do not seem to be effective in treating painful HIV-neuropathy or CIPN [82]. Duloxetine and venlafaxine, two selective serotonin norepinephrine reuptake inhibitors (SSNRIs), have been found to be effective in DPN (duloxetine) and in both DPN and painful polyneuropathies (venlafaxine) [81]. […] Gabapentin and pregabalin have also demonstrated efficacy in several neuropathic pain conditions including DPN and PHN […] Topical lidocaine (5% patch or gel) has significantly reduced allodynia associated with PHN and other neuropathic pain syndromes in several RCTs [81, 82]. With no reported systemic adverse effects and mild skin irritation as the only concern, lidocaine is an appropriate choice for treating localized peripheral neuropathic pain. In the event that first-line medications, alone or in combination, are not effective at achieving adequate pain relief, second-line medications may be considered. These include opioid analgesics and tramadol, pharmaceuticals which have proven efficacy in RCTs but are associated with significant adverse effects that warrant cautious prescription [15]. Although opioid analgesics are effective pain relievers in several types of neuropathic pain [81, 82, 84], they are associated with misuse or abuse, hypogonadism, constipation, nausea, and immunological changes […] Careful consideration should be given when prescribing opiates to patients who have a personal or family history of drug or alcohol abuse […] Deep brain stimulation, a neurosurgical technique by which an implanted electrode delivers controlled electrical impulses to targeted brain regions, has demonstrated some efficacy in treating chronic pain but is not routinely employed due to a high risk-to-benefit ratio [91]. […] A major challenge in treating neuropathic pain is the heterogeneity of disease pathogenesis within an individual etiological classification. Patients with seemingly identical diseases may experience completely different neuropathic pain phenotypes […] One of the biggest barriers to successful management of neuropathic pain has been the lack of understanding of the underlying pathophysiology that produces a pain phenotype. To that end, significant progress has been made in basic science research.”

In diabetes mellitus, nerves and their supporting cells are subjected to prolonged hyperglycemia and metabolic disturbances, and this culminates in reversible/irreversible nervous system dysfunction and damage, namely diabetic peripheral neuropathy (DPN). Due to the varying compositions and extents of neurological involvement, it is difficult to obtain accurate and thorough prevalence estimates of DPN, rendering this microvascular complication vastly underdiagnosed and undertreated [1-4]. According to the American Diabetes Association, DPN occurs in 60-70% of diabetic individuals [5] and represents the leading cause of peripheral neuropathies among all cases [6, 7].”

A quick remark: This number seems really high to me. I won’t rule out that it’s accurate if you go with highly sensitive measures of neuropathy, but the number of patients who will experience significant clinical sequelae as a result of DPN is in my opinion likely to be significantly lower than that. On a peripherally related note, it should however also be kept in mind that although diabetes-related neurological complications may display some clustering in patient groups – which will necessarily decrease the magnitude of the problem – no single test will ever completely rule out neurological complications in a diabetic; a patient with a negative Semmes-Weinstein monofilament test may still have autonomic neuropathy. So assessing the full disease burden in the context of diabetes-related neurological complications cannot be done using only a single instrument, and the full disease burden is likely to be higher than individual estimates encountered in the literature (unless a full neurological workup was done, which is unlikely to be the case). They do go into more detail about subgroups, clinical significance, etc. below, but I thought this observation was important to add early on in this part of the coverage.

Because diverse anatomic distributions and fiber types may be differentially affected in patients with diabetes, the disease manifestations, courses and pathologies of clinical and subclinical DPN are rather heterogeneous and encompass a broad spectrum […] Current consensus divides diabetes-associated somatic neuropathic syndromes into the focal/multifocal and diffuse/generalized neuropathies [6, 14]. The first category comprises a group of asymmetrical, acute-in-onset and self-limited single lesion(s) of nerve injury or impairment largely resulting from the increased vulnerability of diabetic nerves to mechanical insults (Carpal Tunnel Syndrome) […]. Such mononeuropathies occur idiopathically and only become a clinical problem in association with aging in 5-10% of those affected. Therefore, focal neuropathies are not extensively covered in this chapter [16]. The rest of the patients frequently develop diffuse neuropathies characterized by symmetrical distribution, insidious onset and chronic progression. In particular, a distal symmetrical sensorimotor polyneuropathy accounts for 90% of all DPN diagnoses in type 1 and type 2 diabetics and affects all types of peripheral sensory and motor fibers in a temporally non-uniform manner [6, 17].
Symptoms begin with prickling, tingling, numbness, paresthesia, dysesthesia and various qualities of pain associated with small sensory fibers at the very distal end (toes) of the lower extremities [1, 18]. Presence of the above symptoms together with abnormal nociceptive responses of epidermal C and A-δ fibers to pain/temperature (as revealed by clinical examination) constitutes the diagnosis of small fiber sensory neuropathy, which produces both painful and insensate phenotypes [19]. Painful diabetic neuropathy is a prominent, distressing and chronic experience in at least 10-30% of DPN populations [20, 21]. Its occurrence does not necessarily correlate with impairment in electrophysiological or quantitative sensory testing (QST). […] Large myelinated sensory fibers that innervate the dermis, such as Aβ, also become involved later on, leading to impaired proprioception, vibration and tactile detection, and mechanical hypoalgesia [19]. Following this “stocking-glove”, length-dependent and dying-back pattern of progression, neurodegeneration gradually proceeds to proximal muscle sensory and motor nerves. Its presence manifests in neurological testing as reduced nerve impulse conduction, diminished ankle tendon reflex, unsteadiness and muscle weakness [1, 24].
The absence of both protective sensory responses and motor coordination predisposes the neuropathic foot to impaired wound healing and gangrenous ulceration — often followed by limb amputation in severe and/or advanced cases […]. Although symptomatic motor deficits only appear in later stages of DPN [25], motor denervation and distal atrophy can increase the rate of fractures by causing repetitive minor trauma or falls [24, 28]. Other unusual but highly disabling late sequelae of DPN include limb ischemia and joint deformity [6]; the latter also being termed Charcot’s neuroarthropathy or Charcot’s joints [1]. In addition to significant morbidities, several separate cohort studies have provided evidence that DPN [29], diabetic foot ulcers [30] and increased toe vibration perception threshold (VPT) [31] are all independent risk factors for mortality.”

Unfortunately, current therapy for DPN is far from effective and at best only delays the onset and/or progression of the disease via tight glucose control […] Even with near-normoglycemic control, a substantial proportion of patients still suffer the debilitating neurotoxic consequences of diabetes [34]. On the other hand, some with poor glucose control are spared from clinically evident signs and symptoms of neuropathy for a long time after diagnosis [37-39]. Thus, other etiological factors independent of hyperglycemia are likely to be involved in the development of DPN. Data from a number of prospective, observational studies suggested that older age, longer diabetes duration, genetic polymorphism, presence of cardiovascular disease markers, malnutrition, presence of other microvascular complications, alcohol and tobacco consumption, and higher constitutional indexes (e.g. weight and height) interact with diabetes and make for strong predictors of neurological decline [13, 32, 40-42]. Targeting some of these modifiable risk factors in addition to glycemia may improve the management of DPN. […] enormous efforts have been devoted to understanding and intervening with the molecular and biochemical processes linking the metabolic disturbances to sensorimotor deficits by studying diabetic animal models. In return, nearly 2,200 articles were published in PubMed Central and at least 100 clinical trials were reported evaluating the efficacy of a number of pharmacological agents; the majority of them are designed to inhibit specific pathogenic mechanisms identified by these experimental approaches. Candidate agents have included aldose reductase inhibitors, AGE inhibitors, γ-linolenic acid, α-lipoic acid, vasodilators, nerve growth factor, protein kinase Cβ inhibitors, and vascular endothelial growth factor. Notwithstanding a wealth of knowledge and promising results in animals, none has translated into definitive clinical success […] Based on the records published by the National Institute of Neurological Disorders and Stroke (NINDS), a main source of DPN research funding, about 16,488 projects were funded at a cost of over $8 billion for the fiscal years 2008 through 2012. Of these projects, an estimated 72,200 animals were used annually to understand basic physiology and disease pathology as well as to evaluate potential drugs [255]. As discussed above, however, the usefulness of the pharmaceutical agents developed through such a pipeline in preventing or reducing neuronal damage has been equivocal, and development has usually been halted at human trials due to toxicity, lack of efficacy or both […]. Clearly, the pharmacological translation from our decades of experimental modeling to clinical practice with regard to DPN has thus far not [been] even close to satisfactory.”

“Whereas a majority of the drugs investigated during preclinical testing achieved the experimentally desired endpoints without revealing significant toxicity, more than half of those that entered clinical evaluation for treating DPN were withdrawn as a consequence of moderate to severe adverse events even at a much lower dose. Generally, using other species as surrogates for the human population inherently hampers the accurate prediction of toxic reactions for several reasons […] First of all, it is easy to dismiss drug-induced non-specific effects in animals – especially for laboratory rodents, which do not share the same size, anatomy and physical activity as humans. […] Second, some physiological and behavioral phenotypes observable in humans are impossible for animals to express. In this respect, photosensitive skin rash and pain serve as two good examples of non-translatable side effects. Rodent skin differs from that of humans in that it has a thinner and hairier epidermis and distinct DNA repair abilities [260]. Therefore, most rodent strains used in diabetes modeling provide poor estimates for the probability of cutaneous hypersensitivity reactions to pharmacological treatments […] Another predicament is assessing pain in rodents. The reason for this is simple: these animals cannot tell us when, where or even whether they are experiencing pain […]. Since there is no specific type of behavior with which a painful reaction can be unequivocally associated, this often leads to underestimation of painful side effects during preclinical drug screening […] The third problem is that animals and humans have different pharmacokinetic and toxicological responses.”

“Genetically or chemically induced diabetic rats or mice have been a major tool for preclinical pharmacological evaluation of potential DPN treatments. Yet, they do not faithfully reproduce many neuropathological manifestations in human diabetics. The difficulty begins with the fact that it is not possible to obtain in rodents a qualitative and quantitative expression of the clinical symptoms that are frequently present in neuropathic diabetic patients, including spontaneous pain of different characteristics (e.g. prickling, tingling, burning, squeezing), paresthesia and numbness. As symptomatic changes constitute an important parameter of therapeutic outcome, this may well underlie the failure of some aforementioned drugs in clinical trials despite their good performance in experimental tests […] Development of nerve dysfunction in diabetic rodents also does not follow the common natural history of human DPN. […] Besides the lack of anatomical resemblance, the changes in disease severity are often missing in these models. […] importantly, foot ulcers that occur as a late complication in 15% of all individuals with diabetes [14] do not spontaneously develop in hyperglycemic rodents. Superimposed injury by experimental procedure in the foot pads of diabetic rats or mice may lend some insight into the impaired wound healing in diabetes [278] but is not reflective of the chronic, accumulating pathological changes in diabetic feet of human counterparts. Another salient feature of human DPN that has not been described in animals is the predominant sensory and autonomic nerve damage versus minimal involvement of motor fibers [279]. This should elicit particular caution as the selective susceptibility is critical to our true understanding of the etiopathogenesis underlying distal sensorimotor polyneuropathy in diabetes. In addition to the lack of specificity, most animal models studied only cover a narrow spectrum of clinical DPN and have not successfully duplicated syndromes including proximal motor neuropathy and focal lesions [279].
Morphologically, fiber atrophy and axonal loss exist in STZ-rats and other diabetic rodents but are much milder compared to the marked degeneration and loss of myelinated and unmyelinated nerves readily observed in human specimens [280]. Of significant note, rodents are notoriously resistant to developing some of the histological hallmarks seen in diabetic patients, such as segmental and paranodal demyelination […] the simultaneous presence of degenerating and regenerating fibers that is characteristic of early DPN has not been clearly demonstrated in these animals [44]. Since such dynamic nerve degeneration/regeneration signifies an active state of nerve repair and is most likely to be amenable to therapeutic intervention, absence of this property makes rodent models a poor tool in both deciphering disease pathogenesis and designing treatment approaches […] With particular respect to neuroanatomy, a peripheral axon in humans can reach as long as one meter [296] whereas the maximal length of the axons innervating the hind limb is five centimeters in mice and twelve centimeters in rats. This short length makes it impossible to study in rodents the prominent length dependency and dying-back feature of peripheral nerve dysfunction that characterizes human DPN. […] For decades the cytoarchitecture of human islets was assumed to be just like that in rodents, with a clear anatomical subdivision of β-cells and other cell types. By using confocal microscopy and multi-fluorescent labeling, it was finally uncovered that human islets have not only a substantially lower percentage of β-cell population, but also a mixed — rather than compartmentalized — organization of the different cell types [297]. This cellular arrangement was demonstrated to directly alter the functional performance of human islets as opposed to rodent islets. Although it is not known whether such profound disparities in cell composition and association also exist in the PNS, it might well be anticipated considering the many sophisticated sensory and motor activities that are unique to humans. Considerable species differences also manifest at the molecular level. […] At least 80% of human genes have a counterpart in the mouse and rat genome. However, temporal and spatial expression of these genes can vary remarkably between humans and rodents, in terms of both extent and isoform specificity.”

“Ultimately, a fundamental problem associated with resorting to rodents in DPN research is that a human disorder which takes decades to develop and progress is being studied in organisms with a maximum lifespan of 2-3 years. […] It is […] fair to say that a full clinical spectrum of the maturity-onset DPN likely requires a length of time exceeding the longevity of rodents to present, and that diabetic rodent models at best only help illustrate the very early aspects of the entire disease syndrome. Since none of the early pathogenetic pathways revealed in diabetic rodents will contribute to DPN in a quantitatively and temporally uniform fashion throughout the prolonged natural history of this disease, it is not surprising that a handful of inhibitors developed against these processes have not benefited patients with relatively long-standing neuropathy. As a matter of fact, any agents targeting single biochemical insults would be too little too late to treat a chronic neurological disorder with established nerve damage and pathogenetic heterogeneity […] It is important to point out that the present review does not argue against the ability of animal models to shed light on basic molecular, cellular and physiological processes that are shared among species. Undoubtedly, animal models of diabetes have provided abundant insights into the disease biology of DPN. Nevertheless, the lack of any meaningful advance in identifying a promising pharmacological target necessitates a reexamination of the validity of current DPN models, as well as a plausible alternative methodology for scientific approaches and disease intervention. […] we conclude that the fundamental species differences have led to misinterpretation of rodent data and overall failure of pharmacological investment. As more is being learned, it is becoming the prevailing view that DPN is a chronic, heterogeneous disease unlikely to benefit from targeting specific and early pathogenetic components revealed by animal studies.”


February 13, 2018 Posted by | Books, Diabetes, Genetics, Medicine, Neurology, Pharmacology


Complexity theory is a topic I’ve previously been exposed to through various channels; examples include Institute for Advanced Study comp sci lectures, notes included in a few computer science-related books like Louridas and Dasgupta, and probably also e.g. some of the systems analysis/-science books I’ve read – Konieczny et al.’s text, which I recently finished reading, is another example of a book which peripherally covers content also covered in this book. Holland’s book pretty much doesn’t cover computational complexity theory at all, but some knowledge of computer science will probably still be useful, as e.g. concepts from graph theory are touched upon/applied in the coverage; I am also aware that I derived some benefit while reading this book from having previously spent time on signalling models in microeconomics, as there were conceptual similarities between those models and their properties and some of the stuff Holland includes. I’m not really sure if you need to know ‘anything’ to read the book and get something out of it, but although Holland doesn’t use much mathematical formalism, some of the ‘hidden’ formalism lurking in the background will probably not be easy to understand if you e.g. haven’t seen a mathematical equation since the 9th grade, and people who e.g. have seen hierarchical models before will definitely have a greater appreciation of some of the material covered than people who have not. Obviously I’ve read a lot of stuff over time that made the book easier for me to read and understand than it otherwise would have been, but how easy would the book have been for me to read if I hadn’t read those other things? It’s really difficult for me to say. I found the book hard to judge/rate/evaluate, so I decided against rating it on goodreads.

Below I have added some quotes from the book.

“[C]omplex systems exhibit a distinctive property called emergence, roughly described by the common phrase ‘the action of the whole is more than the sum of the actions of the parts’. In addition to complex systems, there is a subfield of computer science, called computational complexity, which concerns itself with the difficulty of solving different kinds of problems. […] The object of the computational complexity subfield is to assign levels of difficulty — levels of complexity — to different collections of problems. There are intriguing conjectures about these levels of complexity, but an understanding of the theoretical framework requires a substantial background in theoretical computer science — enough to fill an entire book in this series. For this reason, and because computational complexity does not touch upon emergence, I will confine this book to systems and the ways in which they exhibit emergence. […] emergent behaviour is an essential requirement for calling a system ‘complex’. […] Hierarchical organization is […] closely tied to emergence. Each level of a hierarchy typically is governed by its own set of laws. For example, the laws of the periodic table govern the combination of hydrogen and oxygen to form H2O molecules, while the laws of fluid flow (such as the Navier-Stokes equations) govern the behaviour of water. The laws of a new level must not violate the laws of earlier levels — that is, the laws at lower levels constrain the laws at higher levels. […] Restated for complex systems: emergent properties at any level must be consistent with interactions specified at the lower level(s). […] Much of the motivation for treating a system as complex is to get at questions that would otherwise remain inaccessible. Often the first steps in acquiring a deeper understanding are through comparisons of similar systems. By treating hierarchical organization as a sine qua non for complexity, we focus on the interactions of emergent properties at various levels. The combination of ‘top–down’ effects (as when the daily market average affects actions of the buyers and sellers in an equities market) and ‘bottom–up’ effects (the interactions of the buyers and sellers determine the market average) is a pervasive feature of complex systems. The present exposition, then, centres on complex systems where emergence, and the reduction(s) involved, offer a key to new kinds of understanding.”

“As the field of complexity studies has developed, it has split into two subfields that examine two different kinds of emergence: the study of complex physical systems (CPS) and the study of complex adaptive systems (CAS): The study of complex physical systems focuses on geometric (often lattice-like) arrays of elements, in which interactions typically depend only on effects propagated from nearest neighbours. […] the study of CPS has a distinctive set of tools and questions centring on elements that have fixed properties – atoms, the squares of the cellular automaton, and the like. […] The tools used for studying CPS come, with rare exceptions, from a well-developed part of mathematics, the theory of partial differential equations […] CAS studies, in contrast to CPS studies, concern themselves with elements that are not fixed. The elements, usually called agents, learn or adapt in response to interactions with other agents. […] It is unusual for CAS agents to converge, even momentarily, to a single ‘optimal’ strategy, or to an equilibrium. As the agents adapt to each other, new agents with new strategies usually emerge. Then each new agent offers opportunities for still further interactions, increasing the overall complexity. […] The complex feedback loops that form make it difficult to analyse, or even describe, CAS. […] Analysis of complex systems almost always turns on finding recurrent patterns in the system’s ever-changing configurations. […] perpetual novelty, produced with a limited number of rules or laws, is a characteristic of most complex systems: DNA consists of strings of the same four nucleotides, yet no two humans are exactly alike; the theorems of Euclidean geometry are based on just five axioms, yet new theorems are still being derived after two millennia; and so it is for the other complex systems.”

“In a typical physical system the whole is (at least approximately) the sum of the parts, making the use of PDEs straightforward for a mathematician, but in a typical generated system the parts are put together in an interconnected, non-additive way. It is possible to write a concise set of partial differential equations to describe the basic elements of a computer, say an interconnected set of binary counters, but the existing theory of PDEs does little to increase our understanding of the circuits so-described. The formal grammar approach, in contrast, has already considerably increased our understanding of computer languages and programs. One of the major tasks of this book is to use a formal grammar to convert common features of complex systems into ‘stylized facts’ that can be examined carefully within the grammar.”
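
As a toy illustration of the ‘generated system’ idea, here is a small Python sketch of my own (the two rewrite rules are invented for illustration; this is not a grammar from the book): a tiny set of production rules applied repeatedly to a string of building blocks already produces structures whose size and internal structure keep growing.

# A minimal rewriting grammar (an L-system-style example of my own devising).
# Each step replaces every symbol in the string according to the rules below.
rules = {"A": "AB", "B": "A"}

def generate(axiom, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s

for n in range(7):
    print(n, generate("A", n))
# String lengths grow as 1, 2, 3, 5, 8, 13, 21: two rules, perpetual novelty.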

“Many CPS problems (e.g. the flow of electrons in superconductive materials) […] involve flows — flows that are nicely described by networks. Networks provide a detailed snapshot of CPS and complex adaptive systems (CAS) interactions at any given point in their development, but there are few studies of the evolution of networks […]. The distinction between the fast dynamic of flows (change of state) and the slow dynamic of adaptation (change of the network of interactions) often distinguishes CPS studies from CAS studies. […] all well-studied CAS exhibit lever points, points where a small directed action causes large predictable changes in aggregate behaviour, as when a vaccine produces long-term changes in an immune system. At present, lever points are almost always located by trial and error. However, by extracting mechanisms common to different lever points, a relevant CAS theory would provide a principled way of locating and testing lever points. […] activities that are easy to observe in one complex system often suggest ‘where to look’ in other complex systems where the activities are difficult to observe.”

“Observation shows that agents acting in a niche continually undergo ‘improvements’, without ever completely outcompeting other agents in the community. These improvements may come about in either of two ways: (i) an agent may become more of a generalist, processing resources from a wider variety of sources, or (ii) it may become more specialized, becoming more efficient than its competitors at exploiting a particular source of a vital resource. Both changes allow for still more interactions and still greater diversity. […] All CAS that have been examined closely exhibit trends toward increasing numbers of specialists.”

“Emergence is tightly tied to the formation of boundaries. These boundaries can arise from symmetry breaking, […] or they can arise by assembly of component building blocks […]. For CAS, the agent-defining boundaries determine the interactions between agents. […] Adaptation, and the emergence of new kinds of agents, then arises from changes in the relevant boundaries. Typically, a boundary only looks to a small segment of a signal, a tag, to determine whether or not the signal can pass through the boundary. […] an agent can be modelled by a set of conditional IF/THEN rules that represent both the effects of boundaries and internal signal-processing. Because tags are short, a given signal may carry multiple tags, and the rules that process signals can require the presence of more than one tag for the processing to proceed. Agents are parallel processors in the sense that all rules that are satisfied simultaneously in the agent are executed simultaneously. As a result, the interior of an agent will usually be filled with multiple signals […]. The central role of tags in routing signals through this complex interior puts emphasis on the mechanisms for tag modification as a means of adaptation. Recombination of extant conditions and signals […] turns tags into building blocks for specifying new routes. Parallel processing then makes it possible to test new routes so formed without seriously disrupting extant useful routes. Sophisticated agents have another means of adaptation: anticipation (‘lookahead’). If an agent has a set of rules that simulates part of its world, then it can run this internal model to examine the outcomes of different action sequences before those actions are executed.”
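
To make the tag-and-rule description above a bit more concrete, here is a small Python sketch of my own (the signals, tags and rules are made up for illustration, and this is of course not Holland’s actual formalism): rules whose conditions are short tags, with every satisfied rule firing ‘in parallel’ on the agent’s current pool of interior signals.

# Hedged sketch of a tag-based IF/THEN agent. Signals are strings; a rule's
# condition is a set of required tags (short substrings); all rules whose tags
# are matched by the current signal pool fire in the same step.
from dataclasses import dataclass

@dataclass
class Rule:
    required_tags: tuple     # tags that must all appear in some incoming signal
    output_signal: str       # signal posted back into the agent's interior

def step(rules, signals):
    """One parallel processing step: every satisfied rule fires."""
    fired = []
    for rule in rules:
        if all(any(tag in s for s in signals) for tag in rule.required_tags):
            fired.append(rule.output_signal)
    return signals | set(fired)   # new signals join the interior pool

rules = [
    Rule(("sugar",), "metabolize#1"),              # hypothetical rules
    Rule(("metabolize", "#1"), "divide-signal"),   # a rule requiring two tags
]
signals = {"sugar@membrane"}
for _ in range(3):
    signals = step(rules, signals)
print(signals)   # the interior fills with signals routed by their tags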

“The flow of signals within and between agents can be represented by a directed network, where nodes represent rules, and there is a connection from node x to node y if rule x sends a signal satisfying a condition of rule y. Then, the flow of signals over this network spells out the performance of the agent at a point in time. […] The networks associated with CAS are typically highly tangled, with many loops providing feedback and recirculation […]. An agent adapts by changing its signal-processing rules, with corresponding changes in the structure of the associated network. […] Most machine-learning models, including ‘artificial neural networks’ and ‘Bayesian networks’, lack feedback cycles — they are often called ‘feedforward networks’ (in contrast to networks with substantial feedback). In the terms used in Chapter 4, such networks have no ‘recirculation’ and hence have no autonomous subsystems. Networks with substantial numbers of cycles are difficult to analyse, but a large number of cycles is the essential requirement for the autonomous internal models that make lookahead and planning possible. […] The complexities introduced by loops have so far resisted most attempts at analysis. […] The difficulties of analysing the behaviour of networks with many interior loops has, both historically and currently, encouraged the study of networks without loops called trees. Trees occur naturally in the study of games. […] because trees are easier to analyse, most artificial neural networks constructed for pattern recognition are trees. […] Evolutionary game theory makes use of the tree structure of games to study the ways in which agents can modify their strategies as they interact with other agents playing the same game. […] However, evolutionary game theory does not concern itself with the evolution of the game’s laws.”
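
The point about recirculation versus feedforward (tree-like) structure is easy to make concrete with a small graph representation. The sketch below is my own (the book gives no code): the rule network is a plain dictionary, and a depth-first search checks whether any feedback loop exists.

# Represent the rule network as node -> list of successors and test for cycles.
def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def visit(node):
        colour[node] = GREY
        for succ in graph.get(node, []):
            if colour.get(succ, WHITE) == GREY:
                return True               # back edge: a feedback loop exists
            if colour.get(succ, WHITE) == WHITE and visit(succ):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in graph)

feedforward = {"x": ["y"], "y": ["z"], "z": []}      # tree-like, no recirculation
recurrent   = {"x": ["y"], "y": ["z"], "z": ["x"]}   # loop x -> y -> z -> x
print(has_cycle(feedforward), has_cycle(recurrent))  # False True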

“It has been observed that innovation in CAS is mostly a matter of combining well-known components in new ways. […] Recombination abets the formation of new cascades. […] By extracting general mechanisms that modify CAS, such as recombination, we go from examination of particular instances to a unified study of characteristic CAS properties. The mechanisms of interest act mainly on extant substructures, using them as building blocks for more complex substructures […]. Because signals and boundaries are a pervasive feature of CAS, their modification has a central role in this adaptive process.”


February 12, 2018 Posted by | Books, Computer science, Mathematics

Systems Biology (II)

Some observations from the book’s chapter 3 below:

“Without regulation, biological processes would become progressively more and more chaotic. In living cells the primary source of information is genetic material. Studying the role of information in biology involves signaling (i.e. spatial and temporal transfer of information) and storage (preservation of information). Regarding the role of the genome we can distinguish three specific aspects of biological processes: steady-state genetics, which ensures cell-level and body homeostasis; genetics of development, which controls cell differentiation and genesis of the organism; and evolutionary genetics, which drives speciation. […] The ever growing demand for information, coupled with limited storage capacities, has resulted in a number of strategies for minimizing the quantity of the encoded information that must be preserved by living cells. In addition to combinatorial approaches based on noncontiguous gene structure, self-organization plays an important role in cellular machinery. Nonspecific interactions with the environment give rise to coherent structures despite the lack of any overt information store. These mechanisms, honed by evolution and ubiquitous in living organisms, reduce the need to directly encode large quantities of data by adopting a systemic approach to information management.”

“Information is commonly understood as a transferable description of an event or object. Information transfer can be either spatial (communication, messaging or signaling) or temporal (implying storage). […] The larger the set of choices, the lower the likelihood [of] making the correct choice by accident and — correspondingly — the more information is needed to choose correctly. We can therefore state that an increase in the cardinality of a set (the number of its elements) corresponds to an increase in selection indeterminacy. This indeterminacy can be understood as a measure of “a priori ignorance”. […] Entropy determines the uncertainty inherent in a given system and therefore represents the relative difficulty of making the correct choice. For a set of possible events it reaches its maximum value if the relative probabilities of each event are equal. Any information input reduces entropy — we can therefore say that changes in entropy are a quantitative measure of information. […] Physical entropy is highest in a state of equilibrium, i.e. lack of spontaneity (ΔG = 0), which effectively terminates the given reaction. Regulatory processes which counteract the tendency of physical systems to reach equilibrium must therefore oppose increases in entropy. It can be said that a steady inflow of information is a prerequisite of continued function in any organism. As selections are typically made at the entry point of a regulatory process, the concept of entropy may also be applied to information sources. This approach is useful in explaining the structure of regulatory systems which must be “designed” in a specific way, reducing uncertainty and enabling accurate, error-free decisions.

The fire ant exudes a pheromone which enables it to mark sources of food and trace its own path back to the colony. In this way, the ant conveys pathing information to other ants. The intensity of the chemical signal is proportional to the abundance of the source. Other ants can sense the pheromone from a distance of several (up to a dozen) centimeters and thus locate the source themselves. […] As can be expected, an increase in the entropy of the information source (i.e. the measure of ignorance) results in further development of regulatory systems — in this case, receptors capable of receiving signals and processing them to enable accurate decisions. Over time, the evolution of regulatory mechanisms increases their performance and precision. The purpose of various structures involved in such mechanisms can be explained on the grounds of information theory. The primary goal is to select the correct input signal, preserve its content and avoid or eliminate any errors.”
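
The entropy claims above are just Shannon entropy; a few lines of Python (my own illustration, not the authors’) show that uncertainty is maximal when all outcomes are equally probable, and that any ‘information input’ which sharpens the distribution lowers it.

import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: maximal for four outcomes
print(entropy([0.7, 0.1, 0.1, 0.1]))       # ~1.36 bits: partial information received
print(entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0 bits: the outcome is fully determined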

“Genetic information stored in nucleotide sequences can be expressed and transmitted in two ways:
a. via replication (in cell division);
b. via transcription and translation (also called gene expression) […]
Both processes act as effectors and can be triggered by certain biological signals transferred on request.
Gene expression can be defined as a sequence of events which lead to the synthesis of proteins or their products required for a particular function. In cell division, the goal of this process is to generate a copy of the entire genetic code (S phase), whereas in gene expression only selected fragments of DNA (those involved in the requested function) are transcribed and translated. […] Transcription calls for exposing a section of the cell’s genetic code and although its product (RNA) is short-lived, it can be recreated on demand, just like a carbon copy of a printed text. On the other hand, replication affects the entire genetic material contained in the cell and must conform to stringent precision requirements, particularly as the size of the genome increases.”

“The magnitude of effort involved in replication of genetic code can be visualized by comparing the DNA chain to a zipper […]. Assuming that the zipper consists of three pairs of interlocking teeth per centimeter (300 per meter) and that the human genome is made up of 3 billion […] base pairs, the total length of our uncoiled DNA in “zipper form” would be equal to […] 10,000 km […] If we were to unfasten the zipper at a rate of 1 m per second, the entire unzipping process would take approximately 3 months […]. This comparison should impress upon the reader the length of the DNA chain and the precision with which individual nucleotides must be picked to ensure that the resulting code is an exact copy of the source. It should also be noted that for each base pair the polymerase enzyme needs to select an appropriate matching nucleotide from among four types of nucleotides present in the solution, and attach it to the chain (clearly, no such problem occurs in zippers). The reliability of an average enzyme is on the order of 10⁻³–10⁻⁴, meaning that one error occurs for every 1,000–10,000 interactions between the enzyme and its substrate. Given this figure, replication of 3×10⁹ base pairs would introduce approximately 3 million errors (mutations) per genome, resulting in a highly inaccurate copy. Since the observed reliability of replication is far higher, we may assume that some corrective mechanisms are involved. In reality, the remarkable precision of genetic replication is ensured by DNA repair processes, and in particular by the corrective properties of polymerase itself.
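
The numbers in the zipper analogy are easy to reproduce; the short script below just redoes the book’s arithmetic (the 1 m/s unzipping speed and the 10⁻³ error rate are the passage’s own assumptions).

# Back-of-the-envelope numbers from the zipper analogy above.
base_pairs = 3e9                        # human genome, base pairs
bp_per_m   = 300                        # 3 'teeth pairs' per cm
zipper_m   = base_pairs / bp_per_m
print(zipper_m / 1000, "km of zipper")  # 10,000 km

seconds = zipper_m / 1.0                # unzipping at 1 m per second
print(seconds / (3600 * 24 * 30), "months to unzip")   # roughly 3-4 months

error_rate = 1e-3                       # one error per ~1,000 enzyme-substrate interactions
print(base_pairs * error_rate, "errors per replication if nothing were corrected")  # ~3 million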

“Many mutations are caused by the inherent chemical instability of nucleic acids: for example, cytosine may spontaneously convert to uracil. In the human genome such an event occurs approximately 100 times per day; however, uracil is not normally encountered in DNA and its presence alerts defensive mechanisms which correct the error. Another type of mutation is spontaneous depurination, which also triggers its own, dedicated error correction procedure. Cells employ a large number of corrective mechanisms […] DNA repair mechanisms may be treated as an “immune system” which protects the genome from loss or corruption of genetic information. The unavoidable mutations which sometimes occur despite the presence of error-correction mechanisms can be masked due to doubled presentation (alleles) of genetic information. Thus, most mutations are recessive and not expressed in the phenotype. As the length of the DNA chain increases, mutations become more probable. It should be noted that the number of nucleotides in DNA is greater than the relative number of amino acids participating in polypeptide chains. This is due to the fact that each amino acid is encoded by exactly three nucleotides — a general principle which applies to all living organisms. […] Fidelity is, of course, fundamentally important in DNA replication as any harmful mutations introduced in its course are automatically passed on to all successive generations of cells. In contrast, transcription and translation processes can be more error-prone as their end products are relatively short-lived. Of note is the fact that faulty transcripts appear in relatively low quantities and usually do not affect cell functions, since regulatory processes ensure continued synthesis of the required substances until a suitable level of activity is reached. Nevertheless, it seems that reliable transcription of genetic material is sufficiently significant for cells to have developed appropriate proofreading mechanisms, similar to those which assist replication. […] the entire information pathway — starting with DNA and ending with active proteins — is protected against errors. We can conclude that fallibility is an inherent property of genetic information channels, and that in order to perform their intended function, these channels require error-correction mechanisms.”

The discrete nature of genetic material is an important property which distinguishes prokaryotes from eukaryotes. […] The ability to select individual nucleotide fragments and construct sequences from predetermined “building blocks” results in high adaptability to environmental stimuli and is a fundamental aspect of evolution. The discontinuous nature of genes is evidenced by the presence of fragments which do not convey structural information (introns), as opposed to structure-encoding fragments (exons). The initial transcript (pre-mRNA) contains introns as well as exons. In order to provide a template for protein synthesis, it must undergo further processing (also known as splicing): introns must be cleaved and exon fragments attached to one another. […] Recognition of intron-exon boundaries is usually very precise, while the reattachment of adjacent exons is subject to some variability. Under certain conditions, alternative splicing may occur, where the ordering of the final product does not reflect the order in which exon sequences appear in the source chain. This greatly increases the number of potential mRNA combinations and thus the variety of resulting proteins. […] While access to energy sources is not a major problem, sources of information are usually far more difficult to manage — hence the universal tendency to limit the scope of direct (genetic) information storage. Reducing the length of genetic code enables efficient packing and enhances the efficiency of operations while at the same time decreasing the likelihood of errors. […] The number of genes identified in the human genome is lower than the number of distinct proteins by a factor of 4; a difference which can be attributed to alternative splicing. […] This mechanism increases the variety of protein structures without affecting core information storage, i.e. DNA sequences. […] Primitive organisms often possess nearly as many genes as humans, despite the essential differences between both groups. Interspecies diversity is primarily due to the properties of regulatory sequences.”
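
A crude way to see how alternative splicing multiplies protein variety without adding to the genome (my own toy example; real splicing is far more constrained than this): treat the middle exons of a five-exon gene as optional and count the possible mature transcripts.

from itertools import combinations

exons = ["E1", "E2", "E3", "E4", "E5"]
middle = exons[1:-1]                        # assume the first and last exon are always kept

variants = []
for k in range(len(middle) + 1):
    for chosen in combinations(middle, k):  # any subset of middle exons may be retained
        variants.append([exons[0], *chosen, exons[-1]])

print(len(variants), "possible mature mRNAs from", len(exons), "exons")   # 8 from 5 exons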

The discontinuous nature of genes is evolutionarily advantageous but comes at the expense of having to maintain a nucleus where such splicing processes can be safely conducted, in addition to efficient transport channels allowing transcripts to penetrate the nuclear membrane. While it is believed that at early stages of evolution RNA was the primary repository of genetic information, its present function can best be described as an information carrier. Since unguided proteins cannot ensure sufficient specificity of interaction with nucleic acids, protein-RNA complexes are used often in cases where specific fragments of genetic information need to be read. […] The use of RNA in protein complexes is common across all domains of the living world as it bridges the gap between discrete and continuous storage of genetic information.”

Epigenetic differentiation mechanisms are particularly important in embryonic development. […] Unlike the function of mature organisms, embryonic programming refers to structures which do not yet exist but which need to be created through cell proliferation and differentiation. […] Differentiation of cells results in phenotypic changes. This phenomenon is the primary difference between development genetics and steady-state genetics. Functional differences are not, however, associated with genomic changes: instead they are mediated by the transcriptome where certain genes are preferentially selected for transcription while others are suppressed. […] In a mature, specialized cell only a small portion of the transcribable genome is actually expressed. The remainder of the cell’s genetic material is said to be silenced. Gene silencing is a permanent condition. Under normal circumstances mature cells never alter their function, although such changes may be forced in a laboratory setting […] Cells which make up the embryo at a very early stage of development are pluripotent, meaning that their purpose can be freely determined and that all of their genetic information can potentially be expressed (under certain conditions). […] At each stage of the development process the scope of pluripotency is reduced until, ultimately, the cell becomes monopotent. Monopotency implies that the final function of the cell has already been determined, although the cell itself may still be immature. […] functional dissimilarities between specialized cells are not associated with genetic mutations but rather with selective silencing of genes. […] Most genes which determine biological functions have a biallelic representation (i.e. a representation consisting of two alleles). The remainder (approximately 10 % of genes) is inherited from one specific parent, as a result of partial or complete silencing of their sister alleles (called paternal or maternal imprinting) which occurs during gametogenesis. The suppression of a single copy of the X chromosome is a special case of this phenomenon.”

Evolutionary genetics is subject to two somewhat contradictory criteria. On the one hand, there is clear pressure on accurate and consistent preservation of biological functions and structures while on the other hand it is also important to permit gradual but persistent changes. […] the observable progression of adaptive traits which emerge as a result of evolution suggests a mechanism which promotes constructive changes over destructive ones. Mutational diversity cannot be considered truly random if it is limited to certain structures or functions. […] Approximately 50 % of the human genome consists of mobile segments, capable of migrating to various positions in the genome. These segments are called transposons and retrotransposons […] The mobility of genome fragments not only promotes mutations (by increasing the variability of DNA) but also affects the stability and packing of chromatin strands wherever such mobile sections are reintegrated with the genome. Under normal circumstances the activity of mobile sections is tempered by epigenetic mechanisms […]; however in certain situations gene mobility may be upregulated. In particular, it seems that in “prehistoric” (remote evolutionary) times such events occurred at a much faster pace, accelerating the rate of genetic changes and promoting rapid evolution. Cells can actively promote mutations by way of the so-called AID process (activity-dependent cytosine deamination). It is an enzymatic mechanism which converts cytosine into uracil, thereby triggering repair mechanisms and increasing the likelihood of mutations […] The existence of AID proves that cells themselves may trigger evolutionary changes and that the role of mutations in the emergence of new biological structures is not strictly passive.”

“Regulatory mechanisms which receive signals characterized by high degrees of uncertainty must be able to make informed choices to reduce the overall entropy of the system they control. This property is usually associated with development of information channels. Special structures ought to be exposed within information channels connecting systems of different character, as for example those linking transcription to translation or enabling transduction of signals through the cellular membrane. Examples of structures which convey highly entropic information are receptor systems associated with blood coagulation and immune responses. The regulatory mechanism which triggers an immune response relies on relatively simple effectors (complement factor enzymes, phagocytes and killer cells) coupled to a highly evolved receptor system, represented by specific antibodies and an organized set of cells. Compared to such advanced receptors, the structures which register the concentration of a given product (e.g. glucose in blood) are rather primitive. Advanced receptors enable the immune system to recognize and verify information characterized by high degrees of uncertainty. […] In sequential processes it is usually the initial stage which poses the most problems and requires the most information to complete successfully. It should come as no surprise that the most advanced control loops are those associated with initial stages of biological pathways.”


February 10, 2018 Posted by | Biology, Books, Chemistry, Evolutionary biology, Genetics, Immunology, Medicine

Endocrinology (part 4 – reproductive endocrinology)

Some observations from chapter 4 of the book below.

“*♂. The whole process of spermatogenesis takes approximately 74 days, followed by another 12-21 days for sperm transport through the epididymis. This means that events which may affect spermatogenesis may not be apparent for up to three months, and successful induction of spermatogenesis treatment may take 2 years. *♀. From primordial follicle to primary follicle, it takes about 180 days (a continuous process). It is then another 60 days to form a preantral follicle which then proceeds to ovulation three menstrual cycles later. Only the last 2-3 weeks of this process is under gonadotrophin drive, during which time the follicle grows from 2 to 20mm.”

“Hirsutism (not a diagnosis in itself) is the presence of excess hair growth in ♀ as a result of androgen production and skin sensitivity to androgens. […] In ♀, testosterone is secreted primarily by the ovaries and adrenal glands, although a significant amount is produced by the peripheral conversion of androstenedione and DHEA. Ovarian androgen production is regulated by luteinizing hormone, whereas adrenal production is ACTH-dependent. The predominant androgens produced by the ovaries are testosterone and androstenedione, and the adrenal glands are the main source of DHEA. Circulating testosterone is mainly bound to sex hormone-binding globulin (SHBG), and it is the free testosterone which is biologically active. […] Slowly progressive hirsutism following puberty suggests a benign cause, whereas rapidly progressive hirsutism of recent onset requires further immediate investigation to rule out an androgen-secreting neoplasm. [My italics, US] […] Serum testosterone should be measured in all ♀ presenting with hirsutism. If this is <5nmol/L, then the risk of a sinister cause for her hirsutism is low.”

“Polycystic ovary syndrome (PCOS) *A heterogeneous clinical syndrome characterized by hyperandrogenism, mainly of ovarian origin, menstrual irregularity, and hyperinsulinaemia, in which other causes of androgen excess have been excluded […] *A distinction is made between polycystic ovary morphology on ultrasound (PCO, which also occurs in congenital adrenal hyperplasia, acromegaly, Cushing’s syndrome, and testosterone-secreting tumours) and PCOS – the syndrome. […] PCOS is the most common endocrinopathy in ♀ of reproductive age; >95% of ♀ presenting to outpatients with hirsutism have PCOS. *The estimated prevalence of PCOS ranges from 5 to 10% on clinical criteria. Polycystic ovaries on US alone are present in 20-25% of ♀ of reproductive age. […] family history of type 2 diabetes mellitus is […] more common in ♀ with PCOS. […] Approximately 70% of ♀ with PCOS are insulin-resistant, depending on the definition. […] Type 2 diabetes mellitus is 2-4 x more common in ♀ with PCOS. […] Hyperinsulinaemia is exacerbated by obesity but can also be present in lean ♀ with PCOS. […] Insulin […] inhibits SHBG synthesis by the liver, with a consequent rise in free androgen levels. […] Symptoms often begin around puberty, after weight gain, or after stopping the oral contraceptive pill […] Oligo-/amenorrhoea [is present in] 70% […] Hirsutism [is present in] 66% […] Obesity [is present in] 50% […] *Infertility (30%). PCOS accounts for 75% of cases of anovulatory infertility. The risk of spontaneous miscarriage is also thought to be higher than the general population, mainly because of obesity. […] The aims of investigations [of PCOS] are mainly to exclude serious underlying disorders and to screen for complications, as the diagnosis is primarily clinical […] Studies have uniformly shown that weight reduction in obese ♀ with PCOS will improve insulin sensitivity and significantly reduce hyperandrogenaemia. Obese ♀ are less likely to respond to antiandrogens and infertility treatment.”

“Androgen-secreting tumours [are] [r]are tumours of the ovary or adrenal gland which may be benign or malignant, which cause virilization in ♀ through androgen production. […] Virilization […] [i]ndicates severe hyperandrogenism, is associated with clitoromegaly, and is present in 98% of ♀ with androgen-producing tumours. Not usually a feature of PCOS. […] Androgen-secreting ovarian tumours[:] *75% develop before the age of 40 years. *Account for 0.4% of all ovarian tumours; 20% are malignant. *Tumours are 5-25cm in size. The larger they are, the more likely they are to be malignant. They are rarely bilateral. […] Androgen-secreting adrenal tumours[:] *50% develop before the age of 50 years. *Larger tumours […] are more likely to be malignant. *Usually with concomitant cortisol secretion as a variant of Cushing’s syndrome. […] Symptoms and signs of Cushing’s syndrome are present in many of ♀ with adrenal tumours. […] Onset of symptoms. Usually recent onset of rapidly progressive symptoms. […] Malignant ovarian and adrenal androgen-secreting tumours are usually resistant to chemotherapy and radiotherapy. […] *Adrenal tumours. 20% 5-year survival. Most have metastatic disease at the time of surgery. *Ovarian tumours. 30% disease-free survival and 40% overall survival at 5 years. […] Benign tumours. *Prognosis excellent. *Hirsutism improves post-operatively, but clitoromegaly, male pattern balding, and deep voice may persist.”

“*Oligomenorrhoea is defined as the reduction in the frequency of menses to <9 periods a year. *1° amenorrhoea is the failure of menarche by the age of 16 years. Prevalence ~0.3%. *2° amenorrhoea refers to the cessation of menses for >6 months in ♀ who had previously menstruated. Prevalence ~3%. […] Although the list of causes is long […], the majority of cases of secondary amenorrhoea can be accounted for by four conditions: *Polycystic ovary syndrome. *Hypothalamic amenorrhoea. *Hyperprolactinaemia. *Ovarian failure. […] PCOS is the only common endocrine cause of amenorrhoea with normal oestrogenization – all other causes are oestrogen-deficient. Women with PCOS, therefore, are at risk of endometrial hyperplasia, and all others are at risk of osteoporosis. […] Anosmia may indicate Kallmann’s syndrome. […] In routine practice, a common differential diagnosis is between a mild version of PCOS and hypothalamic amenorrhoea. The distinction between these conditions may require repeated testing, as a single snapshot may not discriminate. The reason to be precise is that PCOS is oestrogen-replete and will, therefore, respond to clomiphene citrate (an antioestrogen) for fertility. HA will be oestrogen-deficient and will need HRT and ovulation induction with pulsatile GnRH or hMG [human Menopausal Gonadotropins – US]. […] 75% of ♀ who develop 2° amenorrhoea report hot flushes, night sweats, mood changes, fatigue, or dyspareunia; symptoms may precede the onset of menstrual disturbances.”

“POI [Premature Ovarian Insufficiency] is a disorder characterized by amenorrhoea, oestrogen deficiency, and elevated gonadotrophins, developing in ♀ <40 years, as a result of loss of ovarian follicular function. […] *Incidence – 0.1% of ♀ <30 years and 1% of those <40 years. *Accounts for 10% of all cases of 2° amenorrhoea. […] POI is the result of accelerated depletion of ovarian germ cells. […] POI is usually permanent and progressive, although a remitting course is also experienced and cannot be fully predicted, so all women must know that pregnancy is possible, even though fertility treatments are not effective (often a difficult paradox to describe). Spontaneous pregnancy has been reported in 5%. […] 80% of [women with Turner’s syndrome] have POI. […] All ♀ presenting with hypergonadotrophic amenorrhoea below age 40 should be karyotyped.”

“The menopause is the permanent cessation of menstruation as a result of ovarian failure and is a retrospective diagnosis made after 12 months of amenorrhoea. The average age at the time of the menopause is ~50 years, although smokers reach the menopause ~2 years earlier. […] Cycles gradually become increasingly anovulatory and variable in length (often shorter) from about 4 years prior to the menopause. Oligomenorrhoea often precedes permanent amenorrhoea. In 10% of ♀, menses cease abruptly, with no preceding transitional period. […] During the perimenopausal period, there is an accelerated loss of bone mineral density (BMD), rendering post-menopausal ♀ more susceptible to osteoporotic fractures. […] Post-menopausal ♀ are 2-3 x more likely to develop IHD [ischaemic heart disease] than premenopausal ♀, even after age adjustments. The menopause is associated with an increase in risk factors for atherosclerosis, including less favourable lipid profile, insulin sensitivity, and an ↑ thrombotic tendency. […] ♀ are 2-3 x more likely to develop Alzheimer’s disease than ♂. It is suggested that oestrogen deficiency may play a role in the development of dementia. […] The aim of treatment of perimenopausal ♀ is to alleviate menopausal symptoms and optimize quality of life. The majority of women with mild symptoms require no HRT. […] There is an ↑ risk of breast cancer in HRT users which is related to the duration of use. The risk increases by 35% following 5 years of use (over the age of 50), and falls to never-used risk 5 years after discontinuing HRT. For ♀ aged 50 not using HRT, about 45 in every 1,000 will have cancer diagnosed over the following 20 years. This number increases to 47/1,000 ♀ using HRT for 5 years, 51/1,000 using HRT for 10 years, and 57/1,000 after 15 years of use. The risk is highest in ♀ on combined HRT compared with oestradiol alone. […] Oral HRT increases the risk [of venous thromboembolism] approximately 3-fold, resulting in an extra two cases/10,000 women-years. This risk is markedly ↑ in ♀ who already have risk factors for DVT, including previous DVT, cardiovascular disease, and within 90 days of hospitalization. […] Data from >30 observational studies suggest that HRT may reduce the risk of developing CVD [cardiovascular disease] by up to 50%. However, randomized placebo-controlled trials […] have failed to show that HRT protects against IHD. Currently, HRT should not be prescribed to prevent cardiovascular disease.”

“Any chronic illness may affect testicular function, in particular chronic renal failure, liver cirrhosis, and haemochromatosis. […] 25% of ♂ who develop mumps after puberty have associated orchitis, and 25-50% of these will develop 1° testicular failure. […] Alcohol excess will also cause 1° testicular failure. […] Cytotoxic drugs, particularly alkylating agents, are gonadotoxic. Infertility occurs in 50% of patients following chemotherapy, and a significant number of ♂ require androgen replacement therapy because of low testosterone levels. […] Testosterone has direct anabolic effects on skeletal muscle and has been shown to increase muscle mass and strength when given to hypogonadal men. Lean body mass is also increased, with a reduction in fat mass. […] Hypogonadism is a risk factor for osteoporosis. Testosterone inhibits bone resorption, thereby reducing bone turnover. Its administration to hypogonadal ♂ has been shown to improve bone mineral density and reduce the risk of developing osteoporosis. […] *Androgens stimulate prostatic growth, and testosterone replacement therapy may therefore induce symptoms of bladder outflow obstruction in ♂ with prostatic hypertrophy. *It is unlikely that testosterone increases the risk of developing prostate cancer, but it may promote the growth of an existing cancer. […] Testosterone replacement therapy may cause a fall in both LDL and HDL cholesterol levels, the significance of which remains unclear. The effect of androgen replacement therapy on the risk of developing coronary artery disease is unknown.”

“Erectile dysfunction [is] [t]he consistent inability to achieve or maintain an erect penis sufficient for satisfactory sexual intercourse. Affects approximately 10% of ♂ and >50% of ♂ >70 years. […] Erectile dysfunction may […] occur as a result of several mechanisms: *Neurological damage. *Arterial insufficiency. *Venous incompetence. *Androgen deficiency. *Penile abnormalities. […] *Abrupt onset of erectile dysfunction which is intermittent is often psychogenic in origin. *Progressive and persistent dysfunction indicates an organic cause. […] Absence of morning erections suggests an organic cause of erectile dysfunction.”

“*Infertility, defined as failure of pregnancy after 1 year of unprotected regular (2 x week) sexual intercourse, affects ~10% of all couples. *Couples who fail to conceive after 1 year of regular unprotected sexual intercourse should be investigated. […] Causes[:] *♀ factors (e.g. PCOS, tubal damage) 35%. *♂ factors (idiopathic gonadal failure in 60%) 25%. *Combined factors 25%. *Unexplained infertility 15%. […] [♀] Fertility declines rapidly after the age of 36 years. […] Each episode of acute PID causes infertility in 10-15% of cases. *Chlamydia trachomatis is responsible for half the cases of PID in developed countries. […] Unexplained infertility [is] [i]nfertility despite normal sexual intercourse occurring at least twice weekly, normal semen analysis, documentation of ovulation in several cycles, and normal patent tubes (by laparoscopy). […] 30-50% will become pregnant within 3 years of expectant management. If not pregnant by then, chances that spontaneous pregnancy will occur are greatly reduced, and ART should be considered. In ♀ >34 years of age, expectant management is not an option, and up to six cycles of IUI or IVF should be considered.”


February 9, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Genetics, Medicine, Pharmacology

Systems Biology (I)

This book is really dense and is somewhat tough for me to blog. One significant problem is that “The authors assume that the reader is already familiar with the material covered in a classic biochemistry course.” I know enough biochem to follow most of the stuff in this book, and I was definitely quite happy to have recently read John Finney’s book on the biochemical properties of water and Christopher Hall’s introduction to materials science, as both of those books’ coverage turned out to be highly relevant (these are far from the only relevant books I’ve read semi-recently – Atkins’ introduction to thermodynamics is another book that springs to mind) – but even so, what do you leave out when writing a post like this? I decided to leave out a lot. Posts covering books like this one are hard to write because it’s so easy for them to blow up in your face: you have to include so many details for the material included in the post to even start to make sense to people who didn’t read the original text. And if you leave out all the details, what’s really left? It’s difficult…

Anyway, some observations from the first chapters of the book below.

“[T]he biological world consists of self-managing and self-organizing systems which owe their existence to a steady supply of energy and information. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems (i.e. independent of their environment) automatically gravitate toward a state of equilibrium which is reached once the velocity of a given reaction in both directions becomes equal. When this balance is achieved, we can say that the reaction has effectively ceased. In a living cell, a similar condition occurs upon death. Life relies on certain spontaneous processes acting to unbalance the equilibrium. Such processes can only take place when substrates and products of reactions are traded with the environment, i.e. they are only possible in open systems. In turn, achieving a stable level of activity in an open system calls for regulatory mechanisms. When the reaction consumes or produces resources that are exchanged with the outside world at an uneven rate, the stability criterion can only be satisfied via a negative feedback loop […] cells and living organisms are thermodynamically open systems […] all structures which play a role in balanced biological activity may be treated as components of a feedback loop. This observation enables us to link and integrate seemingly unrelated biological processes. […] the biological structures most directly involved in the functions and mechanisms of life can be divided into receptors, effectors, information conduits and elements subject to regulation (reaction products and action results). Exchanging these elements with the environment requires an inflow of energy. Thus, living cells are — by their nature — open systems, requiring an energy source […] A thermodynamically open system lacking equilibrium due to a steady inflow of energy in the presence of automatic regulation is […] a good theoretical model of a living organism. […] Pursuing growth and adapting to changing environmental conditions calls for specialization which comes at the expense of reduced universality. A specialized cell is no longer self-sufficient. As a consequence, a need for higher forms of intercellular organization emerges. The structure which provides cells with suitable protection and ensures continued homeostasis is called an organism.”
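
The role the authors give to negative feedback in keeping an open system at a steady (non-equilibrium) level is easy to illustrate numerically. The toy model below is my own construction, not the book’s, and the rate constants are arbitrary: production is throttled as the product accumulates, and first-order loss to the environment keeps the system open.

def simulate(steps=50, inflow=1.0, k_feedback=0.5, k_loss=0.1):
    """Product level x: constant inflow, production inhibited by x, first-order loss."""
    x = 0.0
    for _ in range(steps):
        production = inflow / (1.0 + k_feedback * x)   # negative feedback term
        x += production - k_loss * x                   # open system: gain and loss
    return x

print(simulate())             # settles near 3.6 with these constants
print(simulate(inflow=2.0))   # a larger inflow shifts the steady state (to about 5.4) but does not destroy it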

“In biology, structure and function are tightly interwoven. This phenomenon is closely associated with the principles of evolution. Evolutionary development has produced structures which enable organisms to develop and maintain their architecture, perform actions and store the resources needed to survive. For this reason we introduce a distinction between support structures (which are akin to construction materials), function-related structures (fulfilling the role of tools and machines), and storage structures (needed to store important substances, achieving a compromise between tight packing and ease of access). […] Biology makes extensive use of small-molecule structures and polymers. The physical properties of polymer chains make them a key building block in biological structures. There are several reasons as to why polymers are indispensable in nature […] Sequestration of resources is subject to two seemingly contradictory criteria: 1. Maximize storage density; 2. Perform sequestration in such a way as to allow easy access to resources. […] In most biological systems, storage applies to energy and information. Other types of resources are only occasionally stored […]. Energy is stored primarily in the form of saccharides and lipids. Saccharides are derivatives of glucose, rendered insoluble (and thus easy to store) via polymerization. Their polymerized forms, stabilized with α-glycosidic bonds, include glycogen (in animals) and starch (in plantlife). […] It should be noted that the somewhat loose packing of polysaccharides […] makes them unsuitable for storing large amounts of energy. In a typical human organism only ca. 600 kcal of energy is stored in the form of glycogen, while (under normal conditions) more than 100,000 kcal exists as lipids. Lipid deposits usually assume the form of triglycerides (triacylglycerols). Their properties can be traced to the similarities between fatty acids and hydrocarbons. Storage efficiency (i.e. the amount of energy stored per unit of mass) is twice that of polysaccharides, while access remains adequate owing to the relatively large surface area and high volume of lipids in the organism.”

“Most living organisms store information in the form of tightly-packed DNA strands. […] It should be noted that only a small percentage of DNA (a few per cent) conveys biologically relevant information. The purpose of the remaining ballast is to enable suitable packing and exposure of these important fragments. If all of DNA were to consist of useful code, it would be nearly impossible to devise a packing strategy guaranteeing access to all of the stored information.”

“The seemingly endless diversity of biological functions frustrates all but the most persistent attempts at classification. For the purpose of this handbook we assume that each function can be associated either with a single cell or with a living organism. In both cases, biological functions are strictly subordinate to automatic regulation, based — in a stable state — on negative feedback loops, and in processes associated with change (for instance in embryonic development) — on automatic execution of predetermined biological programs. Individual components of a cell cannot perform regulatory functions on their own […]. Thus, each element involved in the biological activity of a cell or organism must necessarily participate in a regulatory loop based on processing information.”

“Proteins are among the most basic active biological structures. Most of the well-known proteins studied thus far perform effector functions: this group includes enzymes, transport proteins, certain immune system components (complement factors) and myofibrils. Their purpose is to maintain biological systems in a steady state. Our knowledge of receptor structures is somewhat poorer […] Simple structures, including individual enzymes and components of multienzyme systems, can be treated as “tools” available to the cell, while advanced systems, consisting of many mechanically-linked tools, resemble machines. […] Machinelike mechanisms are readily encountered in living cells. A classic example is fatty acid synthesis, performed by dedicated machines called synthases. […] Multiunit structures acting as machines can be encountered wherever complex biochemical processes need to be performed in an efficient manner. […] If the purpose of a machine is to generate motion then a thermally powered machine can accurately be called a motor. This type of action is observed e.g. in myocytes, where transmission involves reordering of protein structures using the energy generated by hydrolysis of high-energy bonds.”

“In biology, function is generally understood as specific physicochemical action, almost universally mediated by proteins. Most such actions are reversible, which means that a single protein molecule may perform its function many times. […] Since spontaneous noncovalent surface interactions are very infrequent, the shape and structure of active sites — with high concentrations of hydrophobic residues — make them the preferred area of interaction between functional proteins and their ligands. They alone provide the appropriate conditions for the formation of hydrogen bonds; moreover, their structure may determine the specific nature of interaction. The functional bond between a protein and a ligand is usually noncovalent and therefore reversible.”

“In general terms, we can state that enzymes accelerate reactions by lowering activation energies for processes which would otherwise occur very slowly or not at all. […] The activity of enzymes goes beyond synthesizing a specific protein-ligand complex (as in the case of antibodies or receptors) and involves an independent catalytic attack on a selected bond within the ligand, precipitating its conversion into the final product. The relative independence of both processes (binding of the ligand in the active site and catalysis) is evidenced by the phenomenon of noncompetitive inhibition […] Kinetic studies of enzymes have provided valuable insight into the properties of enzymatic inhibitors — an important field of study in medicine and drug research. Some inhibitors, particularly competitive ones (i.e. inhibitors which outcompete substrates for access to the enzyme), are now commonly used as drugs. […] Physical and chemical processes may only occur spontaneously if they generate energy, or non-spontaneously if they consume it. However, all processes occurring in a cell must have a spontaneous character because only these processes may be catalyzed by enzymes. Enzymes merely accelerate reactions; they do not provide energy. […] The change in enthalpy associated with a chemical process may be calculated as a net difference in the sum of molecular binding energies prior to and following the reaction. Entropy is a measure of the likelihood that a physical system will enter a given state. Since chaotic distribution of elements is considered the most probable, physical systems exhibit a general tendency to gravitate towards chaos. Any form of ordering is thermodynamically disadvantageous.”
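The claim that enzymes work by lowering activation energies can be made quantitative with the Arrhenius equation, k = A·exp(−Ea/RT). The calculation below is a standard illustration rather than anything from the book, and the activation energies are invented for the example; it simply shows how dramatically the rate responds to a modest drop in Ea.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # roughly body temperature, K

def rate_speedup(ea_uncatalyzed_kj, ea_catalyzed_kj):
    # Ratio of Arrhenius rate constants at the same temperature and pre-exponential factor:
    # k_cat / k_uncat = exp((Ea_uncat - Ea_cat) / (R * T))
    delta_j = (ea_uncatalyzed_kj - ea_catalyzed_kj) * 1000.0
    return math.exp(delta_j / (R * T))

# Lowering the activation energy from 100 kJ/mol to 60 kJ/mol (illustrative numbers):
print(f"~{rate_speedup(100, 60):.1e}-fold faster")   # on the order of a few million-fold
```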

“The chemical reactions which power biological processes are characterized by varying degrees of efficiency. In general, they tend to be on the lower end of the efficiency spectrum, compared to energy sources which drive matter transformation processes in our universe. In search of a common criterion to describe the efficiency of various energy sources, we can refer to the net loss of mass associated with a release of energy, according to Einstein’s formula:
E = mc²
The ΔM/M coefficient (relative loss of mass, given e.g. in %) allows us to compare the efficiency of energy sources. The most efficient processes are those involved in the gravitational collapse of stars. Their efficiency may reach 40 %, which means that 40 % of the stationary mass of the system is converted into energy. In comparison, nuclear reactions have an approximate efficiency of 0.8 %. The efficiency of chemical energy sources available to biological systems is incomparably lower and amounts to approximately 10⁻⁷ % […]. Among chemical reactions, the most potent sources of energy are found in oxidation processes, commonly exploited by biological systems. Oxidation tends to result in the largest net release of energy per unit of mass, although the efficiency of specific types of oxidation varies. […] given unrestricted access to atmospheric oxygen and to hydrogen atoms derived from hydrocarbons — the combustion of hydrogen (i.e. the synthesis of water; H2 + 1/2O2 = H2O) has become a principal source of energy in nature, next to photosynthesis, which exploits the energy of solar radiation. […] The basic process associated with the release of hydrogen and its subsequent oxidation (called the Krebs cycle) is carried by processes which transfer electrons onto oxygen atoms […]. Oxidation occurs in stages, enabling optimal use of the released energy. An important byproduct of water synthesis is the universal energy carrier known as ATP (synthesized separately). As water synthesis is a highly spontaneous process, it can be exploited to cover the energy debt incurred by endergonic synthesis of ATP, as long as both processes are thermodynamically coupled, enabling spontaneous catalysis of anhydride bonds in ATP. Water synthesis is a universal source of energy in heterotrophic systems. In contrast, autotrophic organisms rely on the energy of light which is exploited in the process of photosynthesis. Both processes yield ATP […] Preparing nutrients (hydrogen carriers) for participation in water synthesis follows different paths for sugars, lipids and proteins. This is perhaps obvious given their relative structural differences; however, in all cases the final form, which acts as a substrate for dehydrogenases, is acetyl-CoA”.
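The ΔM/M criterion above is just E = mc² rearranged: ΔM/M = E/(Mc²). As a sanity check (my own numbers, not the book's), the combustion of hydrogen releases about 286 kJ per mole of water formed, i.e. per 18 g of reactants, which lands chemical energy sources within an order of magnitude of the ~10⁻⁷ % figure quoted above.

```python
# Relative mass loss for an energy release E from reactants of mass M: dM/M = E / (M * c^2).
# Example values (assumed, not from the book): H2 + 1/2 O2 -> H2O releases ~286 kJ per mole
# of water, and one mole of water corresponds to 18 g = 0.018 kg of reactants.
c = 3.0e8        # speed of light, m/s
E = 286e3        # J released per mole of water formed
M = 0.018        # kg of reactants per mole of water

relative_mass_loss = E / (M * c**2)
print(f"dM/M = {relative_mass_loss:.1e}  (= {relative_mass_loss * 100:.1e} per cent)")
# ~1.8e-10, i.e. roughly 1e-8 per cent, in the same vanishingly small ballpark as the
# ~1e-7 per cent quoted for chemical energy sources, and nowhere near nuclear (~0.8 %).
```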

“Photosynthesis is a process which — from the point of view of electron transfer — can be treated as a counterpart of the respiratory chain. In heterotrophic organisms, mitochondria transport electrons from hydrogenated compounds (sugars, lipids, proteins) onto oxygen molecules, synthesizing water in the process, whereas in the course of photosynthesis electrons released by breaking down water molecules are used as a means of reducing oxidized carbon compounds […]. In heterotrophic organisms the respiratory chain has a spontaneous quality (owing to its oxidative properties); however any reverse process requires energy to occur. In the case of photosynthesis this energy is provided by sunlight […] Hydrogen combustion and photosynthesis are the basic sources of energy in the living world. […] For an energy source to become useful, non-spontaneous reactions must be coupled to its operation, resulting in a thermodynamically unified system. Such coupling can be achieved by creating a coherent framework in which the spontaneous and non-spontaneous processes are linked, either physically or chemically, using a bridging component which affects them both. If the properties of both reactions are different, the bridging component must also enable suitable adaptation and mediation. […] Direct exploitation of the energy released via the hydrolysis of ATP is usually possible by introducing an active binding carrier mediating the energy transfer. […] Carriers are considered active as long as their concentration ensures a sufficient release of energy to synthesize a new chemical bond by way of a non-spontaneous process. Active carriers are relatively short-lived […] Any active carrier which performs its function outside of the active site must be sufficiently stable to avoid breaking up prior to participating in the synthesis reaction. Such mobile carriers are usually produced when the required synthesis consists of several stages or cannot be conducted in the active site of the enzyme for steric reasons. Contrary to ATP, active energy carriers are usually reaction-specific. […] Mobile energy carriers are usually formed as a result of hydrolysis of two high-energy ATP bonds. In many cases this is the minimum amount of energy required to power a reaction which synthesizes a single chemical bond. […] Expelling a mobile or unstable reaction component in order to increase the spontaneity of active energy carrier synthesis is a process which occurs in many biological mechanisms […] The action of active energy carriers may be compared to a ball rolling down a hill. The descending ball gains sufficient energy to traverse another, smaller mound, adjacent to its starting point. In our case, the smaller hill represents the final synthesis reaction […] Understanding the role of active carriers is essential for the study of metabolic processes.”

“A second category of processes, directly dependent on energy sources, involves structural reconfiguration of proteins, which can be further differentiated into low and high-energy reconfiguration. Low-energy reconfiguration occurs in proteins which form weak, easily reversible bonds with ligands. In such cases, structural changes are powered by the energy released in the creation of the complex. […] Important low-energy reconfiguration processes may occur in proteins which consist of subunits. Structural changes resulting from relative motion of subunits typically do not involve significant expenditures of energy. Of particular note are the so-called allosteric proteins […] whose rearrangement is driven by a weak and reversible bond between the protein and an oxygen molecule. Allosteric proteins are genetically conditioned to possess two stable structural configurations, easily swapped as a result of binding or releasing ligands. Thus, they tend to have two comparable energy minima (separated by a low threshold), each of which may be treated as a global minimum corresponding to the native form of the protein. Given such properties, even a weakly interacting ligand may trigger significant structural reconfiguration. This phenomenon is of critical importance to a variety of regulatory proteins. In many cases, however, the second potential minimum in which the protein may achieve relative stability is separated from the global minimum by a high threshold requiring a significant expenditure of energy to overcome. […] Contrary to low-energy reconfigurations, the relative difference in ligand concentrations is insufficient to cover the cost of a difficult structural change. Such processes are therefore coupled to highly exergonic reactions such as ATP hydrolysis. […]  The link between a biological process and an energy source does not have to be immediate. Indirect coupling occurs when the process is driven by relative changes in the concentration of reaction components. […] In general, high-energy reconfigurations exploit direct coupling mechanisms while indirect coupling is more typical of low-energy processes”.

Muscle action requires a major expenditure of energy. There is a nonlinear dependence between the degree of physical exertion and the corresponding energy requirements. […] Training may improve the power and endurance of muscle tissue. Muscle fibers subjected to regular exertion may improve their glycogen storage capacity, ATP production rate, oxidative metabolism and the use of fatty acids as fuel.


February 4, 2018 Posted by | Biology, Books, Chemistry, Genetics, Pharmacology, Physics

Lakes (II)

(I have had some computer issues over the last couple of weeks, which explains my brief blogging hiatus, but they should now be resolved. As I'm already falling quite a bit behind on my intended coverage of the books I've read this year, I hope to clear some of the backlog in the days to come.)

I have added some more observations from the second half of the book, as well as some related links, below.

“[R]ecycling of old plant material is especially important in lakes, and one way to appreciate its significance is to measure the concentration of CO2, an end product of decomposition, in the surface waters. This value is often above, sometimes well above, the value to be expected from equilibration of this gas with the overlying air, meaning that many lakes are net producers of CO2 and that they emit this greenhouse gas to the atmosphere. How can that be? […] Lakes are not sealed microcosms that function as stand-alone entities; on the contrary, they are embedded in a landscape and are intimately coupled to their terrestrial surroundings. Organic materials are produced within the lake by the phytoplankton, photosynthetic cells that are suspended in the water and that fix CO2, release oxygen (O2), and produce biomass at the base of the aquatic food web. Photosynthesis also takes place by attached algae (the periphyton) and submerged water plants (aquatic macrophytes) that occur at the edge of the lake where enough sunlight reaches the bottom to allow their growth. But additionally, lakes are the downstream recipients of terrestrial runoff from their catchments […]. These continuous inputs include not only water, but also subsidies of plant and soil organic carbon that are washed into the lake via streams, rivers, groundwater, and overland flows. […] The organic carbon entering lakes from the catchment is referred to as ‘allochthonous’, meaning coming from the outside, and it tends to be relatively old […] In contrast, much younger organic carbon is available […] as a result of recent photosynthesis by the phytoplankton and littoral communities; this carbon is called ‘autochthonous’, meaning that it is produced within the lake.”

“It used to be thought that most of the dissolved organic matter (DOM) entering lakes, especially the coloured fraction, was unreactive and that it would transit the lake to ultimately leave unchanged at the outflow. However, many experiments and field observations have shown that this coloured material can be partially broken down by sunlight. These photochemical reactions result in the production of CO2, and also the degradation of some of the organic polymers into smaller organic molecules; these in turn are used by bacteria and decomposed to CO2. […] Most of the bacterial species in lakes are decomposers that convert organic matter into mineral end products […] This sunlight-driven chemistry begins in the rivers, and continues in the surface waters of the lake. Additional chemical and microbial reactions in the soil also break down organic materials and release CO2 into the runoff and ground waters, further contributing to the high concentrations in lake water and its emission to the atmosphere. In algal-rich ‘eutrophic’ lakes there may be sufficient photosynthesis to cause the drawdown of CO2 to concentrations below equilibrium with the air, resulting in the reverse flux of this gas, from the atmosphere into the surface waters.”

“There is a precarious balance in lakes between oxygen gains and losses, despite the seemingly limitless quantities in the overlying atmosphere. This balance can sometimes tip to deficits that send a lake into oxygen bankruptcy, with the O2 mostly or even completely consumed. Waters that have O2 concentrations below 2mg/L are referred to as ‘hypoxic’, and will be avoided by most fish species, while waters in which there is a complete absence of oxygen are called ‘anoxic’ and are mostly the domain for specialized, hardy microbes. […] In many temperate lakes, mixing in spring and again in autumn are the critical periods of re-oxygenation from the overlying atmosphere. In summer, however, the thermocline greatly slows down that oxygen transfer from air to deep water, and in cooler climates, winter ice-cover acts as another barrier to oxygenation. In both of these seasons, the oxygen absorbed into the water during earlier periods of mixing may be rapidly consumed, leading to anoxic conditions. Part of the reason that lakes are continuously on the brink of anoxia is that only limited quantities of oxygen can be stored in water because of its low solubility. The concentration of oxygen in the air is 209 millilitres per litre […], but cold water in equilibrium with the atmosphere contains only 9ml/L […]. This scarcity of oxygen worsens with increasing temperature (from 4°C to 30°C the solubility of oxygen falls by 43 per cent), and it is compounded by faster rates of bacterial decomposition in warmer waters and thus a higher respiratory demand for oxygen.”
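Since the oxygen thresholds in the passage are given in mg/L while the solubilities are in mL/L, a quick conversion helps connect the two. The molar volume (~22.4 L/mol at standard conditions) and the O2 molar mass (32 g/mol) used below are standard values, not something specified in the book.

```python
# Convert dissolved O2 from mL/L (gas volume at standard conditions) to mg/L.
MOLAR_VOLUME_L_PER_MOL = 22.4
O2_MOLAR_MASS_G_PER_MOL = 32.0

def o2_ml_to_mg_per_litre(ml_per_l):
    # mL -> mmol (divide by 22.4), then mmol -> mg (multiply by 32)
    return ml_per_l / MOLAR_VOLUME_L_PER_MOL * O2_MOLAR_MASS_G_PER_MOL

cold = o2_ml_to_mg_per_litre(9.0)    # ~12.9 mg/L at saturation in cold water
warm = cold * (1 - 0.43)             # the quoted ~43 per cent drop from 4°C to 30°C
print(f"cold: ~{cold:.1f} mg/L, warm: ~{warm:.1f} mg/L, hypoxia threshold: 2 mg/L")
```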

“Lake microbiomes play multiple roles in food webs as producers, parasites, and consumers, and as steps into the animal food chain […]. These diverse communities of microbes additionally hold centre stage in the vital recycling of elements within the lake ecosystem […]. These biogeochemical processes are not simply of academic interest; they totally alter the nutritional value, mobility, and even toxicity of elements. For example, sulfate is the most oxidized and also most abundant form of sulfur in natural waters, and it is the ion taken up by phytoplankton and aquatic plants to meet their biochemical needs for this element. These photosynthetic organisms reduce the sulfate to organic sulfur compounds, and once they die and decompose, bacteria convert these compounds to the rotten-egg smelling gas, H2S, which is toxic to most aquatic life. In anoxic waters and sediments, this effect is amplified by bacterial sulfate reducers that directly convert sulfate to H2S. Fortunately another group of bacteria, sulfur oxidizers, can use H2S as a chemical energy source, and in oxygenated waters they convert this reduced sulfur back to its benign, oxidized, sulfate form. […] [The] acid neutralizing capacity (or ‘alkalinity’) varies greatly among lakes. Many lakes in Europe, North America, and Asia have been dangerously shifted towards a low pH because they lacked sufficient carbonate to buffer the continuous input of acid rain that resulted from industrial pollution of the atmosphere. The acid conditions have negative effects on aquatic animals, including by causing a shift in aluminium to its more soluble and toxic form Al3+. Fortunately, these industrial emissions have been regulated and reduced in most of the developed world, although there are still legacy effects of acid rain that have resulted in a long-term depletion of carbonates and associated calcium in certain watersheds.”

“Rotifers, cladocerans, and copepods are all planktonic, that is their distribution is strongly affected by currents and mixing processes in the lake. However, they are also swimmers, and can regulate their depth in the water. For the smallest such as rotifers and copepods, this swimming ability is limited, but the larger zooplankton are able to swim over an impressive depth range during the twenty-four-hour ‘diel’ (i.e. light–dark) cycle. […] the cladocerans in Lake Geneva reside in the thermocline region and deep epilimnion during the day, and swim upwards by about 10m during the night, while cyclopoid copepods swim up by 60m, returning to the deep, dark, cold waters of the profundal zone during the day. Even greater distances up and down the water column are achieved by larger animals. The opossum shrimp, Mysis (up to 25mm in length) lives on the bottom of lakes during the day and in Lake Tahoe it swims hundreds of metres up into the surface waters, although not on moon-lit nights. In Lake Baikal, one of the main zooplankton species is the endemic amphipod, Macrohectopus branickii, which grows up to 38mm in size. It can form dense swarms at 100–200m depth during the day, but the populations then disperse and rise to the upper waters during the night. These nocturnal migrations connect the pelagic surface waters with the profundal zone in lake ecosystems, and are thought to be an adaptation towards avoiding visual predators, especially pelagic fish, during the day, while accessing food in the surface waters under the cover of nightfall. […] Although certain fish species remain within specific zones of the lake, there are others that swim among zones and access multiple habitats. […] This type of fish migration means that the different parts of the lake ecosystem are ecologically connected. For many fish species, moving between habitats extends all the way to the ocean. Anadromous fish migrate out of the lake and swim to the sea each year, and although this movement comes at considerable energetic cost, it has the advantage of access to rich marine food sources, while allowing the young to be raised in the freshwater environment with less exposure to predators. […] With the converse migration pattern, catadromous fish live in freshwater and spawn in the sea.”

“Invasive species that are the most successful and do the most damage once they enter a lake have a number of features in common: fast growth rates, broad tolerances, the capacity to thrive under high population densities, and an ability to disperse and colonize that is enhanced by human activities. Zebra mussels (Dreissena polymorpha) get top marks in each of these categories, and they have proven to be a troublesome invader in many parts of the world. […] A single Zebra mussel can produce up to one million eggs over the course of a spawning season, and these hatch into readily dispersed larvae (‘veligers’), that are free-swimming for up to a month. The adults can achieve densities up to hundreds of thousands per square metre, and their prolific growth within water pipes has been a serious problem for the cooling systems of nuclear and thermal power stations, and for the intake pipes of drinking water plants. A single Zebra mussel can filter a litre a day, and they have the capacity to completely strip the water of bacteria and protists. In Lake Erie, the water clarity doubled and diatoms declined by 80–90 per cent soon after the invasion of Zebra mussels, with a concomitant decline in zooplankton, and potential impacts on planktivorous fish. The invasion of this species can shift a lake from dominance of the pelagic to the benthic food web, but at the expense of native unionid clams on the bottom that can become smothered in Zebra mussels. Their efficient filtering capacity may also cause a regime shift in primary producers, from turbid waters with high concentrations of phytoplankton to a clearer lake ecosystem state in which benthic water plants dominate.”

“One of the many distinguishing features of H2O is its unusually high dielectric constant, meaning that it is a strongly polar solvent with positive and negative charges that can stabilize ions brought into solution. This dielectric property results from the asymmetrical electron cloud over the molecule […] and it gives liquid water the ability to leach minerals from rocks and soils as it passes through the ground, and to maintain these salts in solution, even at high concentrations. Collectively, these dissolved minerals produce the salinity of the water […] Sea water is around 35ppt, and its salinity is mainly due to the positively charged ions sodium (Na+), potassium (K+), magnesium (Mg2+), and calcium (Ca2+), and the negatively charged ions chloride (Cl-), sulfate (SO42-), and carbonate (CO32-). These solutes, collectively called the ‘major ions’, conduct electrical current, and therefore a simple way to track salinity is to measure the electrical conductance of the water between two electrodes set a known distance apart. Lake and ocean scientists now routinely take profiles of salinity and temperature with a CTD: a submersible instrument that records conductance, temperature, and depth many times per second as it is lowered on a rope or wire down the water column. Conductance is measured in Siemens (or microSiemens (µS), given the low salt concentrations in freshwater lakes), and adjusted to a standard temperature of 25°C to give specific conductivity in µS/cm. All freshwater lakes contain dissolved minerals, with specific conductivities in the range 50–500µS/cm, while salt water lakes have values that can exceed sea water (about 50,000µS/cm), and are the habitats for extreme microbes”.
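The 'adjusted to a standard temperature of 25°C' step is usually done with a linear temperature-compensation factor of roughly 2 per cent per °C. That coefficient, and the linear form itself, vary between instruments and methods, so treat the sketch below as an illustration of the idea rather than the procedure the book describes.

```python
# Specific conductivity at 25 °C from a raw conductance reading, assuming a linear
# temperature coefficient of ~2 %/°C (a common default, but an assumption here).
ALPHA = 0.02   # fractional change in conductivity per °C

def specific_conductivity_25c(raw_us_per_cm, temp_c, alpha=ALPHA):
    return raw_us_per_cm / (1 + alpha * (temp_c - 25.0))

# A reading of 180 µS/cm taken in 8 °C water corresponds to roughly 270 µS/cm at 25 °C,
# comfortably inside the quoted 50-500 µS/cm freshwater range.
print(f"{specific_conductivity_25c(180, 8):.0f} µS/cm")
```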

“The World Register of Dams currently lists 58,519 ‘large dams’, defined as those with a dam wall of 15m or higher; these collectively store 16,120km3 of water, equivalent to 213 years of flow of Niagara Falls on the USA–Canada border. […] Around a hundred large dam projects are in advanced planning or construction in Africa […]. More than 300 dams are planned or under construction in the Amazon Basin of South America […]. Reservoirs have a number of distinguishing features relative to natural lakes. First, the shape (‘morphometry’) of their basins is rarely circular or oval, but instead is often dendritic, with a tree-like main stem and branches ramifying out into the submerged river valleys. Second, reservoirs typically have a high catchment area to lake area ratio, again reflecting their riverine origins. For natural lakes, this ratio is relatively low […] These proportionately large catchments mean that reservoirs have short water residence times, and water quality is much better than might be the case in the absence of this rapid flushing. Nonetheless, noxious algal blooms can develop and accumulate in isolated bays and side-arms, and downstream next to the dam itself. Reservoirs typically experience water level fluctuations that are much larger and more rapid than in natural lakes, and this limits the development of littoral plants and animals. Another distinguishing feature of reservoirs is that they often show a longitudinal gradient of conditions. Upstream, the river section contains water that is flowing, turbulent, and well mixed; this then passes through a transition zone into the lake section up to the dam, which is often the deepest part of the lake and may be stratified and clearer due to decantation of land-derived particles. In some reservoirs, the water outflow is situated near the base of the dam within the hypolimnion, and this reduces the extent of oxygen depletion and nutrient build-up, while also providing cool water for fish and other animal communities below the dam. There is increasing attention being given to careful regulation of the timing and magnitude of dam outflows to maintain these downstream ecosystems. […] The downstream effects of dams continue out into the sea, with the retention of sediments and nutrients in the reservoir leaving less available for export to marine food webs. This reduction can also lead to changes in shorelines, with a retreat of the coastal delta and intrusion of seawater because natural erosion processes can no longer be offset by resupply of sediments from upstream.”

“One of the most serious threats facing lakes throughout the world is the proliferation of algae and water plants caused by eutrophication, the overfertilization of waters with nutrients from human activities. […] Nutrient enrichment occurs both from ‘point sources’ of effluent discharged via pipes into the receiving waters, and ‘nonpoint sources’ such as the runoff from roads and parking areas, agricultural lands, septic tank drainage fields, and terrain cleared of its nutrient- and water-absorbing vegetation. By the 1970s, even many of the world’s larger lakes had begun to show worrying signs of deterioration from these sources of increasing enrichment. […] A sharp drop in water clarity is often among the first signs of eutrophication, although in forested areas this effect may be masked for many years by the greater absorption of light by the coloured organic materials that are dissolved within the lake water. A drop in oxygen levels in the bottom waters during stratification is another telltale indicator of eutrophication, with the eventual fall to oxygen-free (anoxic) conditions in these lower strata of the lake. However, the most striking impact with greatest effect on ecosystem services is the production of harmful algal blooms (HABs), specifically by cyanobacteria. In eutrophic, temperate latitude waters, four genera of bloom-forming cyanobacteria are the usual offenders […]. These may occur alone or in combination, and although each has its own idiosyncratic size, shape, and lifestyle, they have a number of impressive biological features in common. First and foremost, their cells are typically full of hydrophobic protein cases that exclude water and trap gases. These honeycombs of gas-filled chambers, called ‘gas vesicles’, reduce the density of the cells, allowing them to float up to the surface where there is light available for growth. Put a drop of water from an algal bloom under a microscope and it will be immediately apparent that the individual cells are extremely small, and that the bloom itself is composed of billions of cells per litre of lake water.”

“During the day, the [algal] cells capture sunlight and produce sugars by photosynthesis; this increases their density, eventually to the point where they are heavier than the surrounding water and sink to more nutrient-rich conditions at depth in the water column or at the sediment surface. These sugars are depleted by cellular respiration, and this loss of ballast eventually results in cells becoming less dense than water and floating again towards the surface. This alternation of sinking and floating can result in large fluctuations in surface blooms over the twenty-four-hour cycle. The accumulation of bloom-forming cyanobacteria at the surface gives rise to surface scums that then can be blown into bays and washed up onto beaches. These dense populations of colonies in the water column, and especially at the surface, can shade out bottom-dwelling water plants, as well as greatly reduce the amount of light for other phytoplankton species. The resultant ‘cyanobacterial dominance’ and loss of algal species diversity has negative implications for the aquatic food web […] This negative impact on the food web may be compounded by the final collapse of the bloom and its decomposition, resulting in a major drawdown of oxygen. […] Bloom-forming cyanobacteria are especially troublesome for the management of drinking water supplies. First, there is the overproduction of biomass, which results in a massive load of algal particles that can exceed the filtration capacity of a water treatment plant […]. Second, there is an impact on the taste of the water. […] The third and most serious impact of cyanobacteria is that some of their secondary compounds are highly toxic. […] phosphorus is the key nutrient limiting bloom development, and efforts to preserve and rehabilitate freshwaters should pay specific attention to controlling the input of phosphorus via point and nonpoint discharges to lakes.”

The viral shunt in marine foodwebs.
Proteobacteria. Alphaproteobacteria. Betaproteobacteria. Gammaproteobacteria.
Carbon cycle. Nitrogen cycle. Ammonification. Anammox. Comammox.
Phosphorus cycle.
Littoral zone. Limnetic zone. Profundal zone. Benthic zone. Benthos.
Phytoplankton. Diatom. Picoeukaryote. Flagellates. Cyanobacteria.
Trophic state (-index).
Amphipoda. Rotifer. Cladocera. Copepod. Daphnia.
Redfield ratio.
Extremophile. Halophile. Psychrophile. Acidophile.
Caspian Sea. Endorheic basin. Mono Lake.
Alpine lake.
Meromictic lake.
Subglacial lake. Lake Vostok.
Thermus aquaticus. Taq polymerase.
Lake Monoun.
Microcystin. Anatoxin-a.




February 2, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Engineering, Zoology

Books 2018

This is a list of books I’ve read this year. As usual ‘f’ = fiction, ‘m’ = miscellaneous, ‘nf’ = non-fiction; the numbers in parentheses indicate my goodreads ratings of the books (from 1-5).

I’ll try to keep updating the post throughout the year.

i. Complexity: A Very Short Introduction (nf. Oxford University Press). Blog coverage here.

ii. Rivers: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here. Blog coverage here and here.

iii. Something for the Pain: Compassion and Burnout in the ER (2, m. W. W. Norton & Company/Paul Austin).

iv. Mountains: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here.

v. Water: A Very Short Introduction (4, nf. Oxford University Press). Goodreads review here.

vi. Assassin’s Quest (3, f). Robin Hobb. Goodreads review here.

vii. Oxford Handbook of Endocrinology and Diabetes (3rd edition) (5, nf. Oxford University Press). Goodreads review here. Blog coverage here, here, here, here, and here. I added this book to my list of favourite books on goodreads. Some of the specific chapters included are ‘book-equivalents’; this book is very long and takes a lot of work.

viii. Desolation Island (3, f). Patrick O’Brian.

ix. The Fortune of War (4, f). Patrick O’Brian.

x. Lakes: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

xi. The Surgeon’s Mate (4, f). Patrick O’Brian. Short goodreads review here.

xii. Domestication of Plants in the Old World: The Origin and Spread of Domesticated Plants in South-West Asia, Europe, and the Mediterranean Basin (5, nf. Oxford University Press). Goodreads review here. I added this book to my list of favourite books on goodreads.

xiii. The Ionian Mission (4, f). Patrick O’Brian.

xiv. Systems Biology: Functional Strategies of Living Organisms (4, nf. Springer). Blog coverage here, here, and here.

xv. Treason’s Harbour (4, f). Patrick O’Brian.

xvi. Peripheral Neuropathy – A New Insight into the Mechanism, Evaluation and Management of a Complex Disorder (3, nf. InTech). Blog coverage here and here.

xvii. The Portable Door (5, f). Tom Holt. Goodreads review here.

xviii. Prevention of Late-Life Depression: Current Clinical Challenges and Priorities (2, nf. Humana Press). Blog coverage here and here.

xix. In Your Dreams (4, f). Tom Holt.

xx. Earth, Air, Fire and Custard (3, f). Tom Holt. Short goodreads review here.

xxi. You Don’t Have to Be Evil to Work Here, But it Helps (3, f). Tom Holt.

xxii. The Ice Age: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

xxiii. The Better Mousetrap (4, f). Tom Holt.

xxiv. May Contain Traces of Magic (2, f). Tom Holt.

xxv. Expecting Someone Taller (4, f). Tom Holt.

xxvi. The Computer: A Very Short Introduction (2, nf. Oxford University Press). Short goodreads review here. Blog coverage here.

xxvii. Who’s Afraid of Beowulf? (5, f). Tom Holt.

xxviii. Flying Dutch (4, f). Tom Holt.

xxix. Ye Gods! (2, f). Tom Holt.

xxx. Marine Biology: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here and here.

xxxi. Here Comes The Sun (2, f). Tom Holt.

xxxii. Grailblazers (4, f). Tom Holt.

xxxiii. Oceans: A Very Short Introduction (2, nf. Oxford University Press). Very short goodreads review here.

xxxiv. Oxford Handbook of Medical Statistics (2, nf. Oxford University Press). Long, takes some work. Goodreads review here.

xxxv. Faust Among Equals (3, f). Tom Holt.

xxxvi. My Hero (3, f). Tom Holt. Short goodreads review here.

xxxvii. Odds and Gods (3, f). Tom Holt.


February 2, 2018 Posted by | Books, Personal

Lakes (I)

“The aim of this book is to provide a condensed overview of scientific knowledge about lakes, their functioning as ecosystems that we are part of and depend upon, and their responses to environmental change. […] Each chapter briefly introduces concepts about the physical, chemical, and biological nature of lakes, with emphasis on how these aspects are connected, the relationships with human needs and impacts, and the implications of our changing global environment.”

I’m currently reading this book and I really like it so far. I have added some observations from the first half of the book and some coverage-related links below.

“High resolution satellites can readily detect lakes above 0.002 kilometres square (km2) in area; that’s equivalent to a circular waterbody some 50m across. Using this criterion, researchers estimate from satellite images that the world contains 117 million lakes, with a total surface area amounting to 5 million km2. […] continuous accumulation of materials on the lake floor, both from inflows and from the production of organic matter within the lake, means that lakes are ephemeral features of the landscape, and from the moment of their creation onwards, they begin to fill in and gradually disappear. The world’s deepest and most ancient freshwater ecosystem, Lake Baikal in Russia (Siberia), is a compelling example: it has a maximum depth of 1,642m, but its waters overlie a much deeper basin that over the twenty-five million years of its geological history has become filled with some 7,000m of sediments. Lakes are created in a great variety of ways: tectonic basins formed by movements in the Earth’s crust, the scouring and residual ice effects of glaciers, as well as fluvial, volcanic, riverine, meteorite impacts, and many other processes, including human construction of ponds and reservoirs. Tectonic basins may result from a single fault […] or from a series of intersecting fault lines. […] The oldest and deepest lakes in the world are generally of tectonic origin, and their persistence through time has allowed the evolution of endemic plants and animals; that is, species that are found only at those sites.”
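The quoted detection limit is easy to verify: 0.002 km² really does correspond to a circular waterbody about 50 m across (simple geometry, nothing book-specific).

```python
import math

area_m2 = 0.002 * 1e6                        # 0.002 km^2 expressed in m^2
diameter_m = 2 * math.sqrt(area_m2 / math.pi)
print(f"diameter of an equivalent circle: ~{diameter_m:.0f} m")   # ~50 m
```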

“In terms of total numbers, most of the world’s lakes […] owe their origins to glaciers that during the last ice age gouged out basins in the rock and deepened river valleys. […] As the glaciers retreated, their terminal moraines (accumulations of gravel and sediments) created dams in the landscape, raising water levels or producing new lakes. […] During glacial retreat in many areas of the world, large blocks of glacial ice broke off and were left behind in the moraines. These subsequently melted out to produce basins that filled with water, called ‘kettle’ or ‘pothole’ lakes. Such waterbodies are well known across the plains of North America and Eurasia. […] The most violent of lake births are the result of volcanoes. The craters left behind after a volcanic eruption can fill with water to form small, often circular-shaped and acidic lakes. […] Much larger lakes are formed by the collapse of a magma chamber after eruption to produce caldera lakes. […] Craters formed by meteorite impacts also provide basins for lakes, and have proved to be of great scientific as well as human interest. […] There was a time when limnologists paid little attention to small lakes and ponds, but, this has changed with the realization that although such waterbodies are modest in size, they are extremely abundant throughout the world and make up a large total surface area. Furthermore, these smaller waterbodies often have high rates of chemical activity such as greenhouse gas production and nutrient cycling, and they are major habitats for diverse plants and animals”.

“For Forel, the science of lakes could be subdivided into different disciplines and subjects, all of which continue to occupy the attention of freshwater scientists today […]. First, the physical environment of a lake includes its geological origins and setting, the water balance and exchange of heat with the atmosphere, as well as the penetration of light, the changes in temperature with depth, and the waves, currents, and mixing processes that collectively determine the movement of water. Second, the chemical environment is important because lake waters contain a great variety of dissolved materials (‘solutes’) and particles that play essential roles in the functioning of the ecosystem. Third, the biological features of a lake include not only the individual species of plants, microbes, and animals, but also their organization into food webs, and the distribution and functioning of these communities across the bottom of the lake and in the overlying water.”

“In the simplest hydrological terms, lakes can be thought of as tanks of water in the landscape that are continuously topped up by their inflowing rivers, while spilling excess water via their outflow […]. Based on this model, we can pose the interesting question: how long does the average water molecule stay in the lake before leaving at the outflow? This value is referred to as the water residence time, and it can be simply calculated as the total volume of the lake divided by the water discharge at the outlet. This lake parameter is also referred to as the ‘flushing time’ (or ‘flushing rate’, if expressed as a proportion of the lake volume discharged per unit of time) because it provides an estimate of how fast mineral salts and pollutants can be flushed out of the lake basin. In general, lakes with a short flushing time are more resilient to the impacts of human activities in their catchments […] Each lake has its own particular combination of catchment size, volume, and climate, and this translates into a water residence time that varies enormously among lakes [from perhaps a month to more than a thousand years, US] […] A more accurate approach towards calculating the water residence time is to consider the question: if the lake were to be pumped dry, how long would it take to fill it up again? For most lakes, this will give a similar value to the outflow calculation, but for lakes where evaporation is a major part of the water balance, the residence time will be much shorter.”
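The residence-time calculation itself is just volume divided by discharge; the lake in the example below is invented purely to show the arithmetic and the related 'flushing rate'.

```python
# Water residence (flushing) time = lake volume / outflow discharge.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def residence_time_years(volume_m3, outflow_m3_per_s):
    return volume_m3 / outflow_m3_per_s / SECONDS_PER_YEAR

# A hypothetical 2 km^3 lake drained by an outflow of 20 m^3/s:
t = residence_time_years(2e9, 20)
print(f"residence time: ~{t:.1f} years")      # ~3.2 years
print(f"flushing rate:  ~{1/t:.2f} of the lake volume per year")
```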

“Each year, mineral and organic particles are deposited by wind on the lake surface and are washed in from the catchment, while organic matter is produced within the lake by aquatic plants and plankton. There is a continuous rain of this material downwards, ultimately accumulating as an annual layer of sediment on the lake floor. These lake sediments are storehouses of information about past changes in the surrounding catchment, and they provide a long-term memory of how the limnology of a lake has responded to those changes. The analysis of these natural archives is called ‘palaeolimnology’ (or ‘palaeoceanography’ for marine studies), and this branch of the aquatic sciences has yielded enormous insights into how lakes change through time, including the onset, effects, and abatement of pollution; changes in vegetation both within and outside the lake; and alterations in regional and global climate.”

“Sampling for palaeolimnological analysis is typically undertaken in the deepest waters to provide a more integrated and complete picture of the lake basin history. This is also usually the part of the lake where sediment accumulation has been greatest, and where the disrupting activities of bottom-dwelling animals (‘bioturbation’ of the sediments) may be reduced or absent. […] Some of the most informative microfossils to be found in lake sediments are diatoms, an algal group that has cell walls (‘frustules’) made of silica glass that resist decomposition. Each lake typically contains dozens to hundreds of different diatom species, each with its own characteristic set of environmental preferences […]. A widely adopted approach is to sample many lakes and establish a statistical relationship or ‘transfer function’ between diatom species composition (often by analysis of surface sediments) and a lake water variable such as temperature, pH, phosphorus, or dissolved organic carbon. This quantitative species–environment relationship can then be applied to the fossilized diatom species assemblage in each stratum of a sediment core from a lake in the same region, and in this way the physical and chemical fluctuations that the lake has experienced in the past can be reconstructed or ‘hindcast’ year-by-year. Other fossil indicators of past environmental change include algal pigments, DNA of algae and bacteria including toxic bloom species, and the remains of aquatic animals such as ostracods, cladocerans, and larval insects.”
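One widely used form of the diatom 'transfer function' mentioned above is weighted averaging: each species gets an optimum equal to the abundance-weighted mean of the environmental variable across the calibration lakes, and a fossil assemblage is then assigned the abundance-weighted mean of the optima of the species it contains. The sketch below uses invented toy data and leaves out steps a real reconstruction would include (deshrinking, cross-validation, error estimation), so it is only meant to show the shape of the method, not the book's procedure.

```python
import numpy as np

# Toy calibration set: relative abundances of three diatom species in four lakes (rows),
# together with each lake's measured pH. All numbers invented for illustration.
abundance = np.array([
    [0.7, 0.2, 0.1],
    [0.4, 0.4, 0.2],
    [0.1, 0.3, 0.6],
    [0.0, 0.2, 0.8],
])
lake_ph = np.array([5.5, 6.3, 7.4, 7.9])

# Step 1: each species' pH optimum = abundance-weighted mean of lake pH.
optima = (abundance * lake_ph[:, None]).sum(axis=0) / abundance.sum(axis=0)

# Step 2: reconstruct ('hindcast') pH for a fossil assemblage from one sediment stratum
# as the abundance-weighted mean of the optima of the species present.
fossil_assemblage = np.array([0.5, 0.3, 0.2])
reconstructed_ph = (fossil_assemblage * optima).sum() / fossil_assemblage.sum()

print("species optima:", optima.round(2))               # [5.93 6.75 7.39]
print("reconstructed pH:", round(float(reconstructed_ph), 2))   # ~6.46
```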

“In lake and ocean studies, the penetration of sunlight into the water can be […] precisely measured with an underwater light meter (submersible radiometer), and such measurements always show that the decline with depth follows a sharp curve rather than a straight line […]. This is because the fate of sunlight streaming downwards in water is dictated by the probability of the photons being absorbed or deflected out of the light path; for example, a 50 per cent probability of photons being lost from the light beam by these processes per metre depth in a lake would result in sunlight values dropping from 100 per cent at the surface to 50 per cent at 1m, 25 per cent at 2m, 12.5 per cent at 3m, and so on. The resulting exponential curve means that for all but the clearest of lakes, there is only enough solar energy for plants, including photosynthetic cells in the plankton (phytoplankton), in the upper part of the water column. […] The depth limit for underwater photosynthesis or primary production is known as the ‘compensation depth‘. This is the depth at which carbon fixed by photosynthesis exactly balances the carbon lost by cellular respiration, so the overall production of new biomass (net primary production) is zero. This depth often corresponds to an underwater light level of 1 per cent of the sunlight just beneath the water surface […] The production of biomass by photosynthesis takes place at all depths above this level, and this zone is referred to as the ‘photic’ zone. […] biological processes in [the] ‘aphotic zone’ are mostly limited to feeding and decomposition. A Secchi disk measurement can be used as a rough guide to the extent of the photic zone: in general, the 1 per cent light level is about twice the Secchi depth.”
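The 'sharp curve' of light with depth is an exponential decay, I(z) = I0·e^(−Kd·z), where Kd is the diffuse attenuation coefficient. The quote's example of losing 50 per cent per metre corresponds to Kd ≈ 0.69 per metre, and the compensation depth then follows from the same formula as the depth where only 1 per cent of surface light remains. The code below just restates that arithmetic; the Kd value is taken from the quoted example rather than from any particular lake. With that attenuation coefficient, everything below roughly 6.6 m is aphotic.

```python
import math

def percent_surface_light(depth_m, kd_per_m):
    # Exponential attenuation: I(z) = I0 * exp(-Kd * z), expressed as % of surface light.
    return 100 * math.exp(-kd_per_m * depth_m)

def photic_zone_depth(kd_per_m, cutoff_percent=1.0):
    # Depth at which light drops to the cutoff (1 % of surface light by convention).
    return math.log(100 / cutoff_percent) / kd_per_m

kd = math.log(2)   # ~0.69 per metre: the "50 per cent lost per metre" example
for z in (1, 2, 3):
    print(f"{z} m: {percent_surface_light(z, kd):.1f} % of surface light")
print(f"1 % (compensation) depth: ~{photic_zone_depth(kd):.1f} m")
```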

“[W]ater colour is now used in […] many powerful ways to track changes in water quality and other properties of lakes, rivers, estuaries, and the ocean. […] Lakes have different colours, hues, and brightness levels as a result of the materials that are dissolved and suspended within them. The purest of lakes are deep blue because the water molecules themselves absorb light in the green and, to a greater extent, red end of the spectrum; they scatter the remaining blue photons in all directions, mostly downwards but also back towards our eyes. […] Algae in the water typically cause it to be green and turbid because their suspended cells and colonies contain chlorophyll and other light-capturing molecules that absorb strongly in the blue and red wavebands, but not green. However there are some notable exceptions. Noxious algal blooms dominated by cyanobacteria are blue-green (cyan) in colour caused by their blue-coloured protein phycocyanin, in addition to chlorophyll.”

“[A]t the largest dimension, at the scale of the entire lake, there has to be a net flow from the inflowing rivers to the outflow, and […] from this landscape perspective, lakes might be thought of as enlarged rivers. Of course, this riverine flow is constantly disrupted by wind-induced movements of the water. When the wind blows across the surface, it drags the surface water with it to generate a downwind flow, and this has to be balanced by a return movement of water at depth. […] In large lakes, the rotation of the Earth has plenty of time to exert its weak effect as the water moves from one side of the lake to the other. As a result, the surface water no longer flows in a straight line, but rather is directed into two or more circular patterns or gyres that can move nearshore water masses rapidly into the centre of the lake and vice versa. Gyres can therefore be of great consequence […] Unrelated to the Coriolis Effect, the interaction between wind-induced currents and the shoreline can also cause water to flow in circular, individual gyres, even in smaller lakes. […] At a much smaller scale, the blowing of wind across a lake can give rise to downward spiral motions in the water, called ‘Langmuir cells’. […] These circulation features are commonly observed in lakes, where the spirals progressing in the general direction of the wind concentrate foam (on days of white-cap waves) or glossy, oily materials (on less windy days) into regularly spaced lines that are parallel to the direction of the wind. […] Density currents must also be included in this brief discussion of water movement […] Cold river water entering a warm lake will be denser than its surroundings and therefore sinks to the bottom, where it may continue to flow for considerable distances. […] Density currents contribute greatly to inshore-offshore exchanges of water, with potential effects on primary productivity, deep-water oxygenation, and the dispersion of pollutants.”


Drainage basin.
Lake Geneva. Lake Malawi. Lake Tanganyika. Lake Victoria. Lake Biwa. Lake Titicaca.
English Lake District.
Proglacial lake. Lake Agassiz. Lake Ojibway.
Lake Taupo.
Manicouagan Reservoir.
Subglacial lake.
Thermokarst (-lake).
Bathymetry. Bathymetric chart. Hypsographic curve.
Várzea forest.
Lake Chad.
Colored dissolved organic matter.
H2O Temperature-density relationship. Thermocline. Epilimnion. Hypolimnion. Monomictic lake. Dimictic lake. Lake stratification.
Capillary wave. Gravity wave. Seiche. Kelvin wave. Poincaré wave.
Benthic boundary layer.
Kelvin–Helmholtz instability.


January 22, 2018 Posted by | Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics