Econstudentlog

The Internet of Things

 

Some links to stuff he talks about in the lecture:

The Internet of Things: making the most of the Second Digital Revolution – A report by the UK Government Chief Scientific Adviser.
South–North Water Transfer Project.
FDA approves first smart pill that tracks drug regimen compliance from the inside.
The Internet of Things (IoT)* units installed base by category from 2014 to 2020.
Share of the IoT market by sub-sector worldwide in 2017.
San Diego to Cover Half the City with Intelligent Streetlights.
IPv4 and IPv6 (specifically, he talks a little about the IPv4 address space problem).
General Data Protection Regulation (GDPR).
Shodan (website).
Mirai botnet.
Gait analysis.
Website reveals 73,000 unprotected security cameras with default passwords. (This was just an example link – it’s unclear if the site he used to illustrate his point in that part of the lecture was actually Insecam, but he does talk about the widespread use of default passwords and related security implications during the lecture).
Strava’s fitness heatmaps are a ‘potential catastrophe’.
‘Secure by Design’ (a very recently published proposed UK IoT code of practice).

March 26, 2018 Posted by | Computer science, Engineering, Lectures | Leave a comment

Quotes

i. “La vérité ne se possède pas, elle se cherche.” (‘You cannot possess the truth, you can only search for it.’ – Albert Jacquard)

ii. “Some physicist might believe that ultimately, we will be able to explain everything. To me, that is utterly stupid […] It seems to me that, if you accept evolution, you can still not expect your dog to get up and start talking German. And that’s because your dog is not genetically programmed to do that. We are human animals, and we are equally bound. There are whole realms of discourse out there that we cannot reach, by definition. There are always going to be limits beyond which we cannot go. Knowing that they are there, you can always hope to move a little closer – but that’s all.” (James M. Buchanan)

iii. “Physics is a wrong tool to describe living systems.” (Donald A. Glaser)

iv. “In the seventeenth century Cartesians refused to accept Newton’s attraction because they could not accept a force that was not transmitted by a medium. Even now many physicists have not yet learned that they should adjust their ideas to the observed reality rather than the other way round.” (Nico van Kampen)

v. “…the human brain is itself a part of nature, fanned into existence by billions of years of sunshine acting on the molecules of the Earth. It is not perfectible in the immediate future, even if biologists should wish to alter the brain […]. What men make of the universe at large is a product of what they can see of it and of their own human nature.” (Nigel Calder)

vi. “If you torture the data enough, nature will always confess.” (Ronald Coase)

vii. “If economists wished to study the horse, they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, “what would I do if I were a horse?”” (-ll-)

viii. “Nothing is as simple as we hope it will be.” (Jim Horning)

ix. “There’s an old saying in politics: anyone dumb enough to run for the job probably is too stupid to have it.” (Ralph Klein)

x. “I never felt the need to do what everyone else did. And I wasn’t troubled by the fact that other people were doing other things.” (Saul Leiter)

xi. “Think wrongly, if you please, but in all cases think for yourself.” (Doris Lessing)

xii. “All political movements are like this — we are in the right, everyone else is in the wrong. The people on our own side who disagree with us are heretics, and they start becoming enemies. With it comes an absolute conviction of your own moral superiority.” (-ll-)

xiii. “An ideological movement is a collection of people many of whom could hardly bake a cake, fix a car, sustain a friendship or a marriage, or even do a quadratic equation, yet they believe they know how to rule the world.” (Kenneth Minogue)

xiv. “The natural order of organisms is a divergent inclusive hierarchy and that hierarchy is recognized by taxic homology.” (Alec Panchen)

xv. “Don Kayman was too good a scientist to confuse his hopes with observations. He would report what he found. But he knew what he wanted to find.” (Frederik Pohl)

xvi. “A barbarian is not aware that he is a barbarian.” (Jack Vance)

xvii. “I do not care to listen; obloquy injures my self-esteem and I am skeptical of praise.” (-ll-)

xviii. “People can be deceived by appeals intended to destroy democracy in the name of democracy.” (Robert A. Dahl)

xix. “If we gather more and more data and establish more and more associations, […] we will not finally find that we know something. We will simply end up having more and more data and larger sets of correlations.” (-ll-)

xx. “Thoughts convince thinkers; for this reason, thoughts convince seldom.” (Karlheinz Deschner)

March 24, 2018 Posted by | Quotes/aphorisms | Leave a comment

The Computer

Below some quotes and links related to the book’s coverage:

“At the heart of every computer is one or more hardware units known as processors. A processor controls what the computer does. For example, it will process what you type in on your computer’s keyboard, display results on its screen, fetch web pages from the Internet, and carry out calculations such as adding two numbers together. It does this by ‘executing’ a computer program that details what the computer should do […] Data and programs are stored in two storage areas. The first is known as main memory and has the property that whatever is stored there can be retrieved very quickly. Main memory is used for transient data – for example, the result of a calculation which is an intermediate result in a much bigger calculation – and is also used to store computer programs while they are being executed. Data in main memory is transient – it will disappear when the computer is switched off. Hard disk memory, also known as file storage or backing storage, contains data that are required over a period of time. Typical entities that are stored in this memory include files of numerical data, word-processed documents, and spreadsheet tables. Computer programs are also stored here while they are not being executed. […] There are a number of differences between main memory and hard disk memory. The first is the retrieval time. With main memory, an item of data can be retrieved by the processor in fractions of microseconds. With file-based memory, the retrieval time is much greater: of the order of milliseconds. The reason for this is that main memory is silicon-based […] hard disk memory is usually mechanical and is stored on the metallic surface of a disk, with a mechanical arm retrieving the data. […] main memory is more expensive than file-based memory”.

The Internet is a network of computers – strictly, it is a network that joins up a number of networks. It carries out a number of functions. First, it transfers data from one computer to another computer […] The second function of the Internet is to enforce reliability. That is, to ensure that when errors occur then some form of recovery process happens; for example, if an intermediate computer fails then the software of the Internet will discover this and resend any malfunctioning data via other computers. A major component of the Internet is the World Wide Web […] The web […] uses the data-transmission facilities of the Internet in a specific way: to store and distribute web pages. The web consists of a number of computers known as web servers and a very large number of computers known as clients (your home PC is a client). Web servers are usually computers that are more powerful than the PCs that are normally found in homes or those used as office computers. They will be maintained by some enterprise and will contain individual web pages relevant to that enterprise; for example, an online book store such as Amazon will maintain web pages for each item it sells. The program that allows users to access the web is known as a browser. […] A part of the Internet known as the Domain Name System (usually referred to as DNS) will figure out where the page is held and route the request to the web server holding the page. The web server will then send the page back to your browser which will then display it on your computer. Whenever you want another page you would normally click on a link displayed on that page and the process is repeated. Conceptually, what happens is simple. However, it hides a huge amount of detail involving the web discovering where pages are stored, the pages being located, their being sent, the browser reading the pages and interpreting how they should be displayed, and eventually the browser displaying the pages. […] without one particular hardware advance the Internet would be a shadow of itself: this is broadband. This technology has provided communication speeds that we could not have dreamed of 15 years ago. […] Typical broadband speeds range from one megabit per second to 24 megabits per second, the lower rate being about 20 times faster than dial-up rates.”
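
The DNS-then-fetch sequence described in that passage can be sketched in a few lines of Python using only the standard library; the host name below is just an illustrative choice, not one taken from the book.

```python
# Minimal sketch of the sequence described above: DNS resolves a host name to
# an address, then the browser-equivalent requests a page from the web server
# at that address and reads back the response.
import socket
import http.client

host = "example.com"          # illustrative host

# Step 1: the Domain Name System maps the name to an IP address.
ip_address = socket.gethostbyname(host)
print(f"{host} resolves to {ip_address}")

# Step 2: request a page from the web server and read the response, which a
# real browser would then interpret and display.
connection = http.client.HTTPSConnection(host, timeout=10)
connection.request("GET", "/")
response = connection.getresponse()
print(response.status, response.reason, len(response.read()), "bytes received")
connection.close()
```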

“A major idea I hope to convey […] is that regarding the computer as just the box that sits on your desk, or as a chunk of silicon that is embedded within some device such as a microwave, is only a partial view. The Internet – or rather broadband access to the Internet – has created a gigantic computer that has unlimited access to both computer power and storage to the point where even applications that we all thought would never migrate from the personal computer are doing just that. […] the Internet functions as a series of computers – or more accurately computer processors – carrying out some task […]. Conceptually, there is little difference between these computers and [a] supercomputer, the only difference is in the details: for a supercomputer the communication between processors is via some internal electronic circuit, while for a collection of computers working together on the Internet the communication is via external circuits used for that network.”

“A computer will consist of a number of electronic circuits. The most important is the processor: this carries out the instructions that are contained in a computer program. […] There are a number of individual circuit elements that make up the computer. Thousands of these elements are combined together to construct the computer processor and other circuits. One basic element is known as an And gate […]. This is an electrical circuit that has two binary inputs A and B and a single binary output X. The output will be one if both the inputs are one and zero otherwise. […] the And gate is only one example – when some action is required, for example adding two numbers together, [the different circuits] interact with each other to carry out that action. In the case of addition, the two binary numbers are processed bit by bit to carry out the addition. […] Whatever actions are taken by a program […] the cycle is the same; an instruction is read into the processor, the processor decodes the instruction, acts on it, and then brings in the next instruction. So, at the heart of a computer is a series of circuits and storage elements that fetch and execute instructions and store data and programs.”
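
As a rough illustration of the gate-level picture in that quote, here is a minimal Python sketch that treats an And gate as a function of two binary inputs and, together with Or and Xor gates, combines gates into a full adder that adds two numbers bit by bit. The function names and the bit-list representation are my own choices.

```python
# Gate-level toy: an And gate as a function of two binary inputs, combined with
# Or and Xor gates into a full adder, which is then used to add two binary
# numbers bit by bit (least significant bit first).
def and_gate(a, b):          # output is 1 only if both inputs are 1
    return a & b

def or_gate(a, b):
    return a | b

def xor_gate(a, b):
    return a ^ b

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry; return (sum_bit, carry_out)."""
    partial = xor_gate(a, b)
    sum_bit = xor_gate(partial, carry_in)
    carry_out = or_gate(and_gate(a, b), and_gate(partial, carry_in))
    return sum_bit, carry_out

def add_binary(x_bits, y_bits):
    """Add two equal-length bit lists, processing them bit by bit."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        sum_bit, carry = full_adder(a, b, carry)
        result.append(sum_bit)
    return result + [carry]

print(add_binary([0, 1, 1], [1, 1, 0]))  # 6 + 3 -> [1, 0, 0, 1], i.e. 1001 = 9
```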

“In essence, a hard disk unit consists of one or more circular metallic disks which can be magnetized. Each disk has a very large number of magnetizable areas which can either represent zero or one depending on the magnetization. The disks are rotated at speed. The unit also contains an arm or a number of arms that can move laterally and which can sense the magnetic patterns on the disk. […] When a processor requires some data that is stored on a hard disk […] then it issues an instruction to find the file. The operating system – the software that controls the computer – will know where the file starts and ends and will send a message to the hard disk to read the data. The arm will move laterally until it is over the start position of the file and when the revolving disk passes under the arm the magnetic pattern that represents the data held in the file is read by it. Accessing data on a hard disk is a mechanical process and usually takes a small number of milliseconds to carry out. Compared with the electronic speeds of the computer itself – normally measured in fractions of a microsecond – this is incredibly slow. Because disk access is slow, systems designers try to minimize the amount of access required to files. One technique that has been particularly effective is known as caching. It is, for example, used in web servers. Such servers store pages that are sent to browsers for display. […] Caching involves placing the frequently accessed pages in some fast storage medium such as flash memory and keeping the remainder on a hard disk.”
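
The caching idea in that passage can be sketched as a dictionary sitting in front of a deliberately slow 'disk' read; the page names and the simulated 5 ms delay below are illustrative only.

```python
# Minimal sketch of caching: frequently accessed pages are kept in fast storage
# (here a dictionary standing in for flash memory), so the slow disk access is
# only paid on the first request for a page.
import time

def read_page_from_disk(page_name):
    time.sleep(0.005)               # pretend a mechanical disk seek takes ~5 ms
    return f"<html>contents of {page_name}</html>"

cache = {}                          # fast in-memory store

def get_page(page_name):
    if page_name not in cache:      # cache miss: pay the slow disk access once
        cache[page_name] = read_page_from_disk(page_name)
    return cache[page_name]         # cache hit: served from fast memory

get_page("index.html")              # first request: read from "disk"
get_page("index.html")              # repeat request: served from the cache
```

In everyday Python the standard library's functools.lru_cache decorator provides the same pattern with a size limit and eviction policy built in.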

“The first computers had a single hardware processor that executed individual instructions. It was not too long before researchers started thinking about computers that had more than one processor. The simple theory here was that if a computer had n processors then it would be n times faster. […] it is worth debunking this notion. If you look at many classes of problems […], you see that a strictly linear increase in performance is not achieved. If a problem that is solved by a single computer is solved in 20 minutes, then you will find a dual processor computer solving it in perhaps 11 minutes. A 3-processor computer may solve it in 9 minutes, and a 4-processor computer in 8 minutes. There is a law of diminishing returns; often, there comes a point when adding a processor slows down the computation. What happens is that each processor needs to communicate with the others, for example passing on the result of a computation; this communicational overhead becomes bigger and bigger as you add processors to the point when it dominates the amount of useful work that is done. The sort of problems where they are effective is where a problem can be split up into sub-problems that can be solved almost independently by each processor with little communication.”
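
A toy model makes the diminishing-returns argument concrete: divide the computational work across n processors, but charge a fixed communication cost for each extra processor. The specific numbers below are mine, chosen only to show the characteristic minimum; they are not the book's figures.

```python
# Illustrative model of the diminishing returns described above: each extra
# processor divides the useful work but adds communication overhead, so total
# run time falls at first and eventually rises again.
def run_time(processors, compute_time=20.0, comm_cost_per_link=0.5):
    return compute_time / processors + comm_cost_per_link * (processors - 1)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} processors -> {run_time(n):5.2f} minutes")
# Output falls at first (20.00, 10.50, 6.50, 6.00) and then rises again
# (8.75, 16.12) once communication overhead dominates the useful work.
```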

Symmetric encryption methods are very efficient and can be used to scramble large files or long messages being sent from one computer to another. Unfortunately, symmetric techniques suffer from a major problem: if there are a number of individuals involved in a data transfer or in reading a file, each has to know the same key. This makes it a security nightmare. […] public key cryptography removed a major problem associated with symmetric cryptography: that of a large number of keys in existence some of which may be stored in an insecure way. However, a major problem with asymmetric cryptography is the fact that it is very inefficient (about 10,000 times slower than symmetric cryptography): while it can be used for short messages such as email texts, it is far too inefficient for sending gigabytes of data. However, […] when it is combined with symmetric cryptography, asymmetric cryptography provides very strong security. […] One very popular security scheme is known as the Secure Sockets Layer – normally shortened to SSL. It is based on the concept of a one-time pad. […] SSL uses public key cryptography to communicate the randomly generated key between the sender and receiver of a message. This key is only used once for the data interchange that occurs and, hence, is an electronic analogue of a one-time pad. When each of the parties to the interchange has received the key, they encrypt and decrypt the data employing symmetric cryptography, with the generated key carrying out these processes. […] There is an impression amongst the public that the main threats to security and to privacy arise from technological attack. However, the threat from more mundane sources is equally high. Data thefts, damage to software and hardware, and unauthorized access to computer systems can occur in a variety of non-technical ways: by someone finding computer printouts in a waste bin; by a window cleaner using a mobile phone camera to take a picture of a display containing sensitive information; by an office cleaner stealing documents from a desk; by a visitor to a company noting down a password written on a white board; by a disgruntled employee putting a hammer through the main server and the backup server of a company; or by someone dropping an unencrypted memory stick in the street.”
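
The hybrid scheme described above (asymmetric cryptography used only to exchange a symmetric key, symmetric cryptography used for the bulk data) can be sketched with the third-party Python cryptography package, assuming a reasonably recent version of it. This illustrates the general pattern rather than SSL/TLS itself, and all names below are mine.

```python
# A sketch of hybrid encryption: a symmetric key does the bulk encryption, and
# slow asymmetric (RSA) encryption only protects that small key in transit.
# Requires: pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver publishes an RSA public key and keeps the private key secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: generate a fresh symmetric key, encrypt the (possibly large) message
# with it, then encrypt only the small key with the receiver's public key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"a large message goes here")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: unwrap the symmetric key with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"a large message goes here"
```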

“The basic architecture of the computer has remained unchanged for six decades since IBM developed the first mainframe computers. It consists of a processor that reads software instructions one by one and executes them. Each instruction will result in data being processed, for example by being added together; and data being stored in the main memory of the computer or being stored on some file-storage medium; or being sent to the Internet or to another computer. This is what is known as the von Neumann architecture; it was named after John von Neumann […]. His key idea, which still holds sway today, is that in a computer the data and the program are both stored in the computer’s memory in the same address space. There have been few challenges to the von Neumann architecture.”
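
A toy stored-program machine shows the von Neumann idea of keeping instructions and data in the same memory; the instruction format below (an opcode plus three addresses) is invented purely for illustration.

```python
# Toy stored-program machine: instructions and data live side by side in one
# memory, and the processor loops through fetch, decode, execute.
def run(memory):
    pc = 0                                    # program counter
    while True:
        opcode, a, b, c = memory[pc]          # fetch and decode
        if opcode == "ADD":                   # memory[c] = memory[a] + memory[b]
            memory[c] = memory[a] + memory[b]
        elif opcode == "PRINT":
            print(memory[a])
        elif opcode == "HALT":
            return
        pc += 1                               # move on to the next instruction

memory = [
    ("ADD", 3, 4, 5),      # address 0: add the values at addresses 3 and 4
    ("PRINT", 5, 0, 0),    # address 1: print the value at address 5
    ("HALT", 0, 0, 0),     # address 2: stop
    2, 3, 0,               # addresses 3-5: data stored in the same memory
]
run(memory)                # prints 5
```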

“[A] ‘neural network’ […] consists of an input layer that can sense various signals from some environment […]. In the middle (hidden layer), there are a large number of processing elements (neurones) which are arranged into sub-layers. Finally, there is an output layer which provides a result […]. It is in the middle layer that the work is done in a neural computer. What happens is that the network is trained by giving it examples of the trend or item that is to be recognized. What the training does is to strengthen or weaken the connections between the processing elements in the middle layer until, when combined, they produce a strong signal when a new case is presented to them that matches the previously trained examples and a weak signal when an item that does not match the examples is encountered. Neural networks have been implemented in hardware, but most of the implementations have been via software where the middle layer has been implemented in chunks of code that carry out the learning process. […] although the initial impetus was to use ideas in neurobiology to develop neural architectures based on a consideration of processes in the brain, there is little resemblance between the internal data and software now used in commercial implementations and the human brain.”
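
A minimal numpy sketch of such a network, with one hidden ('middle') layer trained by repeatedly strengthening or weakening its connection weights, might look as follows; the task (XOR), the layer sizes, and the learning settings are illustrative choices of mine.

```python
# Two inputs, one hidden layer of four processing elements, one output.
# Training adjusts the connection weights until inputs matching the trained
# pattern give a strong output signal and the others give a weak one.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training examples
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden connections
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output connections

for _ in range(20000):
    hidden = sigmoid(X @ w1 + b1)                   # forward pass
    output = sigmoid(hidden @ w2 + b2)
    d_out = (output - y) * output * (1 - output)    # backward pass:
    d_hid = (d_out @ w2.T) * hidden * (1 - hidden)  # nudge weights to cut error
    w2 -= hidden.T @ d_out
    b2 -= d_out.sum(axis=0)
    w1 -= X.T @ d_hid
    b1 -= d_hid.sum(axis=0)

print(output.round(2))  # should end up close to [[0], [1], [1], [0]]; training
                        # a tiny network can occasionally stall, in which case
                        # re-running with a different seed helps.
```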

Links:

Computer.
Byte. Bit.
Moore’s law.
Computer program.
Programming language. High-level programming language. Low-level programming language.
Zombie (computer science).
Therac-25.
Cloud computing.
Instructions per second.
ASCII.
Fetch-execute cycle.
Grace Hopper. Software Bug.
Transistor. Integrated circuit. Very-large-scale integration. Wafer (electronics). Photomask.
Read-only memory (ROM). Read-write memory (RWM). Bus (computing). Address bus. Programmable read-only memory (PROM). Erasable programmable read-only memory (EPROM). Electrically erasable programmable read-only memory (EEPROM). Flash memory. Dynamic random-access memory (DRAM). Static random-access memory (static RAM/SRAM).
Hard disc.
Miniaturization.
Wireless communication.
Radio-frequency identification (RFID).
Metadata.
NP-hardness. Set partition problem. Bin packing problem.
Routing.
Cray X-MP. Beowulf cluster.
Vector processor.
Folding@home.
Denial-of-service attack. Melissa (computer virus). Malware. Firewall (computing). Logic bomb. Fork bomb/rabbit virus. Cryptography. Caesar cipher. Social engineering (information security).
Application programming interface.
Data mining. Machine translation. Machine learning.
Functional programming.
Quantum computing.

March 19, 2018 Posted by | Books, Computer science, Cryptography, Engineering | Leave a comment

Marine Biology (II)

Below some observations and links related to the second half of the book’s coverage:

“[C]oral reefs occupy a very small proportion of the planet’s surface – about 284,000 square kilometres – roughly equivalent to the size of Italy [yet they] are home to an incredible diversity of marine organisms – about a quarter of all marine species […]. Coral reef systems provide food for hundreds of millions of people, with about 10 per cent of all fish consumed globally caught on coral reefs. […] Reef-building corals thrive best at sea temperatures above about 23°C and few exist where sea temperatures fall below 18°C for significant periods of time. Thus coral reefs are absent at tropical latitudes where upwelling of cold seawater occurs, such as the west coasts of South America and Africa. […] they are generally restricted to areas of clear water less than about 50 metres deep. Reef-building corals are very intolerant of any freshening of seawater […] and so do not occur in areas exposed to intermittent influxes of freshwater, such as near the mouths of rivers, or in areas where there are high amounts of rainfall run-off. This is why coral reefs are absent along much of the tropical Atlantic coast of South America, which is exposed to freshwater discharge from the Amazon and Orinoco Rivers. Finally, reef-building corals flourish best in areas with moderate to high wave action, which keeps the seawater well aerated […]. Spectacular and productive coral reef systems have developed in those parts of the Global Ocean where this special combination of physical conditions converges […] Each colony consists of thousands of individual animals called polyps […] all reef-building corals have entered into an intimate relationship with plant cells. The tissues lining the inside of the tentacles and stomach cavity of the polyps are packed with photosynthetic cells called zooxanthellae, which are photosynthetic dinoflagellates […] Depending on the species, corals receive anything from about 50 per cent to 95 per cent of their food from their zooxanthellae. […] Healthy coral reefs are very productive marine systems. This is in stark contrast to the nutrient-poor and unproductive tropical waters adjacent to reefs. Coral reefs are, in general, roughly one hundred times more productive than the surrounding environment”.

“Overfishing constitutes a significant threat to coral reefs at this time. About an eighth of the world’s population – roughly 875 million people – live within 100 kilometres of a coral reef. Most of the people live in developing countries and island nations and depend greatly on fish obtained from coral reefs as a food source. […] Some of the fishing practices are very harmful. Once the large fish are removed from a coral reef, it becomes increasingly more difficult to make a living harvesting the more elusive and lower-value smaller fish that remain. Fishers thus resort to more destructive techniques such as dynamiting parts of the reef and scooping up the dead and stunned fish that float to the surface. People capturing fish for the tropical aquarium trade will often poison parts of the reef with sodium cyanide which paralyses the fish, making them easier to catch. An unfortunate side effect of this practice is that the poison kills corals. […] Coral reefs have only been seriously studied since the 1970s, which in most cases was well after human impacts had commenced. This makes it difficult to define what might actually constitute a ‘natural’ and healthy coral reef system, as would have existed prior to extensive human impacts.”

“Mangrove is a collective term applied to a diverse group of trees and shrubs that colonize protected muddy intertidal areas in tropical and subtropical regions, creating mangrove forests […] Mangroves are of great importance from a human perspective. The sheltered waters of a mangrove forest provide important nursery areas for juvenile fish, crabs, and shrimp. Many commercial fisheries depend on the existence of healthy mangrove forests, including blue crab, shrimp, spiny lobster, and mullet fisheries. Mangrove forests also stabilize the foreshore and protect the adjacent land from erosion, particularly from the effects of large storms and tsunamis. They also act as biological filters by removing excess nutrients and trapping sediment from land run-off before it enters the coastal environment, thereby protecting other habitats such as seagrass meadows and coral reefs. […] [However] mangrove forests are disappearing rapidly. In a twenty-year period between 1980 and 2000 the area of mangrove forest globally declined from around 20 million hectares to below 15 million hectares. In some specific regions the rate of mangrove loss is truly alarming. For example, Puerto Rico lost about 89 per cent of its mangrove forests between 1930 and 1985, while the southern part of India lost about 96 per cent of its mangroves between 1911 and 1989.”

“[A]bout 80 per cent of the entire volume of the Global Ocean, or roughly one billion cubic kilometres, consists of seawater with depths greater than 1,000 metres […] The deep ocean is a permanently dark environment devoid of sunlight, the last remnants of which cannot penetrate much beyond 200 metres in most parts of the Global Ocean, and no further than 800 metres or so in even the clearest oceanic waters. The only light present in the deep ocean is of biological origin […] Except in a few very isolated places, the deep ocean is a permanently cold environment, with sea temperatures ranging from about 2° to 4°C. […] Since there is no sunlight, there is no plant life, and thus no primary production of organic matter by photosynthesis. The base of the food chain in the deep ocean consists mostly of a ‘rain’ of small particles of organic material sinking down through the water column from the sunlit surface waters of the ocean. This reasonably constant rain of organic material is supplemented by the bodies of large fish and marine mammals that sink more rapidly to the bottom following death, and which provide sporadic feasts for deep-ocean bottom dwellers. […] Since food is a scarce commodity for deep-ocean fish, full advantage must be taken of every meal encountered. This has resulted in a number of interesting adaptations. Compared to fish in the shallow ocean, many deep-ocean fish have very large mouths capable of opening very wide, and often equipped with numerous long, sharp, inward-pointing teeth. […] These fish can capture and swallow whole prey larger than themselves so as not to pass up a rare meal simply because of its size. These fish also have greatly extensible stomachs to accommodate such meals.”

“In the pelagic environment of the deep ocean, animals must be able to keep themselves within an appropriate depth range without using up energy in their food-poor habitat. This is often achieved by reducing the overall density of the animal to that of seawater so that it is neutrally buoyant. Thus the tissues and bones of deep-sea fish are often rather soft and watery. […] There is evidence that deep-ocean organisms have developed biochemical adaptations to maintain the functionality of their cell membranes under pressure, including adjusting the kinds of lipid molecules present in membranes to retain membrane fluidity under high pressure. High pressures also affect protein molecules, often preventing them from folding up into the correct shapes for them to function as efficient metabolic enzymes. There is evidence that deep-ocean animals have evolved pressure-resistant variants of common enzymes that mitigate this problem. […] The pattern of species diversity of the deep-ocean benthos appears to differ from that of other marine communities, which are typically dominated by a small number of abundant and highly visible species which overshadow the presence of a large number of rarer and less obvious species which are also present. In the deep-ocean benthic community, in contrast, no one group of species tends to dominate, and the community consists of a high number of different species all occurring in low abundance. […] In general, species diversity increases with the size of a habitat – the larger the area of a habitat, the more species that have developed ways to successfully live in that habitat. Since the deep-ocean bottom is the largest single habitat on the planet, it follows that species diversity would be expected to be high.”

Seamounts represent a special kind of biological hotspot in the deep ocean. […] In contrast to the surrounding flat, soft-bottomed abyssal plains, seamounts provide a complex rocky platform that supports an abundance of organisms that are distinct from the surrounding deep-ocean benthos. […] Seamounts support a great diversity of fish species […] This [has] triggered the creation of new deep-ocean fisheries focused on seamounts. […] [However these species are generally] very slow-growing and long-lived and mature at a late age, and thus have a low reproductive potential. […] Seamount fisheries have often been described as mining operations rather than sustainable fisheries. They typically collapse within a few years of the start of fishing and the trawlers then move on to other unexplored seamounts to maintain the fishery. The recovery of localized fisheries will inevitably be very slow, if achievable at all, because of the low reproductive potential of these deep-ocean fish species. […] Comparisons of ‘fished’ and ‘unfished’ seamounts have clearly shown the extent of habitat damage and loss of species diversity brought about by trawl fishing, with the dense coral habitats reduced to rubble over much of the area investigated. […] Unfortunately, most seamounts exist in areas beyond national jurisdiction, which makes it very difficult to regulate fishing activities on them, although some efforts are underway to establish international treaties to better manage and protect seamount ecosystems.”

“Hydrothermal vents are unstable and ephemeral features of the deep ocean. […] The lifespan of a typical vent is likely in the order of tens of years. Thus the rich communities surrounding vents have a very limited lifespan. Since many vent animals can live only near vents, and the distance between vent systems can be hundreds to thousands of kilometres, it is a puzzle as to how vent animals escape a dying vent and colonize other distant vents or newly created vents. […] Hydrothermal vents are [however] not the only source of chemical-laden fluids supporting unique chemosynthetic-based communities in the deep ocean. Hydrogen sulphide and methane also ooze from the ocean bottom at some locations at temperatures similar to the surrounding seawater. These so-called ‘cold seeps’ are often found along continental margins […] The communities associated with cold seeps are similar to hydrothermal vent communities […] Cold seeps appear to be more permanent sources of fluid compared to the ephemeral nature of hot water vents.”

“Seepage of crude oil into the marine environment occurs naturally from oil-containing geological formations below the seabed. It is estimated that around 600,000 tonnes of crude oil seeps into the marine environment each year, which represents almost half of all the crude oil entering the oceans. […] The human activities associated with exploring for and producing oil result in the release on average of an estimated 38,000 tonnes of crude oil into the oceans each year, which is about 6 per cent of the total anthropogenic input of oil into the oceans worldwide. Although small in comparison to natural seepage, crude oil pollution from this source can cause serious damage to coastal ecosystems because it is released near the coast and sometimes in very large, concentrated amounts. […] The transport of oil and oil products around the globe in tankers results in the release of about 150,000 tonnes of oil worldwide each year on average, or about 22 per cent of the total anthropogenic input. […] About 480,000 tonnes of oil make their way into the marine environment each year worldwide from leakage associated with the consumption of oil-derived products in cars and trucks, and to a lesser extent in boats. Oil lost from the operation of cars and trucks collects on paved urban areas from where it is washed off into streams and rivers, and from there into the oceans. Surprisingly, this represents the most significant source of human-derived oil pollution into the marine environment – about 72 per cent of the total. Because it is a very diffuse source of pollution, it is the most difficult to control.”
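
The percentages quoted in that passage follow directly from the tonnage figures; here is a quick arithmetic check (the category labels are my shorthand for the sources described in the text).

```python
# Shares of anthropogenic oil input implied by the tonnage figures above.
sources_tonnes = {
    "exploration and production": 38_000,
    "tanker transport": 150_000,
    "consumption run-off (cars, trucks, boats)": 480_000,
}
anthropogenic_total = sum(sources_tonnes.values())          # 668,000 tonnes
for source, tonnes in sources_tonnes.items():
    share = 100 * tonnes / anthropogenic_total
    print(f"{source}: {share:.0f}% of anthropogenic input")
# Roughly 6%, 22%, and 72%, matching the quote; natural seepage (about 600,000
# tonnes) is of a similar size to the whole anthropogenic total.
```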

“Today it has been estimated that virtually all of the marine food resources in the Mediterranean sea have been reduced to less than 50 per cent of their original abundance […] The greatest impact has been on the larger predatory fish, which were the first to be targeted by fishers. […] It is estimated that, collectively, the European fish stocks of today are just one-tenth of their size in 1900. […] In 1950 the total global catch of marine seafood was just less than twenty million tonnes fresh weight. This increased steadily and rapidly until by the late 1980s more than eighty million tonnes were being taken each year […] Starting in the early 1990s, however, yields began to show signs of levelling off. […] By far the most heavily exploited marine fishery in the world is the Peruvian anchoveta (Engraulis ringens) fishery, which can account for 10 per cent or more of the global marine catch of seafood in any particular year. […] The anchoveta is a very oily fish, which makes it less desirable for direct consumption by humans. However, the high oil content makes it ideal for the production of fish meal and fish oil […] the demand for fish meal and fish oil is huge and about a third of the entire global catch of fish is converted into these products rather than consumed directly by humans. Feeding so much fish protein to livestock comes with a considerable loss of potential food energy (around 25 per cent) compared to if it was eaten directly by humans. This could be viewed as a potential waste of available energy for a rapidly growing human population […] around 90 per cent of the fish used to produce fish meal and oil is presently unpalatable to most people and thus unmarketable in large quantities as a human food”.

“On heavily fished areas of the continental shelves, the same parts of the sea floor can be repeatedly trawled many times per year. Such intensive bottom trawling causes great cumulative damage to seabed habitats. The trawls scrape and pulverize rich and complex bottom habitats built up over centuries by living organisms such as tube worms, cold-water corals, and oysters. These habitats are eventually reduced to uniform stretches of rubble and sand. For all intents and purposes these areas are permanently altered and become occupied by a much changed and much less rich community adapted to frequent disturbance.”

“The eighty million tonnes or so of marine seafood caught each year globally equates to about eleven kilograms of wild-caught marine seafood per person on the planet. […] What is perfectly clear […] on the basis of theory backed up by real data on marine fish catches, is that marine fisheries are now fully exploited and that there is little if any headroom for increasing the amount of wild-caught fish humans can extract from the oceans to feed a burgeoning human population. […] This conclusion is solidly supported by the increasingly precarious state of global marine fishery resources. The most recent information from the Food and Agriculture Organization of the United Nations (The State of World Fisheries and Aquaculture 2010) shows that over half (53 per cent) of all fish stocks are fully exploited – their current catches are at or close to their maximum sustainable levels of production and there is no scope for further expansion. Another 32 per cent are overexploited and in decline. Of the remaining 15 per cent of stocks, 12 per cent are considered moderately exploited and only 3 per cent underexploited. […] in the mid 1970s 40 per cent of all fish stocks were in [the moderately exploited or unexploited] category as opposed to around 15 per cent now. […] the real question is not so much whether we can get more fish from the sea but whether we can sustain the amount of fish we are harvesting at present”.
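
The eleven-kilograms-per-person figure is simple arithmetic; the world population value below is my own assumption, chosen to roughly match the period the book covers.

```python
# Wild-caught marine seafood per person, from the global catch figure above.
global_catch_kg = 80e6 * 1000        # eighty million tonnes, in kilograms
world_population = 7.2e9             # assumed population, roughly early 2010s
print(round(global_catch_kg / world_population, 1))   # about 11 kg per person
```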

Links:

Scleractinia.
Atoll. Fringing reef. Barrier reef.
Corallivore.
Broadcast spawning.
Acanthaster planci.
Coral bleaching. Ocean acidification.
Avicennia germinans. Pneumatophores. Lenticel.
Photophore. Lanternfish. Anglerfish. Black swallower.
Deep scattering layer. Taylor column.
Hydrothermal vent. Black smokers and white smokers. Chemosynthesis. Siboglinidae.
Intertidal zone. Tides. Tidal range.
Barnacle. Mussel.
Clupeidae. Gadidae. Scombridae.

March 16, 2018 Posted by | Biology, Books, Chemistry, Ecology, Evolutionary biology, Geology | Leave a comment

Safety-Critical Systems

Some related links to topics covered in the lecture:

Safety-critical system.
Safety engineering.
Fault tree analysis.
Failure mode and effects analysis.
Fail-safe.
Value of a statistical life.
ALARP principle.
Hazards and Risk (HSA).
Software system safety.
Aleatoric and epistemic uncertainty.
N-version programming.
An experimental evaluation of the assumption of independence in multiversion programming (Knight & Leveson).
Safety integrity level.
Software for Dependable Systems – Sufficient Evidence? (consensus study report).

March 15, 2018 Posted by | Computer science, Economics, Engineering, Lectures, Statistics | Leave a comment

Words

Almost all the words included in this post are words which I encountered while reading the books The Mauritius Command, Desolation Island and You Don’t Have to Be Evil to Work Here, But it Helps.

Aleatory. Tenesmus. Celerity. Pelisse. Collop. Clem. Aviso. Crapulous. Farinaceous. Parturient. Tormina. Scend. Fascine. Distich. Appetency/appetence. Calipash. Tergiversation. Polypody. Prodigious. Teredo.

Rapacity. Cappabar. Chronometer. Figgy-dowdy. Chamade. Hauteur. Futtock. Obnubilate. Offing. Cleat. Trephine. Promulgate. Hieratic. Cockle. Froward. Aponeurosis. Lixiviate. Cupellation. Plaice. Sharper.

Morosity. Mephitic. Glaucous. Libidinous. Grist. Tilbury. Surplice. Megrim. Cumbrous. Pule. Pintle. Fifer. Roadstead. Quadrumane. Peacoat. Burgher. Cuneate. Tundish. Bung. Fother.

Dégagé. Esculent. Genuflect. Lictor. Drogue. Oakum. Spume. Gudgeon. Firk. Mezzanine. Faff. Manky. Titchy. Sprocket. Conveyancing. Apportionment. Plonker. Flammulated. Cataract. Demersal.

March 15, 2018 Posted by | Books, Language | Leave a comment

Marine Biology (I)

This book was ‘okay’.

Some quotes and links related to the first half of the book below.

Quotes:

“The Global Ocean has come to be divided into five regional oceans – the Pacific, Atlantic, Indian, Arctic, and Southern Oceans […] These oceans are large, seawater-filled basins that share characteristic structural features […] The edge of each basin consists of a shallow, gently sloping extension of the adjacent continental land mass and is termed the continental shelf or continental margin. Continental shelves typically extend off-shore to depths of a couple of hundred metres and vary from several kilometres to hundreds of kilometres in width. […] At the outer edge of the continental shelf, the seafloor drops off abruptly and steeply to form the continental slope, which extends down to depths of 2–3 kilometres. The continental slope then flattens out and gives way to a vast expanse of flat, soft, ocean bottom — the abyssal plain — which extends over depths of about 3–5 kilometres and accounts for about 76 per cent of the Global Ocean floor. The abyssal plains are transected by extensive mid-ocean ridges—underwater mountain chains […]. Mid-ocean ridges form a continuous chain of mountains that extend linearly for 65,000 kilometres across the floor of the Global Ocean basins […]. In some places along the edges of the abyssal plains the ocean bottom is cut by narrow, oceanic trenches or canyons which plunge to extraordinary depths — 3–4 kilometres below the surrounding seafloor — and are thousands of kilometres long but only tens of kilometres wide. […] Seamounts are another distinctive and dramatic feature of ocean basins. Seamounts are typically extinct volcanoes that rise 1,000 or more metres above the surrounding ocean but do not reach the surface of the ocean. […] Seamounts generally occur in chains or clusters in association with mid-ocean ridges […] The Global Ocean contains an estimated 100,000 or so seamounts that rise more than 1,000 metres above the surrounding deep-ocean floor. […] on a planetary scale, the surface of the Global Ocean is moving in a series of enormous, roughly circular, wind-driven current systems, or gyres […] These gyres transport enormous volumes of water and heat energy from one part of an ocean basin to another”.

“We now know that the oceans are literally teeming with life. Viruses […] are astoundingly abundant – there are around ten million viruses per millilitre of seawater. Bacteria and other microorganisms occur at concentrations of around 1 million per millilitre”

“The water in the oceans is in the form of seawater, a dilute brew of dissolved ions, or salts […] Chloride and sodium ions are the predominant salts in seawater, along with smaller amounts of other ions such as sulphate, magnesium, calcium, and potassium […] The total amount of dissolved salts in seawater is termed its salinity. Seawater typically has a salinity of roughly 35 – equivalent to about 35 grams of salts in one kilogram of seawater. […] Most marine organisms are exposed to seawater that, compared to the temperature extremes characteristic of terrestrial environments, ranges within a reasonably moderate range. Surface waters in tropical parts of ocean basins are consistently warm throughout the year, ranging from about 20–27°C […]. On the other hand, surface seawater in polar parts of ocean basins can get as cold as −1.9°C. Sea temperatures typically decrease with depth, but not in a uniform fashion. A distinct zone of rapid temperature transition is often present that separates warm seawater at the surface from cooler deeper seawater. This zone is called the thermocline layer […]. In tropical ocean waters the thermocline layer is a strong, well-defined and permanent feature. It may start at around 100 metres and be a hundred or so metres thick. Sea temperatures above the thermocline can be a tropical 25°C or more, but only 6–7°C just below the thermocline. From there the temperature drops very gradually with increasing depth. Thermoclines in temperate ocean regions are a more seasonal phenomenon, becoming well established in the summer as the sun heats up the surface waters, and then breaking down in the autumn and winter. Thermoclines are generally absent in the polar regions of the Global Ocean. […] As a rule of thumb, in the clearest ocean waters some light will penetrate to depths of 150-200 metres, with red light being absorbed within the first few metres and green and blue light penetrating the deepest. At certain times of the year in temperate coastal seas light may penetrate only a few tens of metres […] In the oceans, pressure increases by an additional atmosphere every 10 metres […] Thus, an organism living at a depth of 100 metres on the continental shelf experiences a pressure ten times greater than an organism living at sea level; a creature living at 5 kilometres depth on an abyssal plain experiences pressures some 500 times greater than at the surface”.
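
The pressure figures in that quote follow from the stated rule of thumb of roughly one additional atmosphere for every 10 metres of depth, on top of the one atmosphere already present at the surface; a quick check:

```python
# Approximate total pressure at depth, using the rule of thumb quoted above.
def pressure_atm(depth_m):
    return 1 + depth_m / 10          # surface pressure + hydrostatic pressure

for depth in (0, 100, 1000, 5000):
    print(f"{depth:>5} m: about {pressure_atm(depth):>4.0f} atm")
# Roughly ten times the surface pressure at 100 m, and roughly 500 times at
# 5,000 m, as stated in the quote.
```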

“With very few exceptions, dissolved oxygen is reasonably abundant throughout all parts of the Global Ocean. However, the amount of oxygen in seawater is much less than in air — seawater at 20°C contains about 5.4 millilitres of oxygen per litre of seawater, whereas air at this temperature contains about 210 millilitres of oxygen per litre. The colder the seawater, the more oxygen it contains […]. Oxygen is not distributed evenly with depth in the oceans. Oxygen levels are typically high in a thin surface layer 10–20 metres deep. Here oxygen from the atmosphere can freely diffuse into the seawater […] Oxygen concentration then decreases rapidly with depth and reaches very low levels, sometimes close to zero, at depths of around 200–1,000 metres. This region is referred to as the oxygen minimum zone […] This zone is created by the low rates of replenishment of oxygen diffusing down from the surface layer of the ocean, combined with the high rates of depletion of oxygen by decaying particulate organic matter that sinks from the surface and accumulates at these depths. Beneath the oxygen minimum zone, oxygen content increases again with depth such that the deep oceans contain quite high levels of oxygen, though not generally as high as in the surface layer. […] In contrast to oxygen, carbon dioxide (CO2) dissolves readily in seawater. Some of it is then converted into carbonic acid (H2CO3), bicarbonate ion (HCO3-), and carbonate ion (CO32-), with all four compounds existing in equilibrium with one another […] The pH of seawater is inversely proportional to the amount of carbon dioxide dissolved in it. […] the warmer the seawater, the less carbon dioxide it can absorb. […] Seawater is naturally slightly alkaline, with a pH ranging from about 7.5 to 8.5, and marine organisms have become well adapted to life within this stable pH range. […] In the oceans, carbon is never a limiting factor to marine plant photosynthesis and growth, as it is for terrestrial plants.”

“Since the beginning of the industrial revolution, the average pH of the Global Ocean has dropped by about 0.1 pH unit, making it 30 per cent more acidic than in pre-industrial times. […] As a result, more and more parts of the oceans are falling below a pH of 7.5 for longer periods of time. This trend, termed ocean acidification, is having profound impacts on marine organisms and the overall functioning of the marine ecosystem. For example, many types of marine organisms such as corals, clams, oysters, sea urchins, and starfish manufacture external shells or internal skeletons containing calcium carbonate. When the pH of seawater drops below about 7.5, calcium carbonate starts to dissolve, and thus the shells and skeletons of these organisms begin to erode and weaken, with obvious impacts on the health of the animal. Also, these organisms produce their calcium carbonate structures by combining calcium dissolved in seawater with carbonate ion. As the pH decreases, more of the carbonate ions in seawater become bound up with the increasing numbers of hydrogen ions, making fewer carbonate ions available to the organisms for shell-forming purposes. It thus becomes more difficult for these organisms to secrete their calcium carbonate structures and grow.”
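
The 'more acidic' percentage follows from the logarithmic definition of pH: hydrogen ion concentration is proportional to ten raised to the power of minus pH. A small check of that conversion (the exact figure depends on the precise pH change assumed):

```python
# Convert a drop in pH into a percentage increase in hydrogen ion concentration.
delta_pH = 0.1
increase = 10 ** delta_pH - 1
print(f"A {delta_pH} pH drop means about {increase:.0%} more hydrogen ions")
# About 26% for a drop of exactly 0.1 units; a slightly larger drop gives the
# roughly 30 per cent figure quoted above.
```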

“Roughly half of the planet’s primary production — the synthesis of organic compounds by chlorophyll-bearing organisms using energy from the sun—is produced within the Global Ocean. On land the primary producers are large, obvious, and comparatively long-lived — the trees, shrubs, and grasses characteristic of the terrestrial landscape. The situation is quite different in the oceans where, for the most part, the primary producers are minute, short-lived microorganisms suspended in the sunlit surface layer of the oceans. These energy-fixing microorganisms — the oceans’ invisible forest — are responsible for almost all of the primary production in the oceans. […] A large amount, perhaps 30-50 per cent, of marine primary production is produced by bacterioplankton comprising tiny marine photosynthetic bacteria ranging from about 0.5 to 2 μm in size. […] light availability and the strength of vertical mixing are important factors limiting primary production in the oceans. Nutrient availability is the other main factor limiting the growth of primary producers. One important nutrient is nitrogen […] nitrogen is a key component of amino acids, which are the building blocks of proteins. […] Photosynthetic marine organisms also need phosphorus, which is a requirement for many important biological functions, including the synthesis of nucleic acids, a key component of DNA. Phosphorus in the oceans comes naturally from the erosion of rocks and soils on land, and is transported into the oceans by rivers, much of it in the form of dissolved phosphate (PO43−), which can be readily absorbed by marine photosynthetic organisms. […] Inorganic nitrogen and phosphorus compounds are abundant in deep-ocean waters. […] In practice, inorganic nitrogen and phosphorus compounds are not used up at exactly the same rate. Thus one will be depleted before the other and becomes the limiting nutrient at the time, preventing further photosynthesis and growth of marine primary producers until it is replenished. Nitrogen is often considered to be the rate-limiting nutrient in most oceanic environments, particularly in the open ocean. However, in coastal waters phosphorus is often the rate-limiting nutrient.”

“The overall pattern of primary production in the Global Ocean depends greatly on latitude […] In polar oceans primary production is a boom-and-bust affair driven by light availability. Here the oceans are well mixed throughout the year so nutrients are rarely limiting. However, during the polar winter there is no light, and thus no primary production is taking place. […] Although limited to a short seasonal pulse, the total amount of primary production can be quite high, especially in the polar Southern Ocean […] In tropical open oceans, primary production occurs at a low level throughout the year. Here light is never limiting but the permanent tropical thermocline prevents the mixing of deep, nutrient-rich seawater with the surface waters. […] open-ocean tropical waters are often referred to as ‘marine deserts’, with productivity […] comparable to a terrestrial desert. In temperate open-ocean regions, primary productivity is linked closely to seasonal events. […] Although occurring in a number of pulses, primary productivity in temperate oceans [is] similar to [that of] a temperate forest or grassland. […] Some of the most productive marine environments occur in coastal ocean above the continental shelves. This is the result of a phenomenon known as coastal upwelling which brings deep, cold, nutrient-rich seawater to the ocean surface, creating ideal conditions for primary productivity […], comparable to a terrestrial rainforest or cultivated farmland. These hotspots of marine productivity are created by wind acting in concert with the planet’s rotation. […] Coastal upwelling can occur when prevailing winds move in a direction roughly parallel to the edge of a continent so as to create offshore Ekman transport. Coastal upwelling is particularly prevalent along the west coasts of continents. […] Since coastal upwelling is dependent on favourable winds, it tends to be a seasonal or intermittent phenomenon and the strength of upwelling will depend on the strength of the winds. […] Important coastal upwelling zones around the world include the coasts of California, Oregon, northwest Africa, and western India in the northern hemisphere; and the coasts of Chile, Peru, and southwest Africa in the southern hemisphere. These regions are amongst the most productive marine ecosystems on the planet.”

“Considering the Global Ocean as a whole, it is estimated that total marine primary production is about 50 billion tonnes of carbon per year. In comparison, the total production of land plants, which can also be estimated using satellite data, is estimated at around 52 billion tonnes per year. […] Primary production in the oceans is spread out over a much larger surface area and so the average productivity per unit of surface area is much smaller than on land. […] the energy of primary production in the oceans flows to higher trophic levels through several different pathways of various lengths […]. Some energy is lost along each step of the pathway — on average the efficiency of energy transfer from one trophic level to the next is about 10 per cent. Hence, shorter pathways are more efficient. Via these pathways, energy ultimately gets transferred to large marine consumers such as large fish, marine mammals, marine turtles, and seabirds.”
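
The roughly 10 per cent transfer efficiency mentioned in that passage is what makes pathway length matter so much; a quick calculation shows how fast the available energy shrinks with each extra trophic step.

```python
# Fraction of primary production reaching a consumer after n trophic transfers,
# assuming ~10% efficiency per step as quoted above.
transfer_efficiency = 0.10
for steps in (1, 2, 3, 4, 5):
    fraction = transfer_efficiency ** steps
    print(f"{steps} trophic steps: {fraction:.5f} of primary production remains")
# Each additional step cuts the energy reaching the top consumer by another
# factor of ten, which is why shorter pathways are so much more efficient.
```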

“…it has been estimated that in the 17th century, somewhere between fifty million and a hundred million green turtles inhabited the Caribbean Sea, but numbers are now down to about 300,000. Since their numbers are now so low, their impact on seagrass communities is currently small, but in the past, green turtles would have been extraordinarily abundant grazers of seagrasses. It appears that in the past, green turtles thinned out seagrass beds, thereby reducing direct competition among different species of seagrass and allowing several species of seagrass to coexist. Without green turtles in the system, seagrass beds are generally overgrown monocultures of one dominant species. […] Seagrasses are of considerable importance to human society. […] It is therefore of great concern that seagrass meadows are in serious decline globally. In 2003 it was estimated that 15 per cent of the planet’s existing seagrass beds had disappeared in the preceding ten years. Much of this is the result of increasing levels of coastal development and dredging of the seabed, activities which release excessive amounts of sediment into coastal waters which smother seagrasses. […] The number of marine dead zones in the Global Ocean has roughly doubled every decade since the 1960s”.

“Sea ice is habitable because, unlike solid freshwater ice, it is a very porous substance. As sea ice forms, tiny spaces between the ice crystals become filled with a highly saline brine solution resistant to freezing. Through this process a three-dimensional network of brine channels and spaces, ranging from microscopic to several centimetres in size, is created within the sea ice. These channels are physically connected to the seawater beneath the ice and become colonized by a great variety of marine organisms. A significant amount of the primary production in the Arctic Ocean, perhaps up to 50 per cent in those areas permanently covered by sea ice, takes place in the ice. […] Large numbers of zooplanktonic organisms […] swarm about on the under surface of the ice, grazing on the ice community at the ice-seawater interface, and sheltering in the brine channels. […] These under-ice organisms provide the link to higher trophic levels in the Arctic food web […] They are an important food source for fish such as Arctic cod and glacial cod that graze along the bottom of the ice. These fish are in turn fed on by squid, seals, and whales.”

“[T]he Antarctic marine system consists of a ring of ocean about 10° of latitude wide – roughly 1,000 km. […] The Arctic and Antarctic marine systems can be considered geographic opposites. In contrast to the largely landlocked Arctic Ocean, the Southern Ocean surrounds the Antarctic continental land mass and is in open contact with the Atlantic, Indian, and Pacific Oceans. Whereas the Arctic Ocean is strongly influenced by river inputs, the Antarctic continent has no rivers, and so hard-bottomed seabed is common in the Southern Ocean, and there is no low-saline surface layer, as in the Arctic Ocean. Also, in contrast to the Arctic Ocean with its shallow, broad continental shelves, the Antarctic continental shelf is very narrow and steep. […] Antarctic waters are extremely nutrient rich, fertilized by a permanent upwelling of seawater that has its origins at the other end of the planet. […] This continuous upwelling of cold, nutrient-rich seawater, in combination with the long Antarctic summer day length, creates ideal conditions for phytoplankton growth, which drives the productivity of the Antarctic marine system. As in the Arctic, a well-developed sea-ice community is present. Antarctic ice algae are even more abundant and productive than in the Arctic Ocean because the sea ice is thinner, and there is thus more available light for photosynthesis. […] Antarctica’s most important marine species [is] the Antarctic krill […] Krill are very adept at surviving many months under starvation conditions — in the laboratory they can endure more than 200 days without food. During the winter months they lower their metabolic rate, shrink in body size, and revert back to a juvenile state. When food once again becomes abundant in the spring, they grow rapidly […] As the sea ice breaks up they leave the ice and begin feeding directly on the huge blooms of free-living diatoms […]. With so much food available they grow and reproduce quickly, and start to swarm in large numbers, often at densities in excess of 10,000 individuals per cubic metre — dense enough to colour the seawater a reddish-brown. Krill swarms are patchy and vary greatly in size […] Because the Antarctic marine system covers a large area, krill numbers are enormous, estimated at about 600 billion animals on average, or 500 million tonnes of krill. This makes Antarctic krill one of the most abundant animal species on the planet […] Antarctic krill are the main food source for many of Antarctica’s large marine animals, and a key link in a very short and efficient food chain […]. Krill comprise the staple diet of icefish, squid, baleen whales, leopard seals, fur seals, crabeater seals, penguins, and seabirds, including albatross. Thus, a very simple and efficient three-step food chain is in operation — diatoms eaten by krill in turn eaten by a suite of large consumers — which supports the large numbers of large marine animals living in the Southern Ocean.”

Links:

Ocean gyre. North Atlantic Gyre. Thermohaline circulation. North Atlantic Deep Water. Antarctic bottom water.
Cyanobacteria. Diatom. Dinoflagellate. Coccolithophore.
Trophic level.
Nitrogen fixation.
High-nutrient, low-chlorophyll regions.
Light and dark bottle method of measuring primary productivity. Carbon-14 method for estimating primary productivity.
Ekman spiral.
Peruvian anchoveta.
El Niño. El Niño–Southern Oscillation.
Copepod.
Dissolved organic carbon. Particulate organic matter. Microbial loop.
Kelp forest. Macrocystis. Sea urchin. Urchin barren. Sea otter.
Seagrass.
Green sea turtle.
Manatee.
Demersal fish.
Eutrophication. Harmful algal bloom.
Comb jelly. Asterias amurensis.
Great Pacific garbage patch.
Eelpout. Sculpin.
Polynya.
Crabeater seal.
Adélie penguin.
Anchor ice mortality.

March 13, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Geology, Zoology | Leave a comment

Quotes

i. “One ground for suspicion of apparently sincere moral convictions is their link with some special interest of those who hold them. The questions cui bono and cui malo are appropriate questions to raise when we are searching for possible contaminants of conscience. Entrenched privilege, and fear of losing it, distorts one’s moral sense.” (Annette Baier)

ii. “Most people do not listen with the intent to understand; they listen with the intent to reply.” (Stephen Covey)

iii. “Plastic surgery is a way for people to buy themselves a few years before they have to truly confront what ageing is, which of course is not that your looks are falling apart, but that you are falling apart and some-day you will have fallen apart and ceased to exist.” (Nora Ephron)

iv. “Just because you know a thing is true in theory, doesn’t make it true in fact. The barbaric religions of primitive worlds hold not a germ of scientific fact, though they claim to explain all. Yet if one of these savages has all the logical ground for his beliefs taken away — he doesn’t stop believing. He then calls his mistaken beliefs ‘faith’ because he knows they are right. And he knows they are right because he has faith. This is an unbreakable circle of false logic that can’t be touched. In reality, it is plain mental inertia.” (Harry Harrison)

v. “A taste is almost defined as a preference about which you do not argue — de gustibus non est disputandum. A taste about which you argue, with others or yourself, ceases ipso facto being a taste – it turns into a value.” (Albert O. Hirschman)

vi. “I will be ashamed the day I feel I should knuckle under to social-political pressures about issues and research I think are important for the advance of scientific knowledge.” (Arthur Jensen)

vii. “My theory is that we are all idiots. The people who don’t think they’re idiots — they’re the ones that are dangerous.” (Eric Sykes)

viii. “What you get by achieving your goals is not as important as what you become by achieving your goals.” (Zig Ziglar)

ix. “If you go looking for a friend, you’re going to find they’re very scarce. If you go out to be a friend, you’ll find them everywhere.” (-ll-)

x. “The rights of individuals to the use of resources (i.e., property rights) in any society are to be construed as supported by the force of etiquette, social custom, ostracism, and formal legally enacted laws supported by the states’ power of violence of punishment. Many of the constraints on the use of what we call private property involve the force of etiquette and social ostracism. The level of noise, the kind of clothes we wear, our intrusion on other people’s privacy are restricted not merely by laws backed by police force, but by social acceptance, reciprocity, and voluntary social ostracism for violators of accepted codes of conduct.” (Armen Alchian)

xi. “Whenever undiscussables exist, their existence is also undiscussable. Moreover, both are covered up, because rules that make important issues undiscussables violate espoused norms…” (Chris Argyris)

xii. “Experience can be merely the repetition of […the? – US] same error often enough.” (John Azzopardi)

xiii. “Empathize with stupidity and you’re halfway to thinking like an idiot.” (Iain Banks)

xiv. “A man in daily muddy contact with field experiments could not be expected to have much faith in any direct assumption of independently distributed normal errors.” (George E. P. Box)

xv. “There is nothing that makes the mind more elastic and expandable than discovering how the world works.” (Edgar Bronfman, Sr.)

xvi. “I don’t give advice. I can’t tell anybody what to do. Instead I say this is what we know about this problem at this time. And here are the consequences of these actions.” (Joyce Diane Brothers)

xvii. “Don’t fool yourself that you are going to have it all. You are not. Psychologically, having it all is not even a valid concept. The marvelous thing about human beings is that we are perpetually reaching for the stars. The more we have, the more we want. And for this reason, we never have it all.” (-ll-)

xviii. “We control fifty percent of a relationship. We influence one hundred percent of it.” (-ll-)

xix. “Being taken for granted can be a compliment. It means that you’ve become a comfortable, trusted element in another person’s life.” (-ll-)

xx. “The world at large does not judge us by who we are and what we know; it judges us by what we have.” (-ll-)

March 5, 2018 Posted by | Quotes/aphorisms | Leave a comment

Words

The words included in this post are words which I encountered while reading Patrick O’Brian’s books Post Captain and HMS Surprise. As was also the case last time around, I had to include ~100 words, rather than the ~80 I have come to consider the standard for these posts, in order to cover all the words of interest I encountered in the books.

Mésalliance. Mansuetude. Wen. Raffish. Stave. Gorse. Lurcher. Improvidence/improvident. Sough. Bowse. Mump. Jib. Tipstaff. Squalid. Strum. Hussif. Dowdy. Cognoscent. Footpad. Quire.

Vacillation. Wantonness. Escritoire/scrutoire. Mantua. Shindy. Vinous. Top-hamper. Holystone. Keelson. Bollard/bitts. Wicket. Paling. Brace (sailing). Coxcomb. Foin. Stern chaser. Galliot. Postillion. Coot. Fanfaronade.

Malversation. Arenaceous. Tope. Shebeen. Lithotomy. Quoin/coign. Mange. Curricle. Cockade. Spout. Bistoury. Embrasure. Acushla. Circumambulation. Glabrous. Impressment. Transpierce. Dilatoriness. Conglobate. Murrain.

Anfractuous/anfractuosity. Conversible. Tunny. Weevil. Posset. Sponging-house. Salmagundi. Hugger-mugger. Euphroe. Jobbery. Dun. Privity. Intension. Shaddock. Catharpin. Peccary. Tarpaulin. Frap. Bombinate. Spirketing.

Glacis. Gymnosophist. Fibula. Dreary. Barouche. Syce. Carmine. Lustration. Rood. Timoneer. Crosstrees. Luff. Mangosteen. Mephitic. Superfetation. Pledget. Innominate. Jibboom. Pilau. Ataraxy.

February 27, 2018 Posted by | Books, Language | Leave a comment

The Ice Age (II)

I really liked the book; it’s recommended if you’re at all interested in this kind of stuff. Below are some observations from the book’s second half, along with some related links:

“Charles MacLaren, writing in 1842, […] argued that the formation of large ice sheets would result in a fall in sea level as water was taken from the oceans and stored frozen on the land. This insight triggered a new branch of ice age research – sea level change. This topic can get rather complicated because as ice sheets grow, global sea level falls. This is known as eustatic sea level change. As ice sheets increase in size, their weight depresses the crust and relative sea level will rise. This is known as isostatic sea level change. […] It is often quite tricky to differentiate between regional-scale isostatic factors and the global-scale eustatic sea level control.”

“By the late 1870s […] glacial geology had become a serious scholarly pursuit with a rapidly growing literature. […] [In the late 1880s] Carvill Lewis […] put forward the radical suggestion that the [sea] shells at Moel Tryfan and other elevated localities (which provided the most important evidence for the great marine submergence of Britain) were not in situ. Building on the earlier suggestions of Thomas Belt (1832–78) and James Croll, he argued that these materials had been dredged from the sea bed by glacial ice and pushed upslope so that ‘they afford no testimony to the former subsidence of the land’. Together, his recognition of terminal moraines and the reworking of marine shells undermined the key pillars of Lyell’s great marine submergence. This was a crucial step in establishing the primacy of glacial ice over icebergs in the deposition of the drift in Britain. […] By the end of the 1880s, it was the glacial dissenters who formed the eccentric minority. […] In the period leading up to World War One, there was [instead] much debate about whether the ice age involved a single phase of ice sheet growth and freezing climate (the monoglacial theory) or several phases of ice sheet build up and decay separated by warm interglacials (the polyglacial theory).”

“As the Earth rotates about its axis travelling through space in its orbit around the Sun, there are three components that change over time in elegant cycles that are entirely predictable. These are known as eccentricity, precession, and obliquity or ‘stretch, wobble, and roll’ […]. These orbital perturbations are caused by the gravitational pull of the other planets in our Solar System, especially Jupiter. Milankovitch calculated how each of these orbital cycles influenced the amount of solar radiation received at different latitudes over time. These are known as Milankovitch Cycles or Croll–Milankovitch Cycles to reflect the important contribution made by both men. […] The shape of the Earth’s orbit around the Sun is not constant. It changes from an almost circular orbit to one that is mildly elliptical (a slightly stretched circle) […]. This orbital eccentricity operates over a 400,000- and 100,000-year cycle. […] Changes in eccentricity have a relatively minor influence on the total amount of solar radiation reaching the Earth, but they are important for the climate system because they modulate the influence of the precession cycle […]. When eccentricity is high, for example, axial precession has a greater impact on seasonality. […] The Earth is currently tilted at an angle of 23.4° to the plane of its orbit around the Sun. Astronomers refer to this axial tilt as obliquity. This angle is not fixed. It rolls back and forth over a 41,000-year cycle from a tilt of 22.1° to 24.5° and back again […]. Even small changes in tilt can modify the strength of the seasons. With a greater angle of tilt, for example, we can have hotter summers and colder winters. […] Cooler, reduced insolation summers are thought to be a key factor in the initiation of ice sheet growth in the middle and high latitudes because they allow more snow to survive the summer melt season. Slightly warmer winters may also favour ice sheet build-up as greater evaporation from a warmer ocean will increase snowfall over the centres of ice sheet growth. […] The Earth’s axis of rotation is not fixed. It wobbles like a spinning top slowing down. This wobble traces a circle on the celestial sphere […]. At present the Earth’s rotational axis points toward Polaris (the current northern pole star) but in 11,000 years it will point towards another star, Vega. This slow circling motion is known as axial precession and it has important impacts on the Earth’s climate by causing the solstices and equinoxes to move around the Earth’s orbit. In other words, the seasons shift over time. Precession operates over a 19,000- and 23,000-year cycle. This cycle is often referred to as the Precession of the Equinoxes.”
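The way these three cycles combine is easier to appreciate if you superpose them. Below is a purely schematic Python sketch of that superposition – only the cycle lengths quoted above are taken from the book, while the amplitudes (and the way the terms are combined) are arbitrary illustrative choices; this is not the actual insolation calculation, which requires the full orbital solutions:

```python
import numpy as np

# Purely schematic: superpose sinusoids with the cycle lengths quoted above.
# Amplitudes and units are arbitrary; this is not a real insolation model.
t = np.arange(0, 800_000, 1_000)  # time in years, 1 kyr steps

# Eccentricity varies over ~400 kyr and ~100 kyr cycles (scaled here to 0..1).
eccentricity = 0.5 * (1 + 0.5 * np.cos(2 * np.pi * t / 400_000)
                        + 0.5 * np.cos(2 * np.pi * t / 100_000))
# Obliquity (axial tilt) varies over a ~41 kyr cycle.
obliquity = np.cos(2 * np.pi * t / 41_000)
# Axial precession varies over ~19 and ~23 kyr cycles.
precession = 0.5 * np.cos(2 * np.pi * t / 23_000) + 0.5 * np.cos(2 * np.pi * t / 19_000)

# As the passage notes, eccentricity mainly matters because it modulates the
# influence of precession on seasonality, so the two enter multiplicatively here.
schematic_forcing = obliquity + eccentricity * precession
```

Plotting schematic_forcing against t gives the kind of quasi-periodic curve that turns up in discussions of orbital forcing, with the shorter precession and obliquity cycles riding on the slower eccentricity modulation.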

“The albedo of a surface is a measure of its ability to reflect solar energy. Darker surfaces tend to absorb most of the incoming solar energy and have low albedos. The albedo of the ocean surface in high latitudes is commonly about 10 per cent — in other words, it absorbs 90 per cent of the incoming solar radiation. In contrast, snow, glacial ice, and sea ice have much higher albedos and can reflect between 50 and 90 per cent of incoming solar energy back into the atmosphere. The elevated albedos of bright frozen surfaces are a key feature of the polar radiation budget. Albedo feedback loops are important over a range of spatial and temporal scales. A cooling climate will increase snow cover on land and the extent of sea ice in the oceans. These high albedo surfaces will then reflect more solar radiation to intensify and sustain the cooling trend, resulting in even more snow and sea ice. This positive feedback can play a major role in the expansion of snow and ice cover and in the initiation of a glacial phase. Such positive feedbacks can also work in reverse when a warming phase melts ice and snow to reveal dark and low albedo surfaces such as peaty soil or bedrock.”
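The amplifying effect of a positive feedback of this kind is easy to illustrate with a toy calculation. The numbers below are made up purely for illustration; the point is just that when a fraction g of each cooling increment is fed back (via extra snow and ice reflecting more sunlight), an initial cooling ends up amplified by a factor of roughly 1/(1 - g):

```python
# Toy positive-feedback loop (all numbers invented for illustration).
initial_cooling = -0.5   # degrees C, arbitrary initial perturbation
gain = 0.4               # fraction of each increment returned via the albedo feedback
total = 0.0
increment = initial_cooling
for _ in range(30):
    total += increment
    increment *= gain    # each round of extra snow/ice reflects a bit more sunlight

print(round(total, 3))   # approaches initial_cooling / (1 - gain) = -0.833
```

The same geometric-series logic, run with a warming perturbation instead, illustrates the reverse case described at the end of the passage.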

“At the end of the Cretaceous, around 65 million years ago (Ma), lush forests thrived in the Polar Regions and ocean temperatures were much warmer than today. This warm phase continued for the next 10 million years, peaking during the Eocene thermal maximum […]. From that time onwards, however, Earth’s climate began a steady cooling that saw the initiation of widespread glacial conditions, first in Antarctica between 40 and 30 Ma, in Greenland between 20 and 15 Ma, and then in the middle latitudes of the northern hemisphere around 2.5 Ma. […] Over the past 55 million years, a succession of processes driven by tectonics combined to cool our planet. It is difficult to isolate their individual contributions or to be sure about the details of cause and effect over this long period, especially when there are uncertainties in dating and when one considers the complexity of the climate system with its web of internal feedbacks.” [Potential causes which have been highlighted include: The uplift of the Himalayas (leading to increased weathering, leading over geological time to an increased amount of CO2 being sequestered in calcium carbonate deposited on the ocean floor, lowering atmospheric CO2 levels), the isolation of Antarctica which created the Antarctic Circumpolar Current (leading to a cooling of Antarctica), the dry-out of the Mediterranean Sea ~5mya (which significantly lowered salt concentrations in the World Ocean, meaning that sea water froze at a higher temperature), and the formation of the Isthmus of Panama. – US].

“[F]or most of the last 1 million years, large ice sheets were present in the middle latitudes of the northern hemisphere and sea levels were lower than today. Indeed, ‘average conditions’ for the Quaternary Period involve much more ice than present. The interglacial peaks — such as the present Holocene interglacial, with its ice volume minima and high sea level — are the exception rather than the norm. The sea level maximum of the Last Interglacial (MIS 5) is higher than today. It also shows that cold glacial stages (c.80,000 years duration) are much longer than interglacials (c.15,000 years). […] Arctic willow […], the northernmost woody plant on Earth, is found in central European pollen records from the last glacial stage. […] For most of the Quaternary deciduous forests have been absent from most of Europe. […] the interglacial forests of temperate Europe that are so familiar to us today are, in fact, rather atypical when we consider the long view of Quaternary time. Furthermore, if the last glacial period is representative of earlier ones, for much of the Quaternary terrestrial ecosystems were continuously adjusting to a shifting climate.”

“Greenland ice cores typically have very clear banding […] that corresponds to individual years of snow accumulation. This is because the snow that falls in summer under the permanent Arctic sun differs in texture to the snow that falls in winter. The distinctive paired layers can be counted like tree rings to produce a finely resolved chronology with annual and even seasonal resolution. […] Ice accumulation is generally much slower in Antarctica, so the ice core record takes us much further back in time. […] As layers of snow become compacted into ice, air bubbles recording the composition of the atmosphere are sealed in discrete layers. This fossil air can be recovered to establish the changing concentration of greenhouse gases such as carbon dioxide (CO2) and methane (CH4). The ice core record therefore allows climate scientists to explore the processes involved in climate variability over very long timescales. […] By sampling each layer of ice and measuring its oxygen isotope composition, Dansgaard produced an annual record of air temperature for the last 100,000 years. […] Perhaps the most startling outcome of this work was the demonstration that global climate could change extremely rapidly. Dansgaard showed that dramatic shifts in mean air temperature (>10°C) had taken place in less than a decade. These findings were greeted with scepticism and there was much debate about the integrity of the Greenland record, but subsequent work from other drilling sites vindicated all of Dansgaard’s findings. […] The ice core records from Greenland reveal a remarkable sequence of abrupt warming and cooling cycles within the last glacial stage. These are known as Dansgaard–Oeschger (D–O) cycles. […] [A] series of D–O cycles between 65,000 and 10,000 years ago [caused] mean annual air temperatures on the Greenland ice sheet [to be] shifted by as much as 10°C. Twenty-five of these rapid warming events have been identified during the last glacial period. This discovery dispelled the long held notion that glacials were lengthy periods of stable and unremitting cold climate. The ice core record shows very clearly that even the glacial climate flipped back and forth. […] D–O cycles commence with a very rapid warming (between 5 and 10°C) over Greenland followed by a steady cooling […] Deglaciations are rapid because positive feedbacks speed up both the warming trend and ice sheet decay. […] The ice core records heralded a new era in climate science: the study of abrupt climate change. Most sedimentary records of ice age climate change yield relatively low resolution information — a thousand years may be packed into a few centimetres of marine or lake sediment. In contrast, ice cores cover every year. They also retain a greater variety of information about the ice age past than any other archive. We can even detect layers of volcanic ash in the ice and pinpoint the date of ancient eruptions.”
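A small note on notation, since δ18O does a lot of work in this part of the book (and in the links below): in the standard definition it is the deviation of the heavy-to-light oxygen isotope ratio of a sample from that of a reference standard, expressed in parts per thousand (per mil, ‰):

```latex
% Standard definition of the delta-18-O notation (result in per mil):
\delta^{18}\mathrm{O}
  = \left(
      \frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}
           {\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}}
      - 1
    \right) \times 1000
```

Roughly speaking, lower (more negative) δ18O values in glacial ice correspond to colder conditions at the time the snow fell, which is what allows the isotope measurements to be read as a temperature record.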

“There are strong thermal gradients in both hemispheres because the low latitudes receive the most solar energy and the poles the least. To redress these imbalances the atmosphere and oceans move heat polewards — this is the basis of the climate system. In the North Atlantic a powerful surface current takes warmth from the tropics to higher latitudes: this is the famous Gulf Stream and its northeastern extension the North Atlantic Drift. Two main forces drive this current: the strong southwesterly winds and the return flow of colder, saltier water known as North Atlantic Deep Water (NADW). The surface current loses much of its heat to air masses that give maritime Europe a moist, temperate climate. Evaporative cooling also increases its salinity so that it begins to sink. As the dense and cold water sinks to the deep ocean to form NADW, it exerts a strong pull on the surface currents to maintain the cycle. It returns south at depths >2,000 m. […] The thermohaline circulation in the North Atlantic was periodically interrupted during Heinrich Events when vast discharges of melting icebergs cooled the ocean surface and reduced its salinity. This shut down the formation of NADW and suppressed the Gulf Stream.”

Links:

Archibald Geikie.
Andrew Ramsay (geologist).
Albrecht Penck. Eduard Brückner. Gunz glaciation. Mindel glaciation. Riss glaciation. Würm.
Insolation.
Perihelion and aphelion.
Deep Sea Drilling Project.
Foraminifera.
δ18O. Isotope fractionation.
Marine isotope stage.
Cesare Emiliani.
Nicholas Shackleton.
Brunhes–Matuyama reversal. Geomagnetic reversal. Magnetostratigraphy.
Climate: Long range Investigation, Mapping, and Prediction (CLIMAP).
Uranium–thorium dating. Luminescence dating. Optically stimulated luminescence. Cosmogenic isotope dating.
The role of orbital forcing in the Early-Middle Pleistocene Transition (paper).
European Project for Ice Coring in Antarctica (EPICA).
Younger Dryas.
Lake Agassiz.
Greenland ice core project (GRIP).
J Harlen Bretz. Missoula Floods.
Pleistocene megafauna.

February 25, 2018 Posted by | Astronomy, Engineering, Geology, History, Paleontology, Physics | Leave a comment

Sieve methods: what are they, and what are they good for?

Given the nature of the lecture it was difficult to come up with relevant links to include in this post, but the links below seemed relevant enough to include here:

Sieve theory.
Inclusion–exclusion principle.
Fundamental lemma of sieve theory.
Parity problem (sieve theory).
Viggo Brun (the lecturer mentions along the way that many of the things he talks about in this lecture are things this guy figured out, but the wiki article is unfortunately very short).

As he notes early on, when working with sieves we’re: “*Interested in objects which are output of some inclusion-exclusion process & *Rather than counting precisely, we want to gain good bounds, but work flexibly.”

‘Counting’ should probably be interpreted loosely here; sieves are mostly used in number theory, but, as Maynard mentions, similar methods can presumably be applied in other mathematical contexts – hence the deliberate use of the word ‘objects’. The aim is to ascertain some properties of the objects/sets in question without necessarily imposing much structure (‘are we within the right order of magnitude?’ rather than ‘did we get them all?’). The basic idea behind restricting the amount of structure imposed is, as far as I gathered from the lecture, to make the problem you’re faced with more tractable. A concrete toy example of the underlying inclusion-exclusion counting is sketched below.
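To make the inclusion-exclusion idea concrete, here is a minimal Python sketch of the simplest sieve of this kind, Legendre’s sieve, which counts the integers up to x that survive ‘sifting’ by a given set of small primes. The function and the example are mine, not from the lecture:

```python
from itertools import combinations
from math import prod

def legendre_count(x, primes):
    """Count integers in [1, x] divisible by none of the given primes,
    via inclusion-exclusion over all subsets of the sifting primes."""
    total = 0
    for r in range(len(primes) + 1):
        for subset in combinations(primes, r):
            total += (-1) ** r * (x // prod(subset))
    return total

# Integers up to 100 coprime to 2, 3, 5 and 7: these are exactly 1 and the
# 21 primes between 10 and 100, so the count should be 22.
print(legendre_count(100, [2, 3, 5, 7]))  # -> 22
```

With many sifting primes the number of subsets explodes and the error terms from all the floor functions pile up, which is one reason modern sieve methods introduce weights and settle for good upper and lower bounds rather than exact counts – precisely the trade-off described above.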

February 24, 2018 Posted by | Lectures, Mathematics | Leave a comment

The Ice Age (I)

I’m currently reading this book. Some observations and links related to the first half of the book below:

“It is important to appreciate from the outset that the Quaternary ice age was not one long episode of unremitting cold climate. […] By exploring the landforms, sediments, and fossils of the Quaternary Period we can identify glacials: periods of severe cold climate when great ice sheets formed in the high middle latitudes of the northern hemisphere and glaciers and ice caps advanced in mountain regions around the world. We can also recognize periods of warm climate known as interglacials when mean air temperatures in the middle latitudes were comparable to, and sometimes higher than, those of the present. As the climate shifted from glacial to interglacial mode, the large ice sheets of Eurasia and North America retreated allowing forest biomes to re-colonize the ice free landscapes. It is also important to recognize that the ice age isn’t just about advancing and retreating ice sheets. Major environmental changes also took place in the Mediterranean region and in the tropics. The Sahara, for example, became drier, cooler, and dustier during glacial periods yet early in the present interglacial it was a mosaic of lakes and oases with tracts of lush vegetation. A defining feature of the Quaternary Period is the repeated fluctuation in climate as conditions shifted from glacial to interglacial, and back again, during the course of the last 2.5 million years or so. A key question in ice age research is why does the Earth’s climate system shift so dramatically and so frequently?”

“Today we have large ice masses in the Polar Regions, but a defining feature of the Quaternary is the build-up and decay of continental-scale ice sheets in the high middle latitudes of the northern hemisphere. […] the Laurentide and Cordilleran ice sheets […] covered most of Canada and large tracts of the northern USA during glacial stages. Around 22,000 years ago, when the Laurentide ice sheet reached its maximum extent during the most recent glacial stage, it was considerably larger in both surface area and volume (34.8 million km³) than the present-day East and West Antarctic ice sheets combined (27 million km³). With a major ice dome centred on Hudson Bay greater than 4 km thick, it formed the largest body of ice on Earth. This great mass of ice depressed the crust beneath its bed by many hundreds of metres. Now shed of this burden, the crust is still slowly recovering today at rates of up to 1 cm per year. Glacial ice extended out beyond the 38th parallel across the lowland regions of North America. Chicago, Boston, and New York all lie on thick glacial deposits left by the Laurentide ice sheet. […] With huge volumes of water locked up in the ice sheets, global sea level was about 120 m lower than present at the Last Glacial Maximum (LGM), exposing large expanses of continental shelf and creating land bridges that allowed humans, animals, and plants to move between continents. Migration from eastern Russia to Alaska, for example, was possible via the Bering land bridge.”

“Large ice sheets also developed in Europe. […] The British Isles lie in an especially sensitive location on the Atlantic fringe of Europe between latitudes 50 and 60° north. Because of this geography, the Quaternary deposits of Britain record especially dramatic shifts in environmental conditions. The most extensive glaciation saw ice sheets extend as far south as the Thames Valley with wide braided rivers charged with meltwater and sediment from the ice margin. Beyond the glacial ice much of southern Britain would have been a treeless, tundra steppe environment with tracts of permanently frozen ground […]. At the LGM […] [t]he Baltic and North Seas were dry land and Britain was connected to mainland Europe. Beyond the British and Scandinavian ice sheets, much of central and northern Europe was a treeless tundra steppe habitat. […] During warm interglacial stages […] [b]road-leaved deciduous woodland with grassland was the dominant vegetation […]. In the warmest parts of interglacials thermophilous […] insects from the Mediterranean were common in Britain whilst the large mammal fauna of the Last Interglacial (c.130,000 to 115,000 years ago) included even more exotic species such as the short tusked elephant, rhinoceros, and hippopotamus. In some interglacials, the rivers of southern Britain contained molluscs that now live in the Nile Valley. For much of the Quaternary, however, climate would have been in an intermediate state (either warming or cooling) between these glacial and interglacial extremes.”

“Glaciologists make a distinction between three main types of glacier (valley glaciers, ice caps, and ice sheets) on the basis of scale and topographic setting. A glacier is normally constrained by the surrounding topography such as a valley and has a clearly defined source area. An ice cap builds up as a dome-like form on a high plateau or mountain peak and may feed several outlet glaciers to valleys below. Ice sheets notionally exceed 50,000 km² and are not constrained by topography.”

“We live in unusual times. For more than 90 per cent of its 4.6-billion-year history, Earth has been too warm — even at the poles — for ice sheets to form. Ice ages are not the norm for our planet. Periods of sustained (over several million years) large-scale glaciation can be called glacial epochs. Tillites in the geological record tell us that the Quaternary ice age is just one of at least six great glacial epochs that have taken place over the last three billion years or so […]. The Quaternary itself is the culmination of a much longer glacial epoch that began around 35 million years ago (Ma) when glaciers and ice sheets first formed in Antarctica. This is known as the Cenozoic glacial epoch. There is still much to learn about these ancient glacial epochs, especially the so-called Snowball Earth states of the Precambrian (before 542 Ma) when the boundary conditions for the global climate system were so different to those of today. […] This book is concerned with the Quaternary ice age – it has the richest and most varied records of environmental change. Because its sediments are so recent they have not been subjected to millions of years of erosion or deep burial and metamorphism. […] in aquatic settings, such as lakes and peat bogs, organic materials such as insects, leaves, and seeds, as well as microfossils such as pollen and fungal spores can be exceptionally well preserved in the fossil record. This allows us to create very detailed pictures of past ecosystems under glacial and interglacial conditions. This field of research is known as Quaternary palaeoecology.”

“An erratic […] is a piece of rock that has been transported from its place of origin. […] Many erratics stand out because they lie on bedrock that is very different to their source. […] Erratics are normally associated with transport by glaciers or ice sheets, but in the early 19th century mechanisms such as the great deluge or rafting on icebergs were commonly invoked. […] Enormous erratic boulders […] were well known to 18th- and 19th-century geologists. […] Their origin was a source of lively and protracted debate […] Early observers of Alpine glaciers had noted the presence of large boulders on the surface of active glaciers or forming part of the debris pile at the glacier snout. These were readily explainable, but erratic boulders had long been noted in locations that defied rational explanations. The erratics found at elevations far above their known sources, and in places such as Britain where glaciers were absent, were especially problematic for early students of landscape history. […] A huge deluge […] was commonly invoked to explain the disposition of such boulders and many saw them as more hard evidence in support of the Biblical flood. […] At this time, the Church of England held a strong influence over much of higher education and especially so in Cambridge and Oxford.”

Venetz [in the early 19th century] produced remarkably detailed topographic maps of lateral and terminal moraines that lay far down valley of the modern glaciers. He was able to show that many glaciers had advanced and retreated in the historical period. His was the first systematic analysis of climate-glacier-landscape interactions. […] In 1821, Venetz presented his findings to the Société Helvétiques des Sciences Naturelles, setting out Perraudin’s ideas alongside his own. The paper had little impact, however, and would not see publication until 1833. […] Jean de Charpentier [in his work] paid particular attention to the disposition of large erratic blocks and the occurrence of polished and striated bedrock surfaces in the deep valleys of western Switzerland. A major step forward was Charpentier’s recognition of a clear relationship between the elevation of the erratic blocks in the Rhône Valley and the vertical extent of glacially smoothed rock walls. He noted that the bedrock valley sides above the erratic blocks were not worn smooth because they must have been above the level of the ancient glacier surface. The rock walls below the erratics always bore the hallmarks of contact with glacial ice. We call this boundary the trimline. It is often clearly marked in hard bedrock because the texture of the valley sides above the glacier surface is fractured due to attack by frost weathering. The detachment of rock particles above the trimline adds debris to lateral moraines and the glacier surface. These insights allowed Charpentier to reconstruct the vertical extent of former glaciers for the first time. Venetz and Perraudin had already shown how to demarcate the length and width of glaciers using the terminal and lateral moraines in these valleys. Charpentier described some of the most striking erratic boulders in the Alps […]. As Charpentier mapped the giant erratics, polished bedrock surfaces, and moraines in the Rhône Valley, it became clear to him that the valley must once have been occupied by a truly enormous glacier or ‘glacier-monstre’ as he called it. […] In 1836, Charpentier published a key paper setting out the main findings of their [his and Venetz’] glacial work”.

“Even before Charpentier was thinking about large ice masses in Switzerland, Jens Esmark (1763-1839) […] had suggested that northern European glaciers had been much more extensive in the past and were responsible for the transport of large erratic boulders and the formation of moraines. Esmark also recognized the key role of deep bedrock erosion by glacial ice in the formation of the spectacular Norwegian fjords. He worked out that glaciers in Norway had once extended down to sea level. Esmark’s ideas were […] translated into English and published […] in 1826, a decade in advance of Charpentier’s paper. Esmark discussed a large body of evidence pointing to an extensive glaciation of northern Europe. […] his thinking was far in advance of his contemporaries […] Unfortunately, even Esmark’s carefully argued paper held little sway in Britain and elsewhere […] it would be many decades before there was general acceptance within the geological community that glaciers could spread out across low gradient landscapes. […] in the lecture theatres and academic societies of Paris, Berlin, and London, the geological establishment was slow to take up these ideas, even though they were published in both English and French and were widely available. Much of the debate in the 1820s and early 1830s centred on the controversy over the evolution of valleys between the fluvialists (Hutton, Playfair, and others), who advocated slow river erosion, and the diluvialists (Buckland, De la Beche, and others) who argued that big valleys and large boulders needed huge deluges. The role of glaciers in valley and fjord formation was not considered. […] The key elements of a glacial theory were in place but nobody was listening. […] It would be decades before a majority accepted that vast tracts of Eurasia and North America had once been covered by mighty ice sheets.”

“Most geologists in 1840 saw Agassiz’s great ice sheet as a retrograde step. It was just too catastrophist — a blatant violation of hard-won uniformitarian principles. It was the antithesis of the new rational geology and was not underpinned by carefully assembled field data. So, for many, as an explanation for the superficial deposits of the Quaternary, it was no more convincing than the deluge. […] Ancient climates were [also] supposed to be warmer not colder. The suggestion of a freezing glacial epoch in the recent geological past, followed by the temperate climate of the present, still jarred with the conventional wisdom that Earth history, from its juvenile molten state to the present, was an uninterrupted record of long-term cooling without abrupt change. Lyell’s drift ice theory [according to which erratics (and till) had been transported by icebergs drifting in water, instead of glaciers transporting the material over land – US] also provided an attractive alternative to Agassiz’s ice age because it did not demand a period of cold glacial climate in areas that now enjoy temperate conditions. […] If anything, the 1840 sessions at the Geological Society had galvanized support for floating ice as a mechanism for drift deposition in the lowlands. Lyell’s model proved to be remarkably resilient—its popularity proved to be the major obstacle to the wider adoption of the land ice theory. […] many refused to believe that glacier ice could advance across gently sloping lowland terrain. This was a reasonable objection at this time since the ice sheets of Greenland and Antarctica had not yet been investigated from a glaciological point of view. It is not difficult to understand why many British geologists rejected the glacial theory when the proximity and potency of the sea was so obvious and nobody knew how large ice sheets behaved.”

Hitchcock […] was one of the first Americans to publicly embrace Agassiz’s ideas […] but he later stepped back from a full endorsement, leaving a role for floating ice. This hesitant beginning set the tone for the next few decades in North America as its geologists began to debate whether they could see the work of ice sheets or icebergs. There was a particularly strong tradition of scriptural geology in 19th-century North America. Its practitioners attempted to reconcile their field observations with the Bible and there were often close links with like-minded souls in Britain. […] If the standing of Lyell extended the useful lifespan of the iceberg theory, it was gradually worn down by a growing body of field evidence from Europe and North America that pointed to the action of glacier ice. […] The continental glacial theory prevailed in North America because it provided a much better explanation for the vast majority of the features recorded in the landscape. The striking regularity and fixed alignment of many features could not be the work of icebergs whose wanderings were governed by winds and ocean currents. The southern limit of the glacial deposits is often marked by pronounced ridges in an otherwise low-relief landscape. These end moraines mark the edge of the former ice sheet and they cannot be formed by floating ice. It took a long time to put all the pieces of evidence together in North America because of the vast scale of the territory to be mapped. Once the patterns of erratic dispersal, large-scale scratching of bedrock, terminal moraines, drumlin fields, and other features were mapped, their systematic arrangement argued strongly against the agency of drifting ice. Unlike their counterparts in Britain, who were never very far from the sea, geologists working deep in the continental interior of North America found it much easier to dismiss the idea of a great marine submergence. Furthermore, icebergs just did not transport enough sediment to account for the enormous extent and great thickness of the Quaternary deposits. It was also realized that icebergs were just not capable of planing off hard bedrock to create plateau surfaces. Neither were they able to polish, scratch, or cut deep grooves into ancient bedrock. All these features pointed to the action of land-based glacial ice. Slowly, but surely, the reality of vast expanses of glacier ice covering much of Canada and the northern states of the USA became apparent.”

Links:

Quaternary.
The Parallel Roads of Glen Roy.
William Boyd Dawkins.
Adams mammoth.
Georges Cuvier.
Cryosphere.
Cirque (geology). Arête. Tarn. Moraine. Drumlin. Till/Tillite. Glacier morphology.
James Hutton.
William Buckland.
Diluvium.
Charles Lyell.
Giétro Glacier.
Cwm Idwal.
Timothy Abbott Conrad. Charles Whittlesey. James Dwight Dana.

February 23, 2018 Posted by | Books, Ecology, Geography, Geology, History, Paleontology | Leave a comment

A few (more) diabetes papers of interest

Earlier this week I covered a couple of papers, but the second paper turned out to include a lot of interesting stuff, so I decided to cut that post short and postpone my coverage of the remaining papers; this post covers some of those papers.

i. TCF7L2 Genetic Variants Contribute to Phenotypic Heterogeneity of Type 1 Diabetes.

“Although the autoimmune destruction of β-cells has a major role in the development of type 1 diabetes, there is growing evidence that the differences in clinical, metabolic, immunologic, and genetic characteristics among patients (1) likely reflect diverse etiology and pathogenesis (2). Factors that govern this heterogeneity are poorly understood, yet these may have important implications for prognosis, therapy, and prevention.

“The transcription factor 7 like 2 (TCF7L2) locus contains the single nucleotide polymorphism (SNP) most strongly associated with type 2 diabetes risk, with an ∼30% increase per risk allele (3). In a U.S. cohort, heterozygous and homozygous carriers of the at-risk alleles comprised 40.6% and 7.9%, respectively, of the control subjects and 44.3% and 18.3%, respectively, of the individuals with type 2 diabetes (3). The locus has no known association with type 1 diabetes overall (4–8), with conflicting reports in latent autoimmune diabetes in adults (8–16). […] Our studies in two separate cohorts have shown that the type 2 diabetes–associated TCF7L2 genetic variant is more frequent among specific subsets of individuals with autoimmune type 1 diabetes, specifically those with fewer markers of islet autoimmunity (22,23). These observations support a role of this genetic variant in the pathogenesis of diabetes at least in a subset of individuals with autoimmune diabetes. However, whether individuals with type 1 diabetes and this genetic variant have distinct metabolic abnormalities has not been investigated. We aimed to study the immunologic and metabolic characteristics of individuals with type 1 diabetes who carry a type 2 diabetes–associated allele of the TCF7L2 locus.”

“We studied 810 TrialNet participants with newly diagnosed type 1 diabetes and found that among individuals 12 years and older, the type 2 diabetes–associated TCF7L2 genetic variant is more frequent in those presenting with a single autoantibody than in participants who had multiple autoantibodies. These TCF7L2 variants were also associated with higher mean C-peptide AUC and lower mean glucose AUC levels at the onset of type 1 diabetes. […] These findings suggest that, besides the well-known link with type 2 diabetes, the TCF7L2 locus may play a role in the development of type 1 diabetes. The type 2 diabetes–associated TCF7L2 genetic variant identifies a subset of individuals with autoimmune type 1 diabetes and fewer markers of islet autoimmunity, lower glucose, and higher C-peptide at diagnosis. […] A possible interpretation of these data is that TCF7L2-encoded diabetogenic mechanisms may contribute to diabetes development in individuals with limited autoimmunity […]. Because the risk of progression to type 1 diabetes is lower in individuals with single compared with multiple autoantibodies, it is possible that in the absence of this type 2 diabetes–associated TCF7L2 variant, these individuals may have not manifested diabetes. If that is the case, we would postulate that disease development in these patients may have a type 2 diabetes–like pathogenesis in which islet autoimmunity is a significant component but not necessarily the primary driver.”

“The association between this genetic variant and single autoantibody positivity was present in individuals 12 years or older but not in children younger than 12 years. […] The results in the current study suggest that the type 2 diabetes–associated TCF7L2 genetic variant plays a larger role in older individuals. There is mounting evidence that the pathogenesis of type 1 diabetes varies by age (31). Younger individuals appear to have a more aggressive form of disease, with faster decline of β-cell function before and after onset of disease, higher frequency and severity of diabetic ketoacidosis, which is a clinical correlate of severe insulin deficiency, and lower C-peptide at presentation (31–35). Furthermore, older patients are less likely to have type 1 diabetes–associated HLA alleles and islet autoantibodies (28). […] Taken together, we have demonstrated that individuals with autoimmune type 1 diabetes who carry the type 2 diabetes–associated TCF7L2 genetic variant have a distinct phenotype characterized by milder immunologic and metabolic characteristics than noncarriers, closer to those of type 2 diabetes, with an important effect of age.”

ii. Heart Failure: The Most Important, Preventable, and Treatable Cardiovascular Complication of Type 2 Diabetes.

“Concerns about cardiovascular disease in type 2 diabetes have traditionally focused on atherosclerotic vasculo-occlusive events, such as myocardial infarction, stroke, and limb ischemia. However, one of the earliest, most common, and most serious cardiovascular disorders in patients with diabetes is heart failure (1). Following its onset, patients experience a striking deterioration in their clinical course, which is marked by frequent hospitalizations and eventually death. Many sudden deaths in diabetes are related to underlying ventricular dysfunction rather than a new ischemic event. […] Heart failure and diabetes are linked pathophysiologically. Type 2 diabetes and heart failure are each characterized by insulin resistance and are accompanied by the activation of neurohormonal systems (norepinephrine, angiotensin II, aldosterone, and neprilysin) (3). The two disorders overlap; diabetes is present in 35–45% of patients with chronic heart failure, whether they have a reduced or preserved ejection fraction.”

“Treatments that lower blood glucose do not exert any consistently favorable effect on the risk of heart failure in patients with diabetes (6). In contrast, treatments that increase insulin signaling are accompanied by an increased risk of heart failure. Insulin use is independently associated with an enhanced likelihood of heart failure (7). Thiazolidinediones promote insulin signaling and have increased the risk of heart failure in controlled clinical trials (6). With respect to incretin-based secretagogues, liraglutide increases the clinical instability of patients with existing heart failure (8,9), and the dipeptidyl peptidase 4 inhibitors saxagliptin and alogliptin are associated with an increased risk of heart failure in diabetes (10). The likelihood of heart failure with the use of sulfonylureas may be comparable to that with thiazolidinediones (11). Interestingly, the only two classes of drugs that ameliorate hyperinsulinemia (metformin and sodium–glucose cotransporter 2 inhibitors) are also the only two classes of antidiabetes drugs that appear to reduce the risk of heart failure and its adverse consequences (12,13). These findings are consistent with experimental evidence that insulin exerts adverse effects on the heart and kidneys that can contribute to heart failure (14). Therefore, physicians can prevent many cases of heart failure in type 2 diabetes by careful consideration of the choice of agents used to achieve glycemic control. Importantly, these decisions have an immediate effect; changes in risk are seen within the first few months of changes in treatment. This immediacy stands in contrast to the years of therapy required to see a benefit of antidiabetes drugs on microvascular risk.”

“As reported by van den Berge et al. (4), the prognosis of patients with heart failure has improved over the past two decades; heart failure with a reduced ejection fraction is a treatable disease. Inhibitors of the renin-angiotensin system are a cornerstone of the management of both disorders; they prevent the onset of heart failure and the progression of nephropathy in patients with diabetes, and they reduce the risk of cardiovascular death and hospitalization in those with established heart failure (3,15). Diabetes does not influence the magnitude of the relative benefit of ACE inhibitors in patients with heart failure, but patients with diabetes experience a greater absolute benefit from treatment (16).”

“The totality of evidence from randomized trials […] demonstrates that in patients with diabetes, heart failure is not only common and clinically important, but it can also be prevented and treated. This conclusion is particularly significant because physicians have long ignored heart failure in their focus on glycemic control and their concerns about the ischemic macrovascular complications of diabetes (1).”

iii. Closely related to the above study: Mortality Reduction Associated With β-Adrenoceptor Inhibition in Chronic Heart Failure Is Greater in Patients With Diabetes.

“Diabetes increases mortality in patients with chronic heart failure (CHF) and reduced left ventricular ejection fraction. Studies have questioned the safety of β-adrenoceptor blockers (β-blockers) in some patients with diabetes and reduced left ventricular ejection fraction. We examined whether β-blockers and ACE inhibitors (ACEIs) are associated with differential effects on mortality in CHF patients with and without diabetes. […] We conducted a prospective cohort study of 1,797 patients with CHF recruited between 2006 and 2014, with mean follow-up of 4 years.”

RESULTS Patients with diabetes were prescribed larger doses of β-blockers and ACEIs than were patients without diabetes. Increasing β-blocker dose was associated with lower mortality in patients with diabetes (8.9% per mg/day; 95% CI 5–12.6) and without diabetes (3.5% per mg/day; 95% CI 0.7–6.3), although the effect was larger in people with diabetes (interaction P = 0.027). Increasing ACEI dose was associated with lower mortality in patients with diabetes (5.9% per mg/day; 95% CI 2.5–9.2) and without diabetes (5.1% per mg/day; 95% CI 2.6–7.6), with similar effect size in these groups (interaction P = 0.76).”

“Our most important findings are:

  • Higher-dose β-blockers are associated with lower mortality in patients with CHF and LVSD, but patients with diabetes may derive more benefit from higher-dose β-blockers.

  • Higher-dose ACEIs were associated with comparable mortality reduction in people with and without diabetes.

  • The association between higher β-blocker dose and reduced mortality is most pronounced in patients with diabetes who have more severely impaired left ventricular function.

  • Among patients with diabetes, the relationship between β-blocker dose and mortality was not associated with glycemic control or insulin therapy.”

“We make the important observation that patients with diabetes may derive more prognostic benefit from higher β-blocker doses than patients without diabetes. These data should provide reassurance to patients and health care providers and encourage careful but determined uptitration of β-blockers in this high-risk group of patients.”

iv. Diabetes, Prediabetes, and Brain Volumes and Subclinical Cerebrovascular Disease on MRI: The Atherosclerosis Risk in Communities Neurocognitive Study (ARIC-NCS).

“Diabetes and prediabetes are associated with accelerated cognitive decline (1), and diabetes is associated with an approximately twofold increased risk of dementia (2). Subclinical brain pathology, as defined by small vessel disease (lacunar infarcts, white matter hyperintensities [WMH], and microhemorrhages), large vessel disease (cortical infarcts), and smaller brain volumes also are associated with an increased risk of cognitive decline and dementia (3–7). The mechanisms by which diabetes contributes to accelerated cognitive decline and dementia are not fully understood, but contributions of hyperglycemia to both cerebrovascular disease and primary neurodegenerative disease have been suggested in the literature, although results are inconsistent (2,8). Given that diabetes is a vascular risk factor, brain atrophy among individuals with diabetes may be driven by increased cerebrovascular disease. Brain magnetic resonance imaging (MRI) provides a noninvasive opportunity to study associations of hyperglycemia with small vessel disease (lacunar infarcts, WMH, microhemorrhages), large vessel disease (cortical infarcts), and brain volumes (9).”

“Overall, the mean age of participants [(n = 1,713)] was 75 years, 60% were women, 27% were black, 30% had prediabetes (HbA1c 5.7 to <6.5%), and 35% had diabetes. Compared with participants without diabetes and HbA1c <5.7%, those with prediabetes (HbA1c 5.7 to <6.5%) were of similar age (75.2 vs. 75.0 years; P = 0.551), were more likely to be black (24% vs. 11%; P < 0.001), have less than a high school education (11% vs. 7%; P = 0.017), and have hypertension (71% vs. 63%; P = 0.012) (Table 1). Among participants with diabetes, those with HbA1c <7.0% versus ≥7.0% were of similar age (75.4 vs. 75.1 years; P = 0.481), but those with diabetes and HbA1c ≥7.0% were more likely to be black (39% vs. 28%; P = 0.020) and to have less than a high school education (23% vs. 16%; P = 0.031) and were more likely to have a longer duration of diabetes (12 vs. 8 years; P < 0.001).”

“Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (β −0.20 SDs; 95% CI −0.31, −0.09) and smaller regional brain volumes, including frontal, temporal, occipital, and parietal lobes; deep gray matter; Alzheimer disease signature region; and hippocampus (all P < 0.05) […]. Compared with participants with diabetes and HbA1c <7.0%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (P < 0.001), frontal lobe volume (P = 0.012), temporal lobe volume (P = 0.012), occipital lobe volume (P = 0.008), parietal lobe volume (P = 0.015), deep gray matter volume (P < 0.001), Alzheimer disease signature region volume (P = 0.031), and hippocampal volume (P = 0.016). Both participants with diabetes and HbA1c <7.0% and those with prediabetes (HbA1c 5.7 to <6.5%) had similar total and regional brain volumes compared with participants without diabetes and HbA1c <5.7% (all P > 0.05). […] No differences in the presence of lobar microhemorrhages, subcortical microhemorrhages, cortical infarcts, and lacunar infarcts were observed among the diabetes-HbA1c categories (all P > 0.05) […]. Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had increased WMH volume (P = 0.016). The WMH volume among participants with diabetes and HbA1c ≥7.0% was also significantly greater than among those with diabetes and HbA1c <7.0% (P = 0.017).”

“Those with diabetes duration ≥10 years were older than those with diabetes duration <10 years (75.9 vs. 75.0 years; P = 0.041) but were similar in terms of race and sex […]. Compared with participants with diabetes duration <10 years, those with diabetes duration ≥10 years had smaller adjusted total brain volume (β −0.13 SDs; 95% CI −0.20, −0.05) and smaller temporal lobe (β −0.14 SDs; 95% CI −0.24, −0.03), parietal lobe (β −0.11 SDs; 95% CI −0.21, −0.01), and hippocampal (β −0.16 SDs; 95% CI −0.30, −0.02) volumes […]. Participants with diabetes duration ≥10 years also had a 2.44 times increased odds (95% CI 1.46, 4.05) of lacunar infarcts compared with those with diabetes duration <10 years”.

Conclusions
In this community-based population, we found that ARIC-NCS participants with diabetes with HbA1c ≥7.0% have smaller total and regional brain volumes and an increased burden of WMH, but those with prediabetes (HbA1c 5.7 to <6.5%) and diabetes with HbA1c <7.0% have brain volumes and markers of subclinical cerebrovascular disease similar to those without diabetes. Furthermore, among participants with diabetes, those with more-severe disease (as measured by higher HbA1c and longer disease duration) had smaller total and regional brain volumes and an increased burden of cerebrovascular disease compared with those with lower HbA1c and shorter disease duration. However, we found no evidence that associations of diabetes with smaller brain volumes are mediated by cerebrovascular disease.

“The findings of this study extend the current literature that suggests that diabetes is strongly associated with brain volume loss (11,25–27). Global brain volume loss (11,25–27) has been consistently reported, but associations of diabetes with smaller specific brain regions have been less robust (27,28). Similar to prior studies, the current results show that compared with individuals without diabetes, those with diabetes have smaller total brain volume (11,25–27) and regional brain volumes, including frontal and occipital lobes, deep gray matter, and the hippocampus (25,27). Furthermore, the current study suggests that greater severity of disease (as measured by HbA1c and diabetes duration) is associated with smaller total and regional brain volumes. […] Mechanisms whereby diabetes may contribute to brain volume loss include accelerated amyloid-β and hyperphosphorylated tau deposition as a result of hyperglycemia (29). Another possible mechanism involves pancreatic amyloid (amylin) infiltration of the brain, which then promotes amyloid-β deposition (29). […] Taken together, […] the current results suggest that diabetes is associated with both lower brain volumes and increased cerebrovascular pathology (WMH and lacunes).”

v. Interventions to increase attendance for diabetic retinopathy screening (Cochrane review).

“The primary objective of the review was to assess the effectiveness of quality improvement (QI) interventions that seek to increase attendance for DRS in people with type 1 and type 2 diabetes.

Secondary objectives were:
To use validated taxonomies of QI intervention strategies and behaviour change techniques (BCTs) to code the description of interventions in the included studies and determine whether interventions that include particular QI strategies or component BCTs are more effective in increasing screening attendance;
To explore heterogeneity in effect size within and between studies to identify potential explanatory factors for variability in effect size;
To explore differential effects in subgroups to provide information on how equity of screening attendance could be improved;
To critically appraise and summarise current evidence on the resource use, costs and cost effectiveness.”

“We included 66 RCTs conducted predominantly (62%) in the USA. Overall we judged the trials to be at low or unclear risk of bias. QI strategies were multifaceted and targeted patients, healthcare professionals or healthcare systems. Fifty-six studies (329,164 participants) compared intervention versus usual care (median duration of follow-up 12 months). Overall, DRS [diabetic retinopathy screening] attendance increased by 12% (risk difference (RD) 0.12, 95% confidence interval (CI) 0.10 to 0.14; low-certainty evidence) compared with usual care, with substantial heterogeneity in effect size. Both DRS-targeted (RD 0.17, 95% CI 0.11 to 0.22) and general QI interventions (RD 0.12, 95% CI 0.09 to 0.15) were effective, particularly where baseline DRS attendance was low. All BCT combinations were associated with significant improvements, particularly in those with poor attendance. We found higher effect estimates in subgroup analyses for the BCTs ‘goal setting (outcome)’ (RD 0.26, 95% CI 0.16 to 0.36) and ‘feedback on outcomes of behaviour’ (RD 0.22, 95% CI 0.15 to 0.29) in interventions targeting patients, and ‘restructuring the social environment’ (RD 0.19, 95% CI 0.12 to 0.26) and ‘credible source’ (RD 0.16, 95% CI 0.08 to 0.24) in interventions targeting healthcare professionals.”
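
[As an aside, a risk difference (RD) is just the difference between two proportions. Below is a minimal sketch, in Python, of how an RD and a simple Wald-type 95% CI could be computed for a single hypothetical trial; the counts are invented, and the review's pooled estimates of course come from a meta-analysis across trials rather than from a calculation like this.]

```python
import math

def risk_difference(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Unadjusted risk difference with a Wald-type 95% CI (illustrative only)."""
    p1, p0 = events_tx / n_tx, events_ctrl / n_ctrl
    rd = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n_tx + p0 * (1 - p0) / n_ctrl)
    return rd, (rd - z * se, rd + z * se)

# hypothetical single trial: 62% vs. 50% screening attendance
print(risk_difference(620, 1000, 500, 1000))  # RD = 0.12, CI roughly (0.08, 0.16)
```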

“Ten studies (23,715 participants) compared a more intensive (stepped) intervention versus a less intensive intervention. In these studies DRS attendance increased by 5% (RD 0.05, 95% CI 0.02 to 0.09; moderate-certainty evidence).”

“Overall, we found that there is insufficient evidence to draw robust conclusions about the relative cost effectiveness of the interventions compared to each other or against usual care.”

“The results of this review provide evidence that QI interventions targeting patients, healthcare professionals or the healthcare system are associated with meaningful improvements in DRS attendance compared to usual care. There was no statistically significant difference between interventions specifically aimed at DRS and those which were part of a general QI strategy for improving diabetes care.”

vi. Diabetes in China: Epidemiology and Genetic Risk Factors and Their Clinical Utility in Personalized Medication.

“The incidence of type 2 diabetes (T2D) has rapidly increased over recent decades, and T2D has become a leading public health challenge in China. Compared with European descents, Chinese patients with T2D are diagnosed at a relatively young age and low BMI. A better understanding of the factors contributing to the diabetes epidemic is crucial for determining future prevention and intervention programs. In addition to environmental factors, genetic factors contribute substantially to the development of T2D. To date, more than 100 susceptibility loci for T2D have been identified. Individually, most T2D genetic variants have a small effect size (10–20% increased risk for T2D per risk allele); however, a genetic risk score that combines multiple T2D loci could be used to predict the risk of T2D and to identify individuals who are at a high risk. […] In this article, we review the epidemiological trends and recent progress in the understanding of T2D genetic etiology and further discuss personalized medicine involved in the treatment of T2D.”
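
[A genetic risk score of the kind mentioned above is usually just a weighted sum of risk-allele counts across loci. Here is a minimal sketch assuming log-odds-ratio weights; the loci and effect sizes are invented for illustration.]

```python
import math

def weighted_grs(risk_allele_counts, per_allele_odds_ratios):
    """Sum of (risk allele count at each locus) x (log odds ratio for that locus)."""
    return sum(n * math.log(or_) for n, or_ in zip(risk_allele_counts, per_allele_odds_ratios))

# three hypothetical loci with per-allele ORs in the 10-20% range quoted above
print(weighted_grs(risk_allele_counts=[2, 1, 0],
                   per_allele_odds_ratios=[1.20, 1.15, 1.10]))
```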

“Over the past three decades, the prevalence of diabetes in China has sharply increased. The prevalence of diabetes was reported to be less than 1% in 1980 (2), 5.5% in 2001 (3), 9.7% in 2008 (4), and 10.9% in 2013, according to the latest published nationwide survey (5) […]. The prevalence of diabetes was higher in the senior population, men, urban residents, individuals living in economically developed areas, and overweight and obese individuals. The estimated prevalence of prediabetes in 2013 was 35.7%, which was much higher than the estimate of 15.5% in the 2008 survey. Similarly, the prevalence of prediabetes was higher in the senior population, men, and overweight and obese individuals. However, prediabetes was more prevalent in rural residents than in urban residents. […] the 2013 survey also compared the prevalence of diabetes among different races. The crude prevalence of diabetes was 14.7% in the majority group, i.e., Chinese Han, which was higher than that in most minority ethnic groups, including Tibetan, Zhuang, Uyghur, and Muslim. The crude prevalence of prediabetes was also higher in the Chinese Han ethnic group. The Tibetan participants had the lowest prevalence of diabetes and prediabetes (4.3% and 31.3%).”

“[T]he prevalence of diabetes in young people is relatively high and increasing. The prevalence of diabetes in the 20- to 39-year age-group was 3.2%, according to the 2008 national survey (4), and was 5.9%, according to the 2013 national survey (5). The prevalence of prediabetes also increased from 9.0% in 2008 to 28.8% in 2013 […]. Young people suffering from diabetes have a higher risk of chronic complications, which are the major cause of mortality and morbidity in diabetes. According to a study conducted in Asia (6), patients with young-onset diabetes had higher mean concentrations of HbA1c and LDL cholesterol and a higher prevalence of retinopathy (20% vs. 18%, P = 0.011) than those with late-onset diabetes. In the Chinese, patients with early-onset diabetes had a higher risk of nonfatal cardiovascular disease (7) than did patients with late-onset diabetes (odds ratio [OR] 1.91, 95% CI 1.81–2.02).”

“As approximately 95% of patients with diabetes in China have T2D, the rapid increase in the prevalence of diabetes in China may be attributed to the increasing rates of overweight and obesity and the reduction in physical activity, which is driven by economic development, lifestyle changes, and diet (3,11). According to a series of nationwide surveys conducted by the China Physical Fitness Surveillance Center (12), the prevalence of overweight (BMI ≥23.0 to <27.5 kg/m2) in Chinese adults aged 20–59 years increased from 37.4% in 2000 to 39.2% in 2005, 40.7% in 2010, and 41.2% in 2014, with an estimated increase of 0.27% per year. The prevalence of obesity (BMI ≥27.5 kg/m2) increased from 8.6% in 2000 to 10.3% in 2005, 12.2% in 2010, and 12.9% in 2014, with an estimated increase of 0.32% per year […]. The prevalence of central obesity increased from 13.9% in 2000 to 18.3% in 2005, 22.1% in 2010, and 24.9% in 2014, with an estimated increase of 0.78% per year. Notably, T2D develops at a considerably lower BMI in the Chinese population than that in European populations. […] The relatively high risk of diabetes at a lower BMI could be partially attributed to the tendency toward visceral adiposity in East Asian populations, including the Chinese population (13). Moreover, East Asian populations have been found to have a higher insulin sensitivity with a much lower insulin response than European descent and African populations, implying a lower compensatory β-cell function, which increases the risk of progressing to overt diabetes (14).”

“Over the past two decades, linkage analyses, candidate gene approaches, and large-scale GWAS have successfully identified more than 100 genes that confer susceptibility to T2D among the world’s major ethnic populations […], most of which were discovered in European populations. However, less than 50% of these European-derived loci have been successfully confirmed in East Asian populations. […] there is a need to identify specific genes that are associated with T2D in other ethnic populations. […] Although many genetic loci have been shown to confer susceptibility to T2D, the mechanism by which these loci participate in the pathogenesis of T2D remains unknown. Most T2D loci are located near genes that are related to β-cell function […] most single nucleotide polymorphisms (SNPs) contributing to the T2D risk are located in introns, but whether these SNPs directly modify gene expression or are involved in linkage disequilibrium with unknown causal variants remains to be investigated. Furthermore, the loci discovered thus far collectively account for less than 15% of the overall estimated genetic heritability.”

“The areas under the receiver operating characteristic curves (AUCs) are usually used to assess the discriminative accuracy of an approach. The AUC values range from 0.5 to 1.0, where an AUC of 0.5 represents a lack of discrimination and an AUC of 1 represents perfect discrimination. An AUC ≥0.75 is considered clinically useful. The dominant conventional risk factors, including age, sex, BMI, waist circumference, blood pressure, family history of diabetes, physical activity level, smoking status, and alcohol consumption, can be combined to construct conventional risk factor–based models (CRM). Several studies have compared the predictive capacities of models with and without genetic information. The addition of genetic markers to a CRM could slightly improve the predictive performance. For example, one European study showed that the addition of an 11-SNP GRS to a CRM marginally improved the risk prediction (AUC was 0.74 without and 0.75 with the genetic markers, P < 0.001) in a prospective cohort of 16,000 individuals (37). A meta-analysis (38) consisting of 23 studies investigating the predictive performance of T2D risk models also reported that the AUCs only slightly increased with the addition of genetic information to the CRM (median AUC was increased from 0.78 to 0.79). […] Despite great advances in genetic studies, the clinical utility of genetic information in the prediction, early identification, and prevention of T2D remains in its preliminary stage.”
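
[To illustrate why adding a GRS to a conventional risk factor model (CRM) tends to move the AUC only marginally, here is a small synthetic-data sketch: the outcome is driven mostly by the conventional factors plus a comparatively weak polygenic term, and the two models' discrimination is then compared. All numbers are made up; this is not a reconstruction of any of the cited studies' models.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20000
crm = rng.normal(size=(n, 3))   # stand-ins for conventional risk factors (age, BMI, ...)
grs = rng.normal(size=(n, 1))   # standardized genetic risk score

# outcome dominated by conventional factors, with a weak genetic contribution
logit = 1.0 * crm[:, 0] + 0.8 * crm[:, 1] + 0.5 * crm[:, 2] + 0.25 * grs[:, 0] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_both = np.hstack([crm, grs])
auc_crm = roc_auc_score(y, LogisticRegression().fit(crm, y).predict_proba(crm)[:, 1])
auc_both = roc_auc_score(y, LogisticRegression().fit(X_both, y).predict_proba(X_both)[:, 1])
print(round(auc_crm, 3), round(auc_both, 3))  # the second AUC is typically only slightly higher
```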

“An increasing number of studies have highlighted that early nutrition has a persistent effect on the risk of diabetes in later life (40,41). China’s Great Famine of 1959–1962 is considered to be the largest and most severe famine of the 20th century […] Li et al. (43) found that offspring of mothers exposed to the Chinese famine have a 3.9-fold increased risk of diabetes or hyperglycemia as adults. A more recent study (the Survey on Prevalence in East China for Metabolic Diseases and Risk Factors [SPECT-China]) conducted in 2014, among 6,897 adults from Shanghai, Jiangxi, and Zhejiang provinces, had the same conclusion that famine exposure during the fetal period (OR 1.53, 95% CI 1.09–2.14) and childhood (OR 1.82, 95% CI 1.21–2.73) was associated with diabetes (44). These findings indicate that undernutrition during early life increases the risk of hyperglycemia in adulthood and this association is markedly exaggerated when facing overnutrition in later life.”

February 23, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Health Economics, Immunology, Medicine, Neurology, Ophthalmology, Pharmacology, Studies | Leave a comment

Endocrinology (part 5 – calcium and bone metabolism)

Some observations from chapter 6:

“*Osteoclasts – derived from the monocytic cells; resorb bone. *Osteoblasts – derived from the fibroblast-like cells; make bone. *Osteocytes – buried osteoblasts; sense mechanical strain in bone. […] In order to ensure that bone can undertake its mechanical and metabolic functions, it is in a constant state of turnover […] Bone is laid down rapidly during skeletal growth at puberty. Following this, there is a period of stabilization of bone mass in early adult life. After the age of ~40, there is a gradual loss of bone in both sexes. This occurs at the rate of approximately 0.5% annually. However, in ♀ after the menopause, there is a period of rapid bone loss. The accelerated loss is maximal in the first 2-5 years after the cessation of ovarian function and then gradually declines until the previous gradual rate of loss is once again established. The excess bone loss associated with the menopause is of the order of 10% of skeletal mass. This menopause-associated loss, coupled with higher peak bone mass acquisition in ♂, largely explains why osteoporosis and its associated fractures are more common in ♀.”

“The clinical utility of routine measurements of bone turnover markers is not yet established. […] Skeletal radiology[:] *Useful for: *Diagnosis of fracture. *Diagnosis of specific diseases (e.g. Paget’s disease and osteomalacia). *Identification of bone dysplasia. *Not useful for assessing bone density. […] Isotope bone scans are useful for identifying localized areas of bone disease, such as fracture, metastases, or Paget’s disease. […] Isotope bone scans are particularly useful in Paget’s disease to establish the extent and sites of skeletal involvement and the underlying disease activity. […] Bone biopsy is occasionally necessary for the diagnosis of patients with complex metabolic bone diseases. […] Bone biopsy is not indicated for the routine diagnosis of osteoporosis. It should only be undertaken in highly specialist centres with appropriate expertise. […] Measurement of 24h urinary excretion of calcium provides a measure of risk of renal stone formation or nephrocalcinosis in states of chronic hypercalcaemia. […] 25OH vitamin D […] is the main storage form of vitamin D, and the measurement of ‘total vitamin D’ is the most clinically useful measure of vitamin D status. Internationally, there remains controversy around a ‘normal’ or ‘optimal’ concentration of vitamin D. Levels over 50nmol/L are generally accepted as satisfactory, with values <25nmol/L representing deficiency. True osteomalacia occurs with vitamin D values <15 nmol/L. Low levels of 25OHD can result from a variety of causes […] Bone mass is quoted in terms of the number of standard deviations from an expected mean. […] A reduction of one SD in bone density will approximately double the risk of fracture.”

[I should perhaps add a cautionary note here that while this variable is very useful in general, it is more useful in some contexts than in others; and in some specific disease process contexts it is quite clear that it will tend to underestimate the fracture risk. Type 1 diabetes is a clear example. For more details, see this post.]
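
[To make the last rule of thumb concrete: if bone mass is expressed as a number of SDs below the reference mean (a T-score), and each SD reduction roughly doubles fracture risk, the implied relative risk can be sketched as below. The factor-of-two gradient is an approximation and, per the caveat above, will be misleading in some disease contexts.]

```python
def approx_relative_fracture_risk(t_score, gradient_per_sd=2.0):
    """Relative fracture risk vs. someone at the reference mean, using the
    'roughly doubles per SD' rule quoted above. t_score is in SD units,
    negative values meaning lower bone density."""
    return gradient_per_sd ** (-t_score)

print(approx_relative_fracture_risk(-1.0))  # ~2-fold
print(approx_relative_fracture_risk(-2.5))  # ~5.7-fold (the usual osteoporosis threshold)
```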

“Hypercalcaemia is found in 5% of hospital patients and in 0.5% of the general population. […] Many different disease states can lead to hypercalcaemia. […] In asymptomatic community-dwelling subjects, the vast majority of hypercalcaemia is the result of hyperparathyroidism. […] The clinical features of hypercalcaemia are well recognized […]; unfortunately, they are non-specific […] [They include:] *Polyuria. *Polydipsia. […] *Anorexia. *Vomiting. *Constipation. *Abdominal pain. […] *Confusion. *Lethargy. *Depression. […] Clinical signs of hypercalcaemia are rare. […] the presence of bone pain or fracture and renal stones […] indicate the presence of chronic hypercalcaemia. […] Hypercalcaemia is usually a late manifestation of malignant disease, and the primary lesion is usually evident by the time hypercalcaemia is expressed (50% of patients die within 30 days).”

“Primary hyperparathyroidism [is] [p]resent in up to 1 in 500 of the general population where it is predominantly a disease of post-menopausal ♀ […] The normal physiological response to hypocalcaemia is an increase in PTH secretion. This is termed 2° hyperparathyroidism and is not pathological in as much as the PTH secretion remains under feedback control. Continued stimulation of the parathyroid glands can lead to autonomous production of PTH. This, in turn, causes hypercalcaemia which is termed tertiary hyperparathyroidism. This is usually seen in the context of renal disease […] In majority of patients [with hyperparathyroidism] without end-organ damage, disease is benign and stable. […] Investigation is, therefore, primarily aimed at determining the presence of end-organ damage from hypercalcaemia in order to determine whether operative intervention is indicated. […] It is generally accepted that all patients with symptomatic hyperparathyroidism or evidence of end-organ damage should be considered for parathyroidectomy. This would include: *Definite symptoms of hypercalcaemia. […] *Impaired renal function. *Renal stones […] *Parathyroid bone disease, especially osteitis fibrosis cystica. *Pancreatitis. […] Patients not managed with surgery require regular follow-up. […] <5% fail to become normocalcaemic [after surgery], and these should be considered for a second operation. […] Patients rendered permanently hypoparathyroid by surgery require lifelong supplements of active metabolites of vitamin D with calcium. This can lead to hypercalciuria, and the risk of stone formation may still be present in these patients. […] In hypoparathyroidism, the target serum calcium should be at the low end of the reference range. […] any attempt to raise the plasma calcium well into the normal range is likely to result in unacceptable hypercalciuria”.

“Although hypocalcaemia can result from failure of any of the mechanisms by which serum calcium concentration is maintained, it is usually the result of either failure of PTH secretion or because of the inability to release calcium from bone. […] The clinical features of hypocalcaemia are largely as a result of neuromuscular excitability. In order of  severity, these include: *Tingling – especially of fingers, toes, or lips. *Numbness – especially of fingers, toes, or lips. *Cramps. *Carpopedal spasm. *Stridor due to laryngospasm. *Seizures. […] symptoms of hypocalcaemia tend to reflect the severity and rapidity of onset of the metabolic abnormality. […] there may be clinical signs and symptoms associated with the underlying condition: *Vitamin D deficiency may be associated with generalized bone pain, fractures, or proximal myopathy […] *Hypoparathyroidism can be accompanied by mental slowing and personality disturbances […] *If hypocalcaemia is present during the development of permanent teeth, these may show areas of enamel hypoplasia. This can be a useful physical sign, indicating that the hypocalcaemia is long-standing. […] Acute symptomatic hypocalcaemia is a medical emergency and demands urgent treatment whatever the cause […] *Patients with tetany or seizures require urgent IV treatment with calcium gluconate […] Care must be taken […] as too rapid elevation of the plasma calcium can cause arrhythmias. […] *Treatment of chronic hypocalcaemia is more dependent on the cause. […] In patients with mild parathyroid dysfunction, it may be possible to achieve acceptable calcium concentrations by using calcium supplements alone. […] The majority of patients will not achieve adequate control with such treatment. In those cases, it is necessary to use vitamin D or its metabolites in pharmacological doses to maintain plasma calcium.”

“Pseudohypoparathyroidism[:] *Resistance to parathyroid hormone action. *Due to defective signalling of PTH action via cell membrane receptor. *Also affects TSH, LH, FSH, and GH signalling. […] Patients with the most common type of pseudohypoparathyroidism (type 1a) have a characteristic set of skeletal abnormalities, known as Albright’s hereditary osteodystrophy. This comprises: *Short stature. *Obesity. *Round face. *Short metacarpals. […] The principles underlying the treatment of pseudohypoparathyroidism are the same as those underlying hypoparathyroidism. *Patients with the most common form of pseudohypoparathyroidism may have resistance to the action of other hormones which rely on G protein signalling. They, therefore, need to be assessed for thyroid and gonadal dysfunction (because of defective TSH and gonadotrophin action). If these deficiencies are present, they need to be treated in the conventional manner.”

“Osteomalacia occurs when there is inadequate mineralization of mature bone. Rickets is a disorder of the growing skeleton where there is inadequate mineralization of bone as it is laid down at the epiphysis. In most instances, osteomalacia leads to build-up of excessive unmineralized osteoid within the skeleton. In rickets, there is build-up of unmineralized osteoid in the growth plate. […] These two related conditions may coexist. […] Clinical features [of osteomalacia:] *Bone pain. *Deformity. *Fracture. *Proximal myopathy. *Hypocalcaemia (in vitamin D deficiency). […] The majority of patients with osteomalacia will show no specific radiological abnormalities. *The most characteristic abnormality is the Looser’s zone or pseudofracture. If these are present, they are virtually pathognomonic of osteomalacia. […] Oncogenic osteomalacia[:] Certain tumours appear to be able to produce FGF23 which is phosphaturic. This is rare […] Clinically, such patients usually present with profound myopathy as well as bone pain and fracture. […] Complete removal of the tumour results in resolution of the biochemical and skeletal abnormalities. If this is not possible […], treatment with vitamin D metabolites and phosphate supplements […] may help the skeletal symptoms.”

“Hypophosphataemia[:] Phosphate is important for normal mineralization of bone. In the absence of sufficient phosphate, osteomalacia results. […] In addition, phosphate is important in its own right for neuromuscular function, and profound hypophosphataemia can be accompanied by encephalopathy, muscle weakness, and cardiomyopathy. It must be remembered that, as phosphate is primarily an intracellular anion, a low plasma phosphate does not necessarily represent actual phosphate depletion. […] Mainstay [of treatment] is phosphate replacement […] *Long-term administration of phosphate supplements stimulates parathyroid activity. This can lead to hypercalcaemia, a further fall in phosphate, with worsening of the bone disease […] To minimize parathyroid stimulation, it is usual to give one of the active metabolites of vitamin D in conjunction with phosphate.”

“Although the term osteoporosis refers to the reduction in the amount of bony tissue within the skeleton, this is generally associated with a loss of structural integrity of the internal architecture of the bone. The combination of both these changes means that osteoporotic bone is at high risk of fracture, even after trivial injury. […] Historically, there has been a primary reliance on bone mineral density as a threshold for treatment, whereas currently there is far greater emphasis on assessing individual patients’ risk of fracture that incorporates multiple clinical risk factors as well as bone mineral density. […] Osteoporosis may arise from a failure of the body to lay down sufficient bone during growth and maturation; an earlier than usual onset of bone loss following maturity; or an increased rate of that loss. […] Early menopause or late puberty (in ♂ or ♀) is associated with risk of osteoporosis. […] Lifestyle factors affecting bone mass [include:] *weight-bearing exercise [increases bone mass] […] *Smoking. *Excessive alcohol. *Nulliparity. *Poor calcium nutrition. [These all decrease bone mass] […] The risk of osteoporotic fracture increases with age. Fracture rates in ♂ are approximately half of those seen in ♀ of the same age. A ♀ aged 50 has approximately a 1:2 chance [risk, surely… – US] of sustaining an osteoporotic fracture in the rest of her life. The corresponding figure for a ♂ is 1:5. […] One-fifth of hip fracture victims will die within 6 months of the injury, and only 50% will return to their previous level of independence.”

“Any fracture, other than those affecting fingers, toes, or face, which is caused by a fall from standing height or less is called a fragility (low-trauma) fracture, and underlying osteoporosis should be considered. Patients suffering such a fracture should be considered for investigation and/or treatment for osteoporosis. […] [Osteoporosis is] [u]sually clinically silent until an acute fracture. *Two-thirds of vertebral fractures do not come to clinical attention. […] Osteoporotic vertebral fractures only rarely lead to neurological impairment. Any evidence of spinal cord compression should prompt a search for malignancy or other underlying cause. […] Osteoporosis does not cause generalized skeletal pain. […] Biochemical markers of bone turnover may be helpful in the calculation of fracture risk and in judging the response to drug therapies, but they have no role in the diagnosis of osteoporosis. […] An underlying cause for osteoporosis is present in approximately 10-30% of women and up to 50% of men with osteoporosis. […] 2° causes of osteoporosis are more common in ♂ and need to be excluded in all ♂ with osteoporotic fracture. […] Glucocorticoid treatment is one of the major 2° causes of osteoporosis.”

February 22, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology | Leave a comment

Words

The words below are mostly words I encountered while reading Wolfe’s The Claw of the Conciliator and O’Brian’s Master and Commander. I wanted to finish off my ‘coverage’ of those books here, so I decided to include a few more words than usual (the post includes ~100 words, instead of the usual ~80).

Threnody. Noctilucent. Dell. Cariole. Rick. Campanile. Obeisance. Cerbotana. Caloyer. Mitre. Orpiment. Tribade/tribadism (NSFW words?). Thiasus. Argosy. Partridge. Cenotaph. Seneschal. Ossifrage. Faille. Calotte.

Meretrice. Bijou. Espalier. Gramary. Jennet. Algophilia/algophilist. Clerestory. Liquescent. Pawl. Lenitive. Bream. Bannister. Jacinth. Inimical. Grizzled. Trabacalo. Xebec. Suet. Stanchion. Beadle.

Philomath. Gaby. Purser. Tartan. Eparterial. Otiose. Cryptogam. Puncheon. Neume. Cully. Carronade. Becket. Belay. Capstan. Nacreous. Fug. Cosset. Roborative. Comminatory. Strake.

Douceur. Bowsprit. Orlop. Turbot. Luffing. Sempiternal. Tompion. Loblolly (boy). Felucca. Genet. Steeve. Gremial. Epicene. Quaere. Mumchance. Hance. Divertimento. Halliard. Gleet. Rapparee.

Prepotent. Tramontana. Hecatomb. Inveteracy. Davit. Vaticination/vaticinatory. Trundle. Antinomian. Scunner. Shay. Demulcent. Wherry. Cullion. Hemidemisemiquaver. Cathead. Cordage. Kedge. Clew. Semaphore. Tumblehome.

February 21, 2018 Posted by | Books, Language | Leave a comment

A few diabetes papers of interest

(I hadn’t expected to only cover two papers in this post, but the second paper turned out to include a lot of stuff I figured might be worth adding here. I might add another post later this week including some of the other studies I had intended to cover in this post.)

i. Burden of Mortality Attributable to Diagnosed Diabetes: A Nationwide Analysis Based on Claims Data From 65 Million People in Germany.

“Diabetes is among the 10 most common causes of death worldwide (2). Between 1990 and 2010, the number of deaths attributable to diabetes has doubled (2). People with diabetes have a reduced life expectancy of ∼5 to 6 years (3). The most common cause of death in people with diabetes is cardiovascular disease (3,4). Over the past few decades, a reduction of diabetes mortality has been observed in several countries (5–9). However, the excess risk of death is still higher than in the population without diabetes, particularly in younger age-groups (4,9,10). Unfortunately, in most countries worldwide, reliable data on diabetes mortality are lacking (1). In a few European countries, such as Denmark (5) and Sweden (4), mortality analyses are based on national diabetes registries that include all age-groups. However, Germany and many other European countries do not have such national registries. Until now, age-standardized hazard ratios for diabetes mortality between 1.4 and 2.6 have been published for Germany on the basis of regional studies and surveys with small respondent numbers (11–14). To the best of our knowledge, no nationwide estimates of the number of excess deaths due to diabetes have been published for Germany, and no information on older age-groups >79 years is currently available.

In 2012, changes in the regulation of data transparency enabled the use of nationwide routine health care data from the German statutory health insurance system, which insures ∼90% of the German population (15). These changes have allowed for new possibilities for estimating the burden of diabetes in Germany. Hence, this study estimates the number of excess deaths due to diabetes (ICD-10 codes E10–E14) and type 2 diabetes (ICD-10 code E11) in Germany, which is the number of deaths that could have been prevented if the diabetes mortality rate was as high as that of the population without diabetes.”

“Nationwide data on mortality ratios for diabetes and no diabetes are not available for Germany. […] the age- and sex-specific mortality rate ratios between people with diabetes and without diabetes were used from a Danish study wherein the Danish National Diabetes Register was linked to the individual mortality data from the Civil Registration System that includes all people residing in Denmark (5). Because the Danish National Diabetes Register is one of the most accurate diabetes registries in Europe, with a sensitivity of 86% and positive predictive value of 90% (5), we are convinced that the Danish estimates are highly valid and reliable. Denmark and Germany have a comparable standard of living and health care system. The diabetes prevalence in these countries is similar (Denmark 7.2%, Germany 7.4% [20]) and mortality of people with and without diabetes comparable, as shown in the European mortality database”
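
[A stylized sketch of the excess-death calculation described above: within each age/sex stratum you apply the mortality rate of people without diabetes to the person-years lived with diabetes, scale by the mortality rate ratio (MRR) to get the deaths expected among people with diabetes, and the difference is the excess. All numbers below are invented; the paper itself applies Danish age- and sex-specific MRRs to German claims data.]

```python
# (person-years with diabetes, mortality rate without diabetes per person-year, MRR)
strata = [
    (1_000_000, 0.010, 2.6),   # hypothetical younger stratum: low baseline rate, high MRR
    (1_500_000, 0.030, 1.9),
    (1_200_000, 0.080, 1.4),   # hypothetical older stratum: high baseline rate, lower MRR
]

# excess deaths = observed (rate * MRR) minus expected (rate) per person-year, summed over strata
excess = sum(py * rate * (mrr - 1) for py, rate, mrr in strata)
print(f"{excess:,.0f} excess deaths")
```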

“In total, 174,627 excess deaths (137,950 from type 2 diabetes) could have been prevented in 2010 if mortality was the same in people with and without diabetes. Overall, 21% of all deaths in Germany were attributable to diabetes, and 16% were attributable to type 2 diabetes […] Most of the excess deaths occurred in the 70- to 79- and 80- to 89-year-old age-groups (∼34% each) […]. Substantial sex differences were found in diabetes-related excess deaths. From the age of ∼40 years, the number of male excess deaths due to diabetes started to grow, but the number of female excess deaths increased with a delay. Thus, the highest number of male excess deaths due to diabetes occurred at the age of ∼75 years, whereas the peak of female excess deaths was ∼10 years later. […] The diabetes mortality rates increased with age and were always higher than in the population without diabetes. The largest differences in mortality rates between people with and without diabetes were observed in the younger age-groups. […] These results are in accordance with previous studies worldwide (3,4,7,9) and regional studies in Germany (11–13).”

“According to official numbers from the Federal Statistical Office, 858,768 people died in Germany in 2010, with 23,131 deaths due to diabetes, representing 2.7% of the all-cause mortality (26). Hence, in Germany, diabetes is not ranked among the top 10 most common causes of death […]. We found that 21% of all deaths were attributable to diabetes and 16% were attributable to type 2 diabetes; hence, we suggest that the number of excess deaths attributable to diabetes is strongly underestimated if we rely on reported causes of death from death certificates, as official statistics do. Estimating diabetes-related mortality is challenging because most people die as a result of diabetes complications and comorbidities, such as cardiovascular disease and renal failure, which often are reported as the underlying cause of death (1,23). For this reason, another approach is to focus not only on the underlying cause of death but also on the multiple causes of death to assess any mention of a disease on the death certificate (27). In a study from Italy, the method of assessing multiple causes of death revealed that in 12.3% of all studied death certificates, diabetes was mentioned, whereas only 2.9% reported diabetes as the underlying cause of death (27), corresponding to a four times higher proportion of death related to diabetes. Another nationwide analysis from Canada found that diabetes was more than twice as likely to be a contributing factor to death than the underlying cause of death from the years 2004–2008 (28). A recently published study from the U.S. that was based on two representative surveys from 1997 to 2010 found that 11.5% of all deaths were attributable to diabetes, which reflects a three to four times higher proportion of diabetes-related deaths (29). Overall, these results, together with the current calculations, demonstrate that deaths due to diabetes contribute to a much higher burden than previously assumed.”

ii. Standardizing Clinically Meaningful Outcome Measures Beyond HbA1c for Type 1 Diabetes: A Consensus Report of the American Association of Clinical Endocrinologists, the American Association of Diabetes Educators, the American Diabetes Association, the Endocrine Society, JDRF International, The Leona M. and Harry B. Helmsley Charitable Trust, the Pediatric Endocrine Society, and the T1D Exchange.

“Type 1 diabetes is a life-threatening, autoimmune disease that strikes children and adults and can be fatal. People with type 1 diabetes have to test their blood glucose multiple times each day and dose insulin via injections or an infusion pump 24 h a day every day. Too much insulin can result in hypoglycemia, seizures, coma, or death. Hyperglycemia over time leads to kidney, heart, nerve, and eye damage. Even with diligent monitoring, the majority of people with type 1 diabetes do not achieve recommended target glucose levels. In the U.S., approximately one in five children and one in three adults meet hemoglobin A1c (HbA1c) targets and the average patient spends 7 h a day hyperglycemic and over 90 min hypoglycemic (1–3). […] HbA1c is a well-accepted surrogate outcome measure for evaluating the efficacy of diabetes therapies and technologies in clinical practice as well as in research (4–6). […] While HbA1c is used as a primary outcome to assess glycemic control and as a surrogate for risk of developing complications, it has limitations. As a measure of mean blood glucose over 2 or 3 months, HbA1c does not capture short-term variations in blood glucose or exposure to hypoglycemia and hyperglycemia in individuals with type 1 diabetes; HbA1c also does not capture the impact of blood glucose variations on individuals’ quality of life. Recent advances in type 1 diabetes technologies have made it feasible to assess the efficacy of therapies and technologies using a set of outcomes beyond HbA1c and to expand definitions of outcomes such as hypoglycemia. While definitions for hypoglycemia in clinical care exist, they have not been standardized […]. The lack of standard definitions impedes and can confuse their use in clinical practice, impedes development processes for new therapies, makes comparison of studies in the literature challenging, and may lead to regulatory and reimbursement decisions that fail to meet the needs of people with diabetes. To address this vital issue, the type 1 diabetes–stakeholder community launched the Type 1 Diabetes Outcomes Program to develop consensus definitions for a set of priority outcomes for type 1 diabetes. […] The outcomes prioritized under the program include hypoglycemia, hyperglycemia, time in range, diabetic ketoacidosis (DKA), and patient-reported outcomes (PROs).”

“Hypoglycemia is a significant — and potentially fatal — complication of type 1 diabetes management and has been found to be a barrier to achieving glycemic goals (9). Repeated exposure to severe hypoglycemic events has been associated with an increased risk of cardiovascular events and all-cause mortality in people with type 1 or type 2 diabetes (10,11). Hypoglycemia can also be fatal, and severe hypoglycemic events have been associated with increased mortality (12–14). In addition to the physical aspects of hypoglycemia, it can also have negative consequences on emotional status and quality of life.

While there is some variability in how and when individuals manifest symptoms of hypoglycemia, beginning at blood glucose levels <70 mg/dL (3.9 mmol/L) (which is at the low end of the typical post-absorptive plasma glucose range), the body begins to increase its secretion of counterregulatory hormones including glucagon, epinephrine, cortisol, and growth hormone. The release of these hormones can cause moderate autonomic effects, including but not limited to shaking, palpitations, sweating, and hunger (15). Individuals without diabetes do not typically experience dangerously low blood glucose levels because of counterregulatory hormonal regulation of glycemia (16). However, in individuals with type 1 diabetes, there is often a deficiency of the counterregulatory response […]. Moreover, as people with diabetes experience an increased number of episodes of hypoglycemia, the risk of hypoglycemia unawareness, impaired glucose counterregulation (for example, in hypoglycemia-associated autonomic failure [17]), and level 2 and level 3 hypoglycemia […] all increase (18). Therefore, it is important to recognize and treat all hypoglycemic events in people with type 1 diabetes, particularly in populations (children, the elderly) that may not have the ability to recognize and self-treat hypoglycemia. […] More notable clinical symptoms begin at blood glucose levels <54 mg/dL (3.0 mmol/L) (19,20). As the body’s primary utilizer of glucose, the brain is particularly sensitive to decreases in blood glucose concentrations. Both experimental and clinical evidence has shown that, at these levels, neurogenic and neuroglycopenic symptoms including impairments in reaction times, information processing, psychomotor function, and executive function begin to emerge. These neurological symptoms correlate to altered brain activity in multiple brain areas including the prefrontal cortex and medial temporal lobe (21–24). At these levels, individuals may experience confusion, dizziness, blurred or double vision, tremors, and tingling sensations (25). Hypoglycemia at this glycemic level may also increase proinflammatory and prothrombotic markers (26). Left untreated, these symptoms can become severe to the point that an individual will require assistance from others to move or function. Prolonged untreated hypoglycemia that continues to drop below 50 mg/dL (2.8 mmol/L) increases the risk of seizures, coma, and death (27,28). Hypoglycemia that affects cognition and stamina may also increase the risk of accidents and falls, which is a particular concern for older adults with diabetes (29,30).

The glycemic thresholds at which these symptoms occur, as well as the severity with which they manifest themselves, may vary in individuals with type 1 diabetes depending on the number of hypoglycemic episodes they have experienced (31–33). Counterregulatory physiological responses may evolve in patients with type 1 diabetes who endure repeated hypoglycemia over time (34,35).”

“The Steering Committee defined three levels of hypoglycemia […] Level 1 hypoglycemia is defined as a measurable glucose concentration <70 mg/dL (3.9 mmol/L) but ≥54 mg/dL (3.0 mmol/L) that can alert a person to take action. A blood glucose concentration of 70 mg/dL (3.9 mmol/L) has been recognized as a marker of physiological hypoglycemia in humans, as it approximates the glycemic threshold for neuroendocrine responses to falling glucose levels in individuals without diabetes. As such, blood glucose in individuals without diabetes is generally 70–100 mg/dL (3.9–5.6 mmol/L) upon waking and 70–140 mg/dL (3.9–7.8 mmol/L) after meals, and any excursions beyond those levels are typically countered with physiological controls (16,37). However, individuals with diabetes who have impaired or altered counterregulatory hormonal and neurological responses do not have the same internal regulation as individuals without diabetes to avoid dropping below 70 mg/dL (3.9 mmol/L) and becoming hypoglycemic. Recurrent episodes of hypoglycemia lead to increased hypoglycemia unawareness, which can become dangerous as individuals cease to experience symptoms of hypoglycemia, allowing their blood glucose levels to continue falling. Therefore, glucose levels <70 mg/dL (3.9 mmol/L) are clinically important, independent of the severity of acute symptoms.

Level 2 hypoglycemia is defined as a measurable glucose concentration <54 mg/dL (3.0 mmol/L) that needs immediate action. At ∼54 mg/dL (3.0 mmol/L), neurogenic and neuroglycopenic hypoglycemic symptoms begin to occur, ultimately leading to brain dysfunction at levels <50 mg/dL (2.8 mmol/L) (19,20). […] Level 3 hypoglycemia is defined as a severe event characterized by altered mental and/or physical status requiring assistance. Severe hypoglycemia captures events during which the symptoms associated with hypoglycemia impact a patient to such a degree that the patient requires assistance from others (27,28). […] Hypoglycemia that sets in relatively rapidly, such as in the case of a significant insulin overdose, may induce level 2 or level 3 hypoglycemia with little warning (38).”
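
[The three levels above map onto a very simple decision rule; a minimal sketch is below, with thresholds in mg/dL. Note that level 3 is defined by the need for assistance rather than by any glucose value, so it has to be passed in as a flag.]

```python
def hypoglycemia_level(glucose_mg_dl, needs_assistance=False):
    if needs_assistance:
        return 3   # severe: altered mental/physical status requiring assistance
    if glucose_mg_dl < 54:
        return 2   # <54 mg/dL (3.0 mmol/L): needs immediate action
    if glucose_mg_dl < 70:
        return 1   # <70 but >=54 mg/dL (3.9 mmol/L): alert level
    return 0       # not hypoglycemic by these definitions

print([hypoglycemia_level(g) for g in (85, 65, 50)])  # [0, 1, 2]
```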

“The data regarding the effects of chronic hyperglycemia on long-term outcomes is conclusive, indicating that chronic hyperglycemia is a major contributor to morbidity and mortality in type 1 diabetes (41,43–45). […] Although the correlation between long-term poor glucose control and type 1 diabetes complications is well established, the impact of short-term hyperglycemia is not as well understood. However, hyperglycemia has been shown to have physiological effects and in an acute-care setting is linked to morbidity and mortality in people with and without diabetes. Short-term hyperglycemia, regardless of diabetes diagnosis, has been shown to reduce survival rates among patients admitted to the hospital with stroke or myocardial infarction (47,48). In addition to increasing mortality, short-term hyperglycemia is correlated with stroke severity and poststroke disability (49,50).

The effects of short-term hyperglycemia have also been observed in nonacute settings. Evidence indicates that hyperglycemia alters retinal cell firing through sensitization in patients with type 1 diabetes (51). This finding is consistent with similar findings showing increased oxygen consumption and blood flow in the retina during hyperglycemia. Because retinal cells absorb glucose through an insulin-independent process, they respond more strongly to increases in glucose in the blood than other cells in patients with type 1 diabetes. The effects of acute hyperglycemia on retinal response may underlie part of the development of retinopathy known to be a long-term complication of type 1 diabetes.”

“The Steering Committee defines hyperglycemia for individuals with type 1 diabetes as the following:

  • Level 1—elevated glucose: glucose >180 mg/dL (10 mmol/L) and glucose ≤250 mg/dL (13.9 mmol/L)

  • Level 2—very elevated glucose: glucose >250 mg/dL (13.9 mmol/L) […]

Elevated glucose is defined as a glucose concentration >180 mg/dL (10.0 mmol/L) but ≤250 mg/dL (13.9 mmol/L). In clinical practice, measures of hyperglycemia differ based on time of day (e.g., pre- vs. postmeal). This program, however, focused on defining outcomes for use in product development that are universally applicable. Glucose profiles and postprandial blood glucose data for individuals without diabetes suggest that 140 mg/dL (7.8 mmol/L) is the appropriate threshold for defining hyperglycemia. However, data demonstrate that the majority of individuals without diabetes exceed this threshold every day. Moreover, people with diabetes spend >60% of their day above this threshold, which suggests that 140 mg/dL (7.8 mmol/L) is too low of a threshold for measuring hyperglycemia in individuals with diabetes. Current clinical guidelines for people with diabetes indicate that peak prandial glucose should not exceed 180 mg/dL (10.0 mmol/L). As such, the Steering Committee identified 180 mg/dL (10.0 mmol/L) as the initial threshold defining elevated glucose. […]

Very elevated glucose is defined as a glucose concentration >250 mg/dL (13.9 mmol/L). Evidence examining the impact of hyperglycemia does not examine the incremental effects of increasing blood glucose. However, blood glucose values exceeding 250 mg/dL (13.9 mmol/L) increase the risk for DKA (58), and HbA1c readings at that level have been associated with a high likelihood of complications.”
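
[The corresponding hyperglycemia rule is equally simple; a minimal sketch, again with thresholds in mg/dL.]

```python
def hyperglycemia_level(glucose_mg_dl):
    if glucose_mg_dl > 250:
        return 2   # very elevated: >250 mg/dL (13.9 mmol/L)
    if glucose_mg_dl > 180:
        return 1   # elevated: >180 and <=250 mg/dL (10.0-13.9 mmol/L)
    return 0       # not hyperglycemic by these definitions

print([hyperglycemia_level(g) for g in (150, 200, 300)])  # [0, 1, 2]
```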

“An individual whose blood glucose levels rarely extend beyond the thresholds defined for hypo- and hyperglycemia is less likely to be subject to the short-term or long-term effects experienced by those with frequent excursions beyond one or both thresholds. It is also evident that if the intent of a given intervention is to safely manage blood glucose but the intervention does not reliably maintain blood glucose within safe levels, then the intervention should not be considered effective.

The time in range outcome is distinguished from traditional HbA1c testing in several ways (4,59). Time in range captures fluctuations in glucose levels continuously, whereas HbA1c testing is done at static points in time, usually months apart (60). Furthermore, time in range is more specific and sensitive than traditional HbA1c testing; for example, a treatment that addresses acute instances of hypo- or hyperglycemia may be detected in a time in range assessment but not necessarily in an HbA1c assessment. As a percentage, time in range is also more likely to be comparable across patients than HbA1c values, which are more likely to have patient-specific variations in significance (61). Finally, time in range may be more likely than HbA1c levels to correlate with PROs, such as quality of life, because the outcome is more representative of the whole patient experience (62). Table 3 illustrates how the concept of time in range differs from current HbA1c testing. […] [V]ariation in what is considered “normal” glucose fluctuations across populations, as well as what is realistically achievable for people with type 1 diabetes, must be taken into account so as not to make the target range definition too restrictive.”

“The Steering Committee defines time in range for individuals with type 1 diabetes as the following:

  • Percentage of readings in the range of 70–180 mg/dL (3.9–10.0 mmol/L) per unit of time

The Steering Committee considered it important to keep the time in range definition wide in order to accommodate variations across the population with type 1 diabetes — including different age-groups — but limited enough to preclude the possibility of negative outcomes. The upper and lower bounds of the time in range definition are consistent with the definitions for hypo- and hyperglycemia defined above. For individuals without type 1 diabetes, 70–140 mg/dL (3.9–7.8 mmol/L) represents a normal glycemic range (66). However, spending most of the day in this range is not generally achievable for people with type 1 diabetes […] To date, there is limited research correlating time in range with positive short-term and long-term type 1 diabetes outcomes, as opposed to the extensive research demonstrating the negative consequences of excursions into hyper- or hypoglycemia. More substantial evidence demonstrating a correlation or a direct causative relationship between time in range for patients with type 1 diabetes and positive health outcomes is needed.”
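
[Time in range is straightforward to compute from a series of (e.g. CGM) readings; a minimal sketch with invented readings:]

```python
def time_in_range(readings_mg_dl, low=70, high=180):
    """Percentage of readings within [low, high] mg/dL."""
    in_range = sum(low <= g <= high for g in readings_mg_dl)
    return 100.0 * in_range / len(readings_mg_dl)

readings = [62, 95, 110, 150, 185, 210, 130, 78, 55, 160]
print(f"{time_in_range(readings):.0f}% of readings in range")  # 60%
```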

“DKA is often associated with hyperglycemia. In most cases, in an individual with diabetes, the cause of hyperglycemia is also the cause of DKA, although the two conditions are distinct. DKA develops when a lack of glucose in cells prompts the body to begin breaking down fatty acid reserves. This increases the levels of ketones in the body (ketosis) and causes a drop in blood pH (acidosis). At its most severe, DKA can cause cerebral edema, acute respiratory distress, thromboembolism, coma, and death (69,70). […] Although the current definition for DKA includes a list of multiple criteria that must be met, not all information currently included in the accepted definition is consistently gathered or required to diagnose DKA. The Steering Committee defines DKA in individuals with type 1 diabetes in a clinical setting as the following:

  • Elevated serum or urine ketones (greater than the upper limit of the normal range), and

  • Serum bicarbonate <15 mmol/L or blood pH <7.3

Given the seriousness of DKA, it is unnecessary to stratify DKA into different levels or categories, as the presence of DKA—regardless of the differences observed in the separate biochemical tests—should always be considered serious. In individuals with known diabetes, plasma glucose values are not necessary to diagnose DKA. Further, new therapeutic agents, specifically sodium–glucose cotransporter 2 inhibitors, have been linked to euglycemic DKA, or DKA with blood glucose values <250 mg/dL (13.9 mmol/L).”
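
[The consensus definition above amounts to a two-part check: elevated ketones plus biochemical evidence of acidosis, with glucose deliberately left out. A minimal sketch:]

```python
def meets_dka_definition(ketones_elevated, bicarbonate_mmol_l=None, ph=None):
    """Elevated serum/urine ketones AND (serum bicarbonate <15 mmol/L OR blood pH <7.3)."""
    acidosis = ((bicarbonate_mmol_l is not None and bicarbonate_mmol_l < 15)
                or (ph is not None and ph < 7.3))
    return bool(ketones_elevated and acidosis)

print(meets_dka_definition(True, bicarbonate_mmol_l=12))           # True
print(meets_dka_definition(True, bicarbonate_mmol_l=20, ph=7.35))  # False
```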

“In guidance released in 2009 (72), the U.S. Food and Drug Administration (FDA) defined PROs as “any report of the status of a patient’s health condition that comes directly from the patient, without interpretation of the patient’s response by a clinician or anyone else.” In the same document, the FDA clearly acknowledged the importance of PROs, advising that they be used to gather information that is “best known by the patient or best measured from the patient perspective.”

Measuring and using PROs is increasingly seen as essential to evaluating care from a patient-centered perspective […] Given that type 1 diabetes is a chronic condition primarily treated on an outpatient basis, much of what people with type 1 diabetes experience is not captured through standard clinical measurement. Measures that capture PROs can fill these important information gaps. […] The use of validated PROs in type 1 diabetes clinical research is not currently widespread, and challenges to effectively measuring some PROs, such as quality of life, continue to confront researchers and developers.”

February 20, 2018 Posted by | Cardiology, Diabetes, Medicine, Neurology, Ophthalmology, Studies | Leave a comment

Some things you need to know about machine learning but didn’t know whom to ask (the grad school version)

Some links to stuff related to the lecture’s coverage:
An overview of gradient descent optimization algorithms.
Rectifier (neural networks) [ReLU].
Backpropagation.
Escaping From Saddle Points – Online Stochastic Gradient for Tensor Decomposition (Ge et al.).
How to Escape Saddle Points Efficiently (closely related to the paper above, presumably one of the ‘recent improvements’ mentioned in the lecture).
Linear classifier.
Concentration inequality.
A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks (Neyshabur et al.).
Off the convex path (the lecturer’s blog).

February 19, 2018 Posted by | Computer science, Lectures, Mathematics | Leave a comment

Prevention of Late-Life Depression (II)

Some more observations from the book:

In contrast to depression in childhood and youth when genetic and developmental vulnerabilities play a significant role in the development of depression, the development of late-life depression is largely attributed to its interactions with acquired factors, especially medical illness [17, 18]. An analysis of the WHO World Health Survey indicated that the prevalence of depression among medical patients ranged from 9.3 to 23.0 %, significantly higher than that in individuals without medical conditions [19]. Wells et al. [20] found in the Epidemiologic Catchment Area Study that the risk of developing lifetime psychiatric disorders among individuals with at least one medical condition was 27.9 % higher than among those without medical conditions. […] Depression and disability mutually reinforce the risk of each other, and adversely affect disease progression and prognosis [21, 25]. […] disability caused by medical conditions serves as a risk factor for depression [26]. When people lose their normal sensory, motor, cognitive, social, or executive functions, especially in a short period of time, they can become very frustrated or depressed. Inability to perform daily tasks as before decreases self-esteem, reduces independence, increases the level of psychological stress, and creates a sense of hopelessness. On the other hand, depression increases the risk for disability. Negative interpretation, attention bias, and learned hopelessness of depressed persons may increase risky health behaviors that exacerbate physical disorders or disability. Meanwhile, depression-related cognitive impairment also affects role performance and leads to functional disability [25]. For example, Egede [27] found in the 1999 National Health Interview Survey that the risk of having functional disability among patients with the comorbidity of diabetes and depression were approximately 2.5–5 times higher than those with either depression or diabetes alone. […]  A leading cause of disability among medical patients is pain and pain-related fears […] Although a large proportion of pain complaints can be attributed to physiological changes from physical disorders, psychological factors (e.g., attention, interpretation, and coping skills) play an important role in perception of pain […] Bair et al. [31] indicated in a literature review that the prevalence of pain was higher among depressed patients than non-depressed patients, and the prevalence of major depression was also higher among pain patients comparing to those without pain complaints.”

Alcohol use has more serious adverse health effects on older adults than other age groups, since aging-related physiological changes (e.g. reduced liver detoxification and renal clearance) affect alcohol metabolism, increase the blood concentration of alcohol, and magnify negative consequences. More importantly, alcohol interacts with a variety of frequently prescribed medications potentially influencing both treatment and adverse effects. […] Due to age-related changes in pharmacokinetics and pharmacodynamics, older adults are a vulnerable population to […] adverse drug effects. […] Adverse drug events are frequently due to failure to adjust dosage or to account for drug–drug interactions in older adults [64]. […] Loneliness […] is considered as an independent risk factor for depression [46, 47], and has been demonstrated to be associated with low physical activity, increased cardiovascular risks, hyperactivity of the hypothalamic-pituitary-adrenal axis, and activation of immune response [for details, see Cacioppo & Patrick’s book on these topics – US] […] Hopelessness is a key concept of major depression [54], and also an independent risk factor of suicidal ideation […] Hopelessness reduces expectations for the future, and negatively affects judgment for making medical and behavioral decisions, including non-adherence to medical regimens or engaging in unhealthy behaviors.”

Co-occurring depression and medical conditions are associated with more functional impairment and mortality than expected from the severity of the medical condition alone. For example, depression accompanying diabetes confers increased functional impairment [27], complications of diabetes [65, 66], and mortality [67–71]. Frasure-Smith and colleagues highlighted the prognostic importance of depression among persons who had sustained a myocardial infarction (MI), finding that depression was a significant predictor of mortality at both 6 and 18 months post MI [72, 73]. Subsequent follow-up studies have borne out the increased risk conferred by depression on the mortality of patients with cardiovascular disease [10, 74, 75]. Over the course of a 2-year follow-up interval, depression contributed as much to mortality as did myocardial infarction or diabetes, with the population attributable fraction of mortality due to depression approximately 13 % (similar to the attributable risk associated with heart attack at 11 % and diabetes at 9 %) [76]. […] Although the bidirectional relationship between physical disorders and depression has been well known, there are still relatively few randomized controlled trials on preventing depression among medically ill patients. […] Rates of attrition [in post-stroke depression prevention trials have been observed to be] high […] Stroke, acute coronary syndrome, cancer, and other conditions impose a variety of treatment burdens on patients so that additional interventions without direct or immediate clinical effects may not be acceptable [95]. So even with good participation rates, lack of adherence to the intervention might limit effects.”
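
[For readers unfamiliar with the population attributable fraction mentioned above: under Levin's formula it is p(RR − 1)/(1 + p(RR − 1)), with p the exposure prevalence and RR the relative risk. The inputs below are invented and chosen only so the result lands near the ~13% figure quoted; they are not taken from the study, and the study's own estimation method may well differ.]

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: PAF = p*(RR - 1) / (1 + p*(RR - 1))."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

print(f"{population_attributable_fraction(0.10, 2.5):.0%}")  # ~13%
```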

Late-life depression (LLD) is a heterogeneous disease, with multiple risk factors, etiologies, and clinical features. It has been recognized for many years that there is a significant relationship between the presence of depression and cerebrovascular disease in older adults [1, 2]. This subtype of LLD was eventually termed “vascular depression.” […] There have been a multitude of studies associating white matter abnormalities with depression in older adults using MRI technology to visualize lesions, or what appear as hyperintensities in the white matter on T2-weighted scans. A systematic review concluded that white matter hyperintensities (WMH) are more common and severe among older adults with depression compared to their non-depressed peers [9]. […] WMHs are associated with older age [13] and cerebrovascular risk factors, including diabetes, heart disease, and hypertension [14–17]. White matter severity and extent of WMH volume have been related to the severity of depression in late life [18, 19]. For example, among 639 older, community-dwelling adults, white matter lesion (WML) severity was found to predict depressive episodes and symptoms over a 3-year period [19]. […] Another way of investigating white matter integrity is with diffusion tensor imaging (DTI), which measures the diffusion of water in tissues and provides indirect evidence of the microstructure of white matter, most commonly represented as fractional anisotropy (FA) and mean diffusivity (MD). DTI may be more sensitive to white matter pathology than is quantification of WMH […] A number of studies have found lower FA in widespread regions among individuals with LLD relative to controls [34, 36, 37]. […] lower FA has been associated with poorer performance on measures of cognitive functioning among patients with LLD [35, 38–40] and with measures of cerebrovascular risk severity. […] It is important to recognize that FA reflects the organization of fiber tracts, including fiber density, axonal diameter, or myelination in white matter. Thus, lower FA can result from multiple pathophysiological sources [42, 43]. […] Together, the aforementioned studies provide support for the vascular depression hypothesis. They demonstrate that white matter integrity is reduced in patients with LLD relative to controls, is somewhat specific to regions important for cognitive and emotional functioning, and is associated with cognitive functioning and depression severity. […] There is now a wealth of evidence to support the association between vascular pathology and depression in older age. While the etiology of depression in older age is multifactorial, from the epidemiological, neuroimaging, behavioral, and genetic evidence available, we can conclude that vascular depression represents one important subtype of LLD. The mechanisms underlying the relationship between vascular pathology and depression are likely multifactorial, and may include disrupted connections between key neural regions, reduced perfusion of blood to key brain regions integral to affective and cognitive processing, and inflammatory processes.”

Cognitive changes associated with depression have been the focus of research for decades. Results have been inconsistent, likely as a result of methodological differences in how depression is diagnosed and cognitive functioning measured, as well as the effects of potential subtypes and the severity of depression […], though deficits in executive functioning, learning and memory, and attention have been associated with depression in most studies [75]. In older adults, additional confounding factors include the potential presence of primary degenerative disorders, such as Alzheimer’s disease, which can pose a challenge to differential diagnosis in its early stages. […] LLD with cognitive dysfunction has been shown to result in greater disability than depressive symptoms alone [6], and MCI [mild cognitive impairment, US] with co-occurring LLD has been shown to double the risk of developing Alzheimer’s disease (AD) compared to MCI alone [86]. The conversion from MCI to AD also appears to occur earlier in patients with co-occurring depressive symptoms, as demonstrated by Modrego & Ferrandez [86] in their prospective cohort study of 114 outpatients diagnosed with amnestic MCI. […] Given accruing evidence for abnormal functioning of a number of cortical and subcortical networks in geriatric depression, of particular interest is whether these abnormalities are a reflection of the actively depressed state, or whether they may persist following successful resolution of symptoms. To date, studies have investigated this question through either longitudinal investigation of adults with geriatric depression, or comparison of depressed elders who are actively depressed versus those who have achieved symptom remission. Encouragingly, successful treatment has been reliably associated with normalization of some aspects of disrupted network functioning. For example, successful antidepressant treatment is associated with reduction of the elevated cerebral glucose metabolism observed during depressed states (e.g., [71–74]), with greater symptom reduction associated with greater metabolic change […] Taken together, these studies suggest that although a subset of the functional abnormalities observed during the LLD state may resolve with successful treatment, other abnormalities persist and may be tied to damage to the structural connectivity in important affective and cognitive networks. […] studies suggest a chronic decrement in cognitive functioning associated with LLD that is not adequately addressed through improvement of depressive symptoms alone.”

A review of the literature on evidence-based treatments for LLD found that about 50 % of patients improved on antidepressants, but that the number needed to treat (NNT) was quite high (NNT = 8, [139]) and placebo effects were significant [140]. Additionally, no difference was demonstrated in the effectiveness of one antidepressant drug class over another […], and in one-third of patients, depression was resistant to monotherapy [140]. The addition of medications or switching within or between drug classes appears to result in improved treatment response for these patients [140, 141]. A meta-analysis of patient-level variables demonstrated that duration of depressive symptoms and baseline depression severity significantly predict response to antidepressant treatment in LLD, with chronically depressed older patients with moderate-to-severe symptoms at baseline experiencing more improvement in symptoms than mildly and acutely depressed patients [142]. Pharmacological treatment response appears to range from incomplete to poor in LLD with co-occurring cognitive impairment.”
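As an aside, the NNT and responder-rate figures quoted above can be tied together with a bit of arithmetic (mine, not the authors'): since NNT = 1/(absolute difference in responder rates), an NNT of 8 combined with a ~50 % response rate on antidepressants implies a control/placebo response rate of roughly 37–38 %. A minimal sketch of that back-calculation, using only the two quoted figures as inputs:

# Back-of-the-envelope link between NNT and responder rates.
# NNT = 1 / (p_treatment - p_control); the two inputs below are the figures
# quoted in the text, the control-group rate is the implied back-calculation.
p_treatment = 0.50      # ~50 % responder rate on antidepressants (from the text)
nnt = 8                 # number needed to treat (from the text)

absolute_risk_difference = 1 / nnt                          # = 0.125
implied_p_control = p_treatment - absolute_risk_difference

print(f"absolute risk difference: {absolute_risk_difference:.3f}")
print(f"implied control/placebo responder rate: {implied_p_control:.3f}")   # ~0.375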

“[C]ompared to other formulations of prevention, such as primary, secondary, or tertiary — in which interventions are targeted at the level of disease/stage of disease — the IOM conceptual framework involves interventions that are targeted at the level of risk in the population [2]. […] [S]elective prevention studies have an important “numbers” advantage — similar to that of indicated prevention trials: the relatively high incidence of depression among persons with key risk markers enables investigator to test interventions with strong statistical power, even with somewhat modest sample sizes. This fact was illustrated by Schoevers and colleagues [3], in which the authors were able to account for nearly 50 % of total risk of late-life depression with consideration of only a handful of factors. Indeed, research, largely generated by groups in the Netherlands and the USA, has identified that selective prevention may be one of the most efficient approaches to late-life depression prevention, as they have estimated that targeting persons at high risk for depression — based on risk markers such as medical comorbidity, low social support, or physical/functional disability — can yield theoretical numbers needed to treat (NNTs) of approximately 5–7 in primary care settings [4–7]. […] compared to the findings from selective prevention trials targeting older persons with general health/medical problems, […] trials targeting older persons based on sociodemographic risk factors have been more mixed and did not reveal as consistent a pattern of benefits for selective prevention of depression.”
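The "numbers advantage" the authors describe is easy to see with a standard two-proportion sample-size calculation. The sketch below is mine, not the book's; the 35 % relative reduction in incidence (roughly the midpoint of the 25–50 % range quoted earlier), the baseline incidence figures, and the conventional error rates (two-sided alpha = 0.05, power = 0.80) are all assumptions chosen only for illustration:

# Required sample size per arm to detect a given relative reduction in
# depression incidence, using the standard two-proportion formula.
# alpha = 0.05 (two-sided), power = 0.80; all inputs are assumptions.
from statistics import NormalDist

z_alpha = NormalDist().inv_cdf(1 - 0.05 / 2)   # ~1.96
z_beta = NormalDist().inv_cdf(0.80)            # ~0.84
relative_reduction = 0.35                      # assumed preventive effect

for p_control in (0.02, 0.10, 0.25):           # baseline 1-year incidence of depression
    p_intervention = p_control * (1 - relative_reduction)
    diff = p_control - p_intervention
    n_per_arm = ((z_alpha + z_beta) ** 2
                 * (p_control * (1 - p_control) + p_intervention * (1 - p_intervention))
                 / diff ** 2)
    print(f"baseline incidence {p_control:.0%}: ~{n_per_arm:,.0f} participants per arm")

With these (assumed) inputs the required sample falls from several thousand per arm at a 2 % baseline incidence to a few hundred per arm at 25 %, which is the point about high-risk target groups and statistical power.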

Few of the studies in the existing literature that involve interventions to prevent depression and/or reduce depressive symptoms in older populations have included economic evaluations [13]. The identification of cost-effective interventions to provide to groups at high risk for depression is an important public health goal, as such treatments may avert or reduce a significant amount of the disease burden. […] A study by Katon and colleagues [8] showed that elderly patients with either subsyndromal or major depression had significantly higher medical costs during the previous 6 months than those without depression; total healthcare costs were $1,045 to $1,700 greater, and total outpatient/ambulatory costs were $763 to $979 more, on average. Depressed patients had greater usage of health resources in every category of care examined, including those that are not mental health-related, such as emergency department visits. No difference in excess costs was found between patients with a DSM-IV depressive disorder and those with depressive symptoms only, however, as mean total costs were 51 % higher in the subthreshold depression group (95 % CI = 1.39–1.66) and 49 % higher in the MDD/dysthymia group (95 % CI = 1.28–1.72) than in the nondepressed group [8]. In a similar study, the usage of various types of health services by primary care patients in the Netherlands was assessed, and average costs were determined to be 1,403 more in depressed individuals versus control patients [21]. Study investigators once again observed that patients with depression had greater utilization of both non-mental and mental healthcare services than controls.”

“In order for routine depression screening in the elderly to be cost-effective […] appropriate follow-up measures must be taken with those who screen positive, including a diagnostic interview and/or referral to a mental health professional [this – the necessity/requirement of proper follow-up following screens in order for screening to be cost-effective – is incidentally a standard result in screening contexts, see also Juth & Munthe’s book – US] [23, 25]. For example, subsequent steps may include initiation of psychotherapy or antidepressant treatment. Thus, one reason that the USPSTF does not recommend screening for depression in settings where proper mental health resources do not exist is that the evidence suggests that outcomes are unlikely to improve without effective follow-up care […]  as per the USPSTF suggestion, Medicare will only cover the screening when the appropriate supports for proper diagnosis and treatment are available […] In order to determine which interventions to prevent and treat depression should be provided to those who screen positive for depressive symptoms and to high-risk populations in general, cost-effectiveness analyses must be completed for a variety of different treatments and preventive measures. […] questions remain regarding whether annual versus other intervals of screening are most cost-effective. With respect to preventive interventions, the evidence to date suggests that these are cost-effective in settings where those at the highest risk are targeted.”

February 19, 2018 Posted by | Books, Cardiology, Diabetes, Health Economics, Neurology, Pharmacology, Psychiatry, Psychology | Leave a comment

Systems Biology (III)

Some observations from chapter 4 below:

The need to maintain a steady state ensuring homeostasis is an essential concern in nature, while the negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible by the limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance, ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = 4/3 π r³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs. It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
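The dilution argument is easy to make concrete (my arithmetic, not the book's): for a fixed number of product molecules, concentration falls as 1/r³, so keeping concentrations constant requires synthesis to scale with volume. A quick illustration, using an arbitrary reference radius:

# Concentration dilution with cell radius, for a fixed amount of product
# dispersed in a spherical cell: C is proportional to 1 / V = 3 / (4 * pi * r**3).
from math import pi

def volume(radius):
    return 4 / 3 * pi * radius ** 3

r0 = 10.0                           # arbitrary reference radius
for growth in (1.0, 1.1, 1.26, 1.5):
    factor = volume(r0 * growth) / volume(r0)   # volume increase = dilution factor
    print(f"radius x{growth:.2f}: volume (and required extra synthesis) x{factor:.2f}")

A 26 % increase in radius already doubles the volume, i.e. halves every dispersed concentration unless synthesis doubles too.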

An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involves numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”
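A toy simulation makes it easier to see how a receptor-effector pair of this kind holds a product concentration near a set point. In the sketch below (mine, not the authors'; all parameter values are arbitrary), synthesis by the effector chain is inhibited by the product itself through a Hill-type term standing in for the allosteric receptor, and the product is removed at a constant relative rate; different starting concentrations converge to the same steady state:

# Toy negative feedback loop: product P inhibits its own synthesis via a
# Hill-type term (a stand-in for the allosteric receptor controlling the
# initial, regulatory enzyme of the effector chain); P is also removed at a
# constant relative rate. Simple Euler integration; all parameters arbitrary.
v_max = 1.0     # maximal synthesis rate of the effector chain
K = 1.0         # product concentration at which synthesis is half-inhibited
n = 4           # Hill coefficient (cooperativity of the allosteric transition)
k_deg = 0.3     # first-order degradation/consumption rate
dt = 0.01       # integration step

def simulate(p0, steps=3000):
    p = p0
    for _ in range(steps):
        synthesis = v_max / (1 + (p / K) ** n)   # receptor-inhibited production
        p += dt * (synthesis - k_deg * p)        # net change in concentration
    return p

# Different starting concentrations converge to the same steady state,
# which is the defining behaviour of a negative feedback loop.
for p0 in (0.0, 0.5, 3.0):
    print(f"start at {p0:.1f} -> steady state ~{simulate(p0):.3f}")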

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by the organism’s signals and their dilution (compared to intracellular ones) call for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originating at the organism level travel with body fluids; however, if a signal has to reach its destination very rapidly (for instance in muscle control) it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes; however, instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop; however, in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
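The amplification figures quoted above, and the resource-exhaustion point about positive feedback, can both be illustrated with a few lines of arithmetic. The per-stage gains, pool size and rate constant below are made-up round numbers, not values from the book; the point is only that cascade gain is (roughly) the product of the per-stage gains, and that a positive feedback loop accelerates until its substrate pool runs out:

# 1) Cascade amplifier: total gain is (roughly) the product of per-stage gains.
stage_gains = [1e3, 1e3, 1e4]          # made-up per-stage amplification factors
total_gain = 1.0
for gain in stage_gains:
    total_gain *= gain
print(f"overall cascade gain: {total_gain:.0e}")   # 1e+10

# 2) Positive feedback: each active molecule activates more of the inactive
#    pool, so activation accelerates until the pool is exhausted - resource
#    exhaustion is the only brake in this toy version.
inactive, active = 1_000_000.0, 1.0
rate = 2e-6                            # activation events per (active * inactive) per step
for step in range(1, 31):
    newly_active = min(rate * active * inactive, inactive)
    active += newly_active
    inactive -= newly_active
    if step % 5 == 0:
        print(f"step {step:2d}: active molecules ~ {active:,.0f}")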

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two antagonistic hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry risks associated with the possibility of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the Vibrio cholerae toxin, which causes prolonged activation of intestinal epithelial cells by locking the G protein in its active state (resulting in severe diarrhea which can dehydrate the organism).”

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphatic cells. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism, however apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph, with vertex lengths corresponding to the relative duration of interactions, would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), the interactome (complex-forming proteins), the kinome (proteins which belong to the kinase family) etc.”
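One way to picture the authors' proposal – the negative feedback loop, rather than the individual protein, as the proteome's basic unit – is as a collection of loop objects grouped into functional subproteomes, with steering signals as the links between loops. The toy structure below is purely illustrative; the particular loops, members and cross-links are my own examples, not taken from the book:

# Toy representation of a proteome whose basic units are negative feedback
# loops rather than individual proteins. The specific loops, members and
# cross-links below are invented examples.
from dataclasses import dataclass

@dataclass
class FeedbackLoop:
    name: str
    receptor: str           # detector, e.g. an allosteric regulatory enzyme
    effector_chain: list    # enzymes acting in sequence
    subproteome: str        # functional subgroup: "metabolome", "kinome", ...

loops = [
    FeedbackLoop("glycolysis control", "phosphofructokinase",
                 ["aldolase", "GAPDH", "pyruvate kinase"], "metabolome"),
    FeedbackLoop("growth-factor signalling", "receptor tyrosine kinase",
                 ["RAF", "MEK", "ERK"], "kinome"),
]

# Steering signals linking one loop to another (purely illustrative).
signals = {("growth-factor signalling", "glycolysis control"): "increased energy demand"}

by_subproteome = {}
for loop in loops:
    by_subproteome.setdefault(loop.subproteome, []).append(loop.name)
print(by_subproteome)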

February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine | Leave a comment

Prevention of Late-Life Depression (I)

Late-life depression is a common and highly disabling condition and is also associated with higher health care utilization and overall costs. The presence of depression may complicate the course and treatment of comorbid major medical conditions that are also highly prevalent among older adults — including diabetes, hypertension, and heart disease. Furthermore, a considerable body of evidence has demonstrated that, for older persons, residual symptoms and functional impairment due to depression are common — even when appropriate depression therapies are being used. Finally, the worldwide phenomenon of a rapidly expanding older adult population means that unprecedented numbers of seniors — and the providers who care for them — will be facing the challenge of late-life depression. For these reasons, effective prevention of late-life depression will be a critical strategy to lower overall burden and cost from this disorder. […] This textbook will illustrate the imperative for preventing late-life depression, introduce a broad range of approaches and key elements involved in achieving effective prevention, and provide detailed examples of applications of late-life depression prevention strategies”.

I gave the book two stars on goodreads. There are 11 chapters in the book, written by 22 different contributors/authors, so of course there’s a lot of variation in the quality of the material included; the two star rating was an overall assessment of the quality of the material, and the last two chapters – but in particular chapter 10 – did a really good job convincing me that the book did not deserve a 3rd star (if you decide to read the book, I advise you to skip chapter 10). In general I think many of the authors are way too focused on statistical significance and much too hesitant to report actual effect sizes, which are much more interesting. Gender is mentioned repeatedly throughout the coverage as an important variable, to the extent that people who do not read the book carefully might think this is one of the most important variables at play; but when you look at actual effect sizes, you get reported ORs of ~1.4 for this variable, compared to e.g. ORs in the ~8-9 range for the bereavement variable (see below). You can quibble about population attributable fraction and so on here, but if the effect size is that small it’s unlikely to be all that useful in terms of directing prevention efforts/resource allocation (especially considering that women make up the majority of the total population in these older age groups anyway, as they have higher life expectancy than their male counterparts).
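To put some rough numbers on the PAF quibble: using Levin's formula, PAF = p(RR-1)/(1 + p(RR-1)), and treating the quoted ORs as approximate relative risks, a very common exposure with a small OR and a rare exposure with a large OR can land in the same ballpark. The prevalences below are made-up round numbers used only for illustration:

# Levin's formula for the population attributable fraction (PAF), treating
# the quoted ORs as rough relative-risk estimates. The exposure prevalences
# are invented round numbers, used only for illustration.
def paf(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

print(f"female gender (OR ~1.4, prevalence ~55 %): PAF ~ {paf(0.55, 1.4):.0%}")
print(f"recent bereavement (OR ~8.8, prevalence ~5 %): PAF ~ {paf(0.05, 8.8):.0%}")

So the PAFs can end up in the same ballpark, which is the quibble; my point is rather that for actually targeting a selective intervention it is the absolute risk in the exposed group (and hence the achievable NNT) that matters, and on that score a rare, strong risk factor like recent bereavement is far more useful than gender.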

Anyway, below I’ve added some quotes and observations from the first few chapters of the book.

Meta-analyses of more than 30 randomized trials conducted in the High Income Countries show that the incidence of new depressive and anxiety disorders can be reduced by 25–50 % over 1–2 years, compared to usual care, through the use of learning-based psychotherapies (such as interpersonal psychotherapy, cognitive behavioral therapy, and problem solving therapy) […] The case for depression prevention is compelling and represents the key rationale for this volume: (1) Major depression is both prevalent and disabling, typically running a relapsing or chronic course. […] (2) Major depression is often comorbid with other chronic conditions like diabetes, amplifying the disability associated with these conditions and worsening family caregiver burden. (3) Depression is associated with worse physical health outcomes, partly mediated through poor treatment adherence, and it is associated with excess mortality after myocardial infarction, stroke, and cancer. It is also the major risk factor for suicide across the life span and particularly in old age. (4) Available treatments are only partially effective in reducing symptom burden, sustaining remission, and averting years lived with disability.”

“[M]any people suffering from depression do not receive any care and approximately a third of those receiving care do not respond to current treatments. The risk of recurrence is high, also in older persons: half of those who have experienced a major depression will experience one or even more recurrences [4]. […] Depression increases the risk of death: among people suffering from depression the risk of dying is 1.65 times higher than among people without a depression [7], with a dose-response relation between severity and duration of depression and the resulting excess mortality [8]. In adults, the average length of a depressive episode is 8 months, but among 20 % of people the depression lasts longer than 2 years [9]. […] It has been estimated that in Australia […] 60 % of people with an affective disorder receive treatment, and using guidelines and standards only 34 % receive effective treatment [14]. This translates into preventing 15 % of Years Lived with Disability [15], a measure of disease burden [14], and stresses the need for prevention [16]. Primary health care providers frequently do not recognize depression, in particular among the elderly. Older people may present their depressive symptoms differently from younger adults, with more emphasis on physical complaints [17, 18]. Adequate diagnosis of late-life depression can also be hampered by comorbid conditions such as Parkinson’s disease and dementia that may have similar symptoms, or by the fact that elderly people as well as care workers may assume that “feeling down” is part of becoming older [17, 18]. […] Many people suffering from depression do not seek professional help or are not identified as depressed [21]. Almost 14 % of elderly people in community-type living suffer from a severe depression requiring clinical attention [22] and more than 50 % of those have a chronic course [4, 23]. Smit et al. reported an incidence of 6.1 % of chronic or recurrent depression among a sample of 2,200 elderly people (ages 55–85) [21].”

“Prevention differs from intervention and treatment as it is aimed at general population groups who vary in risk level for mental health problems such as late-life depression. The Institute of Medicine (IOM) has introduced a prevention framework, which provides a useful model for comprehending the different objectives of the interventions [29]. The overall goal of prevention programs is reducing risk factors and enhancing protective factors.
The IOM framework distinguishes three types of prevention interventions: (1) universal preventive interventions, (2) selective preventive interventions, and (3) indicated preventive interventions. Universal preventive interventions are targeted at the general audience, regardless of their risk status or the presence of symptoms. Selective preventive interventions serve those sub-populations who have a significantly higher than average risk of a disorder, either imminently or over a lifetime. Indicated preventive interventions target identified individuals with minimal but detectable signs or symptoms suggesting a disorder. This type of prevention consists of early recognition of and early intervention in disease to prevent deterioration [30]. For each of the three types of interventions, the goal is to reduce the number of new cases. The goal of treatment, on the other hand, is to reduce prevalence or the total number of cases. By reducing incidence you also reduce prevalence [5]. […] prevention research differs from treatment research in various ways. One of the most important differences is the fact that participants in treatment studies already meet the criteria for the illness being studied, such as depression. The intervention is targeted at achieving improvement or remission of the specific condition more quickly than if no intervention had taken place. In prevention research, the participants do not meet the specific criteria for the illness being studied and the overall goal of the intervention is to ensure that the clinical illness develops at a lower rate than in a comparison group [5].”

A couple of risk factors [for depression] occur more frequently among the elderly than among young adults. The loss of a loved one or the loss of a social role (e.g., employment), decrease of social support and network, and the increasing chance of isolation occur more frequently among the elderly. Many elderly also suffer from physical diseases: 64 % of the elderly aged 65–74 have a chronic disease [36] […]. It is important to note that depression often co-occurs with other disorders such as physical illness and other mental health problems (comorbidity). Losing a spouse can have significant mental health effects. Almost half of all widows and widowers during the first year after the loss meet the criteria for depression according to the DSM-IV [37]. Depression after loss of a loved one is normal in times of mourning. However, when depressive symptoms persist over a longer period of time it is possible that a depression is developing. Zisook and Shuchter found that a year after the loss of a spouse 16 % of widows and widowers met the criteria for depression compared to 4 % of those who did not lose their spouse [38]. […] People with a chronic physical disease are also at a higher risk of developing a depression. An estimated 12–36 % of those with a chronic physical illness also suffer from clinical depression [40]. […] around 25 % of cancer patients suffer from depression [40]. […] Depression is relatively common among elderly residing in hospitals and retirement- and nursing homes. An estimated 6–11 % of residents have a depressive illness and around 30 % have depressive symptoms [41]. […] Loneliness is common among the elderly. Among those of 60 years or older, 43 % reported being lonely in a study conducted by Perissinotto et al. […] Loneliness is often associated with physical and mental complaints; apart from depression it is also associated with an increased risk of developing dementia and with excess mortality [43].”

From the public health perspective it is important to know what the potential health benefits would be if the harmful effect of certain risk factors could be removed. What health benefits would arise from this, and at what effort and cost? To measure this the population attributable fraction (PAF) can be used. The PAF is expressed as a percentage and indicates by how much the incidence (the number of new cases) would fall if the harmful effects of the targeted risk factors were fully removed. For public health it would be more effective to design an intervention targeted at a risk factor with a high PAF than a low PAF. […] An intervention needs to be efficacious in order to be implemented; this means that it has to show a statistically significant difference from placebo or another treatment. Secondly, it needs to be effective; it needs to prove its benefits also in real life (“everyday care”) circumstances. Thirdly, it needs to be efficient. The measure to address this is the Number Needed to Treat (NNT). The NNT expresses how many people need to be treated to prevent the onset of one new case of the disorder; the lower the number, the more efficient the intervention [45]. To summarize, an indicated preventative intervention would ideally be targeted at a relatively small group of people with a high absolute chance of developing the disease, and a risk profile that is responsible for a high PAF. Furthermore, there needs to be an intervention that is both effective and efficient. […] a more detailed and specific description of the target group results in a higher absolute risk, a lower NNT, and also a lower PAF. This is helpful in determining the costs and benefits of interventions aiming at more specific or broader subgroups in the population. […] Unfortunately very large samples are required to demonstrate reductions in universal or selective interventions [46]. […] If the incidence rate is higher in the target population, which is usually the case in selective and even more so in indicated prevention, the number of participants needed to prove an effect is much smaller [5]. This shows that, even though universal interventions may be effective, their effect is harder to prove than that of indicated prevention. […] Indicated and selective preventions appear to be the most successful in preventing depression to date; however, more research needs to be conducted in larger samples to determine which prevention method is really most effective.”
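The trade-off described in that last part – narrower targeting giving a higher absolute risk and a lower NNT, but also a lower PAF – can be made concrete with some invented round numbers. None of the figures below come from the book; the 30 % preventive effect, the group sizes and the incidence rates are assumptions chosen only to show the direction of the effect (the "share of all new cases averted" column plays the role of the PAF point in the quote):

# Illustration of the trade-off described above: narrowing the target group
# raises the absolute risk (lowering the NNT) but shrinks the share of all
# new cases the intervention can reach. All numbers are invented; the assumed
# intervention prevents 30 % of onsets in whoever receives it.
relative_risk_reduction = 0.30
overall_incidence = 0.03            # assumed 1-year incidence in the whole population

groups = [
    # (label, share of the population targeted, 1-year incidence in that group)
    ("universal: all older adults",      1.00, 0.03),
    ("selective: high-risk subgroup",    0.10, 0.15),
    ("indicated: subthreshold symptoms", 0.03, 0.30),
]

for label, pop_share, incidence in groups:
    nnt = 1 / (incidence * relative_risk_reduction)
    averted_share = pop_share * incidence * relative_risk_reduction / overall_incidence
    print(f"{label:35s} NNT ~ {nnt:4.0f}, share of all new cases averted ~ {averted_share:.0%}")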

Groffen et al. [6] recently conducted an investigation among a sample of 4,809 participants from the Reykjavik Study (aged 66–93 years). Similar to the findings presented by Vink and colleagues [3], education level was related to depression risk: participants with lower education levels were more likely to report depressed mood in late-life than those with a college education (odds ratio [OR] = 1.87, 95 % confidence interval [CI] = 1.35–2.58). […] Results from a meta-analysis by Lorant and colleagues [8] showed that lower SES individuals had a greater odds of developing depression than those in the highest SES group (OR = 1.24, p= 0.004); however, the studies involved in this review did not focus on older populations. […] Cole and Dendukuri [10] performed a meta-analysis of studies involving middle-aged and older adult community residents, and determined that female gender was a risk factor for depression in this population (Pooled OR = 1.4, 95 % CI = 1.2–1.8), but not old age. Blazer and colleagues [11] found a significant positive association between older age and depressive symptoms in a sample consisting of community-dwelling older adults; however, when potential confounders such as physical disability, cognitive impairment, and gender were included in the analysis, the relationship between chronological age and depressive symptoms was reversed (p< 0.01). A study by Schoevers and colleagues [14] had similar results […] these findings suggest that higher incidence of depression observed among the oldest-old may be explained by other relevant factors. By contrast, the association of female gender with increased risk of late-life depression has been observed to be a highly consistent finding.”

In an examination of marital bereavement, Turvey et al. [16] analyzed data from 5,449 participants aged 70 years or older […] recently bereaved participants had nearly nine times the odds of developing syndromal depression as married participants (OR = 8.8, 95 % CI = 5.1–14.9, p<0.0001), and they also had significantly higher risk of depressive symptoms 2 years after the spousal loss. […] Caregiving burden is well-recognized as a predisposing factor for depression among older adults [18]. Many older persons are coping with physically and emotionally challenging caregiving roles (e.g., caring for a spouse/partner with a serious illness or with cognitive or physical decline). Additionally, many caregivers experience elements of grief, as they mourn the loss of relationship with or the decline of valued attributes of their care recipients. […] Concepts of social isolation have also been examined with regard to late-life depression risk. For example, among 892 participants aged 65 years […], Gureje et al. [13] found that women with a poor social network and rural residential status were more likely to develop major depressive disorder […] Harlow and colleagues [21] assessed the association between social network and depressive symptoms in a study involving both married and recently widowed women between the ages of 65 and 75 years; they found that number of friends at baseline had an inverse association with CES-D (Centers for Epidemiologic Studies Depression Scale) score after 1 month (p< 0.05) and 12 months (p= 0.06) of follow-up. In a study that explicitly addressed the concept of loneliness, Jaremka et al. [22] related this factor to late-life depression; importantly, loneliness has been validated as a distinct construct, distinguishable from depression among older adults. Among 229 participants (mean age = 70 years) in a cohort of older adults caring for a spouse with dementia, loneliness (as measured by the NYU scale) significantly predicted incident depression (p<0.001). Finally, social support has been identified as important to late-life depression risk. For example, Cui and colleagues [23] found that low perceived social support significantly predicted worsening depression status over a 2-year period among 392 primary care patients aged 65 years and above.”

“Saunders and colleagues [26] reported […] findings with alcohol drinking behavior as the predictor. Among 701 community-dwelling adults aged 65 years and above, the authors found a significant association between prior heavy alcohol consumption and late-life depression among men: compared to those who were not heavy drinkers, men with a history of heavy drinking had a nearly fourfold higher odds of being diagnosed with depression (OR = 3.7, 95 % CI = 1.3–10.4, p< 0.05). […] Almeida et al. found that obese men were more likely than non-obese (body mass index [BMI] < 30) men to develop depression (HR = 1.31, 95 % CI = 1.05–1.64). Consistent with these results, presence of the metabolic syndrome was also found to increase risk of incident depression (HR = 2.37, 95 % CI = 1.60–3.51). Finally, leisure-time activities are also important to study with regard to late-life depression risk, as these too are readily modifiable behaviors. For example, Magnil et al. [30] examined such activities among a sample of 302 primary care patients aged 60 years. The authors observed that those who lacked leisure activities had an increased risk of developing depressive symptoms over the 2-year study period (OR = 12, 95 % CI = 1.1–136, p= 0.041). […] an important future direction in addressing social and behavioral risk factors in late-life depression is to make more progress in trials that aim to alter those risk factors that are actually modifiable.”

February 17, 2018 Posted by | Books, Epidemiology, Health Economics, Medicine, Psychiatry, Psychology, Statistics | Leave a comment