Econstudentlog

Big Data (II)

Below I have added a few observations from the last half of the book, as well as some coverage-related links to topics of interest.

“With big data, using correlation creates […] problems. If we consider a massive dataset, algorithms can be written that, when applied, return a large number of spurious correlations that are totally independent of the views, opinions, or hypotheses of any human being. Problems arise with false correlations — for example, divorce rate and margarine consumption […]. [W]hen the number of variables becomes large, the number of spurious correlations also increases. This is one of the main problems associated with trying to extract useful information from big data, because in doing so, as with mining big data, we are usually looking for patterns and correlations. […] one of the reasons Google Flu Trends failed in its predictions was because of these problems. […] The Google Flu Trends project hinged on the known result that there is a high correlation between the number of flu-related online searches and visits to the doctor’s surgery. If a lot of people in a particular area are searching for flu-related information online, it might then be possible to predict the spread of flu cases to adjoining areas. Since the interest is in finding trends, the data can be anonymized and hence no consent from individuals is required. Using their five-year accumulation of data, which they limited to the same time-frame as the CDC data, and so collected only during the flu season, Google counted the weekly occurrence of each of the fifty million most common search queries covering all subjects. These search query counts were then compared with the CDC flu data, and those with the highest correlation were used in the flu trends model. […] The historical data provided a baseline from which to assess current flu activity on the chosen search terms and by comparing the new real-time data against this, a classification on a scale from 1 to 5, where 5 signified the most severe, was established. Used in the 2011–12 and 2012–13 US flu seasons, Google’s big data algorithm famously failed to deliver. After the flu season ended, its predictions were checked against the CDC’s actual data. […] the Google Flu Trends algorithm over-predicted the number of flu cases by at least 50 per cent during the years it was used.” [For more details on why blind/mindless hypothesis testing/p-value hunting on big data sets is usually a terrible idea, see e.g. Burnham & Anderson, US]
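
The multiple-comparisons point is easy to illustrate with a quick simulation. Here is a minimal sketch in Python using numpy; the sample size, the variable counts, and the |r| > 0.2 threshold are arbitrary choices of mine, not anything from the book. Even when every variable is pure, independent noise, the number of 'significant-looking' pairwise correlations grows roughly with the square of the number of variables.

import numpy as np

rng = np.random.default_rng(0)
n_obs = 100                                    # observations per variable

for n_vars in (10, 50, 200):
    # every column is independent noise, so any correlation found here is spurious
    data = rng.normal(size=(n_obs, n_vars))
    corr = np.corrcoef(data, rowvar=False)
    iu = np.triu_indices(n_vars, k=1)          # each pair counted once
    strong = np.abs(corr[iu]) > 0.2            # arbitrary 'interesting' cut-off
    print(n_vars, "noise variables:", int(strong.sum()), "of", len(iu[0]),
          "pairs exceed |r| > 0.2")

With 100 observations, |r| > 0.2 is exceeded by chance in roughly 5 per cent of pairs, so 200 noise variables already yield on the order of a thousand spurious 'findings'.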

“The data Google used [in the Google Flu Trends algorithm], collected selectively from search engine queries, produced results [with] obvious bias […] for example by eliminating everyone who does not use a computer and everyone using other search engines. Another issue that may have led to poor results was that customers searching Google on ‘flu symptoms’ would probably have explored a number of flu-related websites, resulting in their being counted several times and thus inflating the numbers. In addition, search behaviour changes over time, especially during an epidemic, and this should be taken into account by updating the model regularly. Once errors in prediction start to occur, they tend to cascade, which is what happened with the Google Flu Trends predictions: one week’s errors were passed along to the next week. […] [Similarly,] the Ebola prediction figures published by WHO [during the West African Ebola virus epidemic] were over 50 per cent higher than the cases actually recorded. The problems with both the Google Flu Trends and Ebola analyses were similar in that the prediction algorithms used were based only on initial data and did not take into account changing conditions. Essentially, each of these models assumed that the number of cases would continue to grow at the same rate in the future as they had before the medical intervention began. Clearly, medical and public health measures could be expected to have positive effects and these had not been integrated into the model.”
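
A toy calculation (the numbers are invented for illustration and are not taken from the Google or WHO models) shows how quickly a constant-growth-rate extrapolation overshoots once an intervention bends the epidemic curve, and how the error compounds from week to week:

# toy numbers: cases grow 30% a week, then interventions slow growth to 5% a week,
# but the forecasting model keeps extrapolating the original 30% rate
growth = [1.30] * 6 + [1.05] * 6
actual, predicted = 100.0, 100.0
for week, g in enumerate(growth):
    actual *= g
    predicted *= 1.30
    if week >= 6:
        print(f"week {week}: prediction {100 * (predicted / actual - 1):.0f}% above actual")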

“Every time a patient visits a doctor’s office or hospital, electronic data is routinely collected. Electronic health records constitute legal documentation of a patient’s healthcare contacts: details such as patient history, medications prescribed, and test results are recorded. Electronic health records may also include sensor data such as Magnetic Resonance Imaging (MRI) scans. The data may be anonymized and pooled for research purposes. It is estimated that in 2015, an average hospital in the USA will store over 600 Tb of data, most of which is unstructured. […] Typically, the human genome contains about 20,000 genes and mapping such a genome requires about 100 Gb of data. […] The interdisciplinary field of bioinformatics has flourished as a consequence of the need to manage and analyze the big data generated by genomics. […] Cloud-based systems give authorized users access to data anywhere in the world. To take just one example, the NHS plans to make patient records available via smartphone by 2018. These developments will inevitably generate more attacks on the data they employ, and considerable effort will need to be expended in the development of effective security methods to ensure the safety of that data. […] There is no absolute certainty on the Web. Since e-documents can be modified and updated without the author’s knowledge, they can easily be manipulated. This situation could be extremely damaging in many different situations, such as the possibility of someone tampering with electronic medical records. […] [S]ome of the problems facing big data systems [include] ensuring they actually work as intended, [that they] can be fixed when they break down, and [that they] are tamper-proof and accessible only to those with the correct authorization.”

“With transactions being made through sales and auction bids, eBay generates approximately 50 Tb of data a day, collected from every search, sale, and bid made on their website by a claimed 160 million active users in 190 countries. […] Amazon collects vast amounts of data including addresses, payment information, and details of everything an individual has ever looked at or bought from them. Amazon uses its data in order to encourage the customer to spend more money with them by trying to do as much of the customer’s market research as possible. In the case of books, for example, Amazon needs to provide not only a huge selection but to focus recommendations on the individual customer. […] Many customers use smartphones with GPS capability, allowing Amazon to collect data showing time and location. This substantial amount of data is used to construct customer profiles allowing similar individuals and their recommendations to be matched. Since 2013, Amazon has been selling customer metadata to advertisers in order to promote their Web services operation […] Netflix collects and uses huge amounts of data to improve customer service, such as offering recommendations to individual customers while endeavouring to provide reliable streaming of its movies. Recommendation is at the heart of the Netflix business model and most of its business is driven by the data-based recommendations it is able to offer customers. Netflix now tracks what you watch, what you browse, what you search for, and the day and time you do all these things. It also records whether you are using an iPad, TV, or something else. […] As well as collecting search data and star ratings, Netflix can now keep records on how often users pause or fast forward, and whether or not they finish watching each programme they start. They also monitor how, when, and where they watched the programme, and a host of other variables too numerous to mention.”
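
The 'match similar individuals' idea behind these recommendation engines can be sketched in a few lines of user-based collaborative filtering. The ratings matrix, the similarity measure, and the item choices below are entirely made up for illustration; they say nothing about how Amazon or Netflix actually build their far more elaborate systems.

import numpy as np

# rows = users, columns = items; 0 means 'not rated' (all numbers invented)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 5, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)                  # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] /
                 (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

target = 0                                    # recommend something for user 0
sims = [cosine(ratings[target], ratings[u]) for u in range(len(ratings))]
neighbour = int(np.argsort(sims)[-2])         # most similar other user
unseen = np.where(ratings[target] == 0)[0]    # items user 0 has not rated
best = unseen[np.argmax(ratings[neighbour][unseen])]
print(f"user {target}: most similar user is {neighbour}; recommend item {best}")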

“Data science is becoming a popular study option in universities but graduates so far have been unable to meet the demands of commerce and industry, where positions in data science offer high salaries to experienced applicants. Big data for commercial enterprises is concerned with profit, and disillusionment will set in quickly if an over-burdened data analyst with insufficient experience fails to deliver the expected positive results. All too often, firms are asking for a one-size-fits-all model of data scientist who is expected to be competent in everything from statistical analysis to data storage and data security.”

“In December 2016, Yahoo! announced that a data breach involving over one billion user accounts had occurred in August 2013. Dubbed the biggest ever cyber theft of personal data, or at least the biggest ever divulged by any company, thieves apparently used forged cookies, which allowed them access to accounts without the need for passwords. This followed the disclosure of an attack on Yahoo! in 2014, when 500 million accounts were compromised. […] The list of big data security breaches increases almost daily. Data theft, data ransom, and data sabotage are major concerns in a data-centric world. There have been many scares regarding the security and ownership of personal digital data. Before the digital age we used to keep photos in albums and negatives were our backup. After that, we stored our photos electronically on a hard-drive in our computer. This could possibly fail and we were wise to have back-ups but at least the files were not publicly accessible. Many of us now store data in the Cloud. […] If you store all your photos in the Cloud, it’s highly unlikely with today’s sophisticated systems that you would lose them. On the other hand, if you want to delete something, maybe a photo or video, it becomes difficult to ensure all copies have been deleted. Essentially you have to rely on your provider to do this. Another important issue is controlling who has access to the photos and other data you have uploaded to the Cloud. […] although the Internet and Cloud-based computing are generally thought of as wireless, they are anything but; data is transmitted through fibre-optic cables laid under the oceans. Nearly all digital communication between continents is transmitted in this way. My email will be sent via transatlantic fibre-optic cables, even if I am using a Cloud computing service. The Cloud, an attractive buzz word, conjures up images of satellites sending data across the world, but in reality Cloud services are firmly rooted in a distributed network of data centres providing Internet access, largely through cables. Fibre-optic cables provide the fastest means of data transmission and so are generally preferable to satellites.”

Links:

Health care informatics.
Electronic health records.
European influenza surveillance network.
Overfitting.
Public Health Emergency of International Concern.
Virtual Physiological Human project.
Watson (computer).
Natural language processing.
Anthem medical data breach.
Electronic delay storage automatic calculator (EDSAC). LEO (computer). ICL (International Computers Limited).
E-commerce. Online shopping.
Pay-per-click advertising model. Google AdWords. Click fraud. Targeted advertising.
Recommender system. Collaborative filtering.
Anticipatory shipping.
BlackPOS Malware.
Data Encryption Standard algorithm. EFF DES cracker.
Advanced Encryption Standard.
Tempora. PRISM (surveillance program). Edward Snowden. WikiLeaks. Tor (anonymity network). Silk Road (marketplace). Deep web. Internet of Things.
Songdo International Business District. Smart City.
United Nations Global Pulse.

July 19, 2018 Posted by | Books, Computer science, Cryptography, Data, Engineering, Epidemiology, Statistics

Developmental Biology (II)

Below I have included some quotes from the middle chapters of the book and some links related to the topic coverage. As I pointed out earlier, this is an excellent book on these topics.

“Germ cells have three key functions: the preservation of the genetic integrity of the germline; the generation of genetic diversity; and the transmission of genetic information to the next generation. In all but the simplest animals, the cells of the germline are the only cells that can give rise to a new organism. So, unlike body cells, which eventually all die, germ cells in a sense outlive the bodies that produced them. They are, therefore, very special cells […] In order that the number of chromosomes is kept constant from generation to generation, germ cells are produced by a specialized type of cell division, called meiosis, which halves the chromosome number. Unless this reduction by meiosis occurred, the number of chromosomes would double each time the egg was fertilized. Germ cells thus contain a single copy of each chromosome and are called haploid, whereas germ-cell precursor cells and the other somatic cells of the body contain two copies and are called diploid. The halving of chromosome number at meiosis means that when egg and sperm come together at fertilization, the diploid number of chromosomes is restored. […] An important property of germ cells is that they remain pluripotent—able to give rise to all the different types of cells in the body. Nevertheless, eggs and sperm in mammals have certain genes differentially switched off during germ-cell development by a process known as genomic imprinting […] Certain genes in eggs and sperm are imprinted, so that the activity of the same gene is different depending on whether it is of maternal or paternal origin. Improper imprinting can lead to developmental abnormalities in humans. At least 80 imprinted genes have been identified in mammals, and some are involved in growth control. […] A number of developmental disorders in humans are associated with imprinted genes. Infants with Prader-Willi syndrome fail to thrive and later can become extremely obese; they also show mental retardation and mental disturbances […] Angelman syndrome results in severe motor and mental retardation. Beckwith-Wiedemann syndrome is due to a generalized disruption of imprinting on a region of chromosome 7 and leads to excessive foetal overgrowth and an increased predisposition to cancer.”

“Sperm are motile cells, typically designed for activating the egg and delivering their nucleus into the egg cytoplasm. They essentially consist of a nucleus, mitochondria to provide an energy source, and a flagellum for movement. The sperm contributes virtually nothing to the organism other than its chromosomes. In mammals, sperm mitochondria are destroyed following fertilization, and so all mitochondria in the animal are of maternal origin. […] Different organisms have different ways of ensuring fertilization by only one sperm. […] Early development is similar in both male and female mammalian embryos, with sexual differences only appearing at later stages. The development of the individual as either male or female is genetically fixed at fertilization by the chromosomal content of the egg and sperm that fuse to form the fertilized egg. […] Each sperm carries either an X or Y chromosome, while the egg has an X. The genetic sex of a mammal is thus established at the moment of conception, when the sperm introduces either an X or a Y chromosome into the egg. […] In the absence of a Y chromosome, the default development of tissues is along the female pathway. […] Unlike animals, plants do not set aside germ cells in the embryo and germ cells are only specified when a flower develops. Any meristem cell can, in principle, give rise to a germ cell of either sex, and there are no sex chromosomes. The great majority of flowering plants give rise to flowers that contain both male and female sexual organs, in which meiosis occurs. The male sexual organs are the stamens; these produce pollen, which contains the male gamete nuclei corresponding to the sperm of animals. At the centre of the flower are the female sex organs, which consist of an ovary of two carpels, which contain the ovules. Each ovule contains an egg cell.”

“The character of specialized cells such as nerve, muscle, or skin is the result of a particular pattern of gene activity that determines which proteins are synthesized. There are more than 200 clearly recognizable differentiated cell types in mammals. How these particular patterns of gene activity develop is a central question in cell differentiation. Gene expression is under a complex set of controls that include the actions of transcription factors, and chemical modification of DNA. External signals play a key role in differentiation by triggering intracellular signalling pathways that affect gene expression. […] the central feature of cell differentiation is a change in gene expression, which brings about a change in the proteins in the cells. The genes expressed in a differentiated cell include not only those for a wide range of ‘housekeeping’ proteins, such as the enzymes involved in energy metabolism, but also genes encoding cell-specific proteins that characterize a fully differentiated cell: hemoglobin in red blood cells, keratin in skin epidermal cells, and muscle-specific actin and myosin protein filaments in muscle. […] several thousand different genes are active in any given cell in the embryo at any one time, though only a small number of these may be involved in specifying cell fate or differentiation. […] Cell differentiation is known to be controlled by a wide range of external signals but it is important to remember that, while these external signals are often referred to as being ‘instructive’, they are ‘selective’, in the sense that the number of developmental options open to a cell at any given time is limited. These options are set by the cell’s internal state which, in turn, reflects its developmental history. External signals cannot, for example, convert an endodermal cell into a muscle or nerve cell. Most of the molecules that act as developmentally important signals between cells during development are proteins or peptides, and their effect is usually to induce a change in gene expression. […] The same external signals can be used again and again with different effects because the cells’ histories are different. […] At least 1,000 different transcription factors are encoded in the genomes of the fly and the nematode, and as many as 3,000 in the human genome. On average, around five different transcription factors act together at a control region […] In general, it can be assumed that activation of each gene involves a unique combination of transcription factors.”

“Stem cells involve some special features in relation to differentiation. A single stem cell can divide to produce two daughter cells, one of which remains a stem cell while the other gives rise to a lineage of differentiating cells. This occurs in our skin and gut all the time and also in the production of blood cells. It also occurs in the embryo. […] Embryonic stem (ES) cells from the inner cell mass of the early mammalian embryo when the primitive streak forms, can, in culture, differentiate into a wide variety of cell types, and have potential uses in regenerative medicine. […] it is now possible to make adult body cells into stem cells, which has important implications for regenerative medicine. […] The goal of regenerative medicine is to restore the structure and function of damaged or diseased tissues. As stem cells can proliferate and differentiate into a wide range of cell types, they are strong candidates for use in cell-replacement therapy, the restoration of tissue function by the introduction of new healthy cells. […] The generation of insulin-producing pancreatic β cells from ES cells to replace those destroyed in type 1 diabetes is a prime medical target. Treatments that direct the differentiation of ES cells towards making endoderm derivatives such as pancreatic cells have been particularly difficult to find. […] The neurodegenerative Parkinson disease is another medical target. […] To generate […] stem cells of the patient’s own tissue type would be a great advantage, and the recent development of induced pluripotent stem cells (iPS cells) offers […] exciting new opportunities. […] There is [however] risk of tumour induction in patients undergoing cell-replacement therapy with ES cells or iPS cells; undifferentiated pluripotent cells introduced into the patient could cause tumours. Only stringent selection procedures that ensure no undifferentiated cells are present in the transplanted cell population will overcome this problem. And it is not yet clear how stable differentiated ES cells and iPS cells will be in the long term.”

“In general, the success rate of cloning by body-cell nuclear transfer in mammals is low, and the reasons for this are not yet well understood. […] Most cloned mammals derived from nuclear transplantation are usually abnormal in some way. The cause of failure is incomplete reprogramming of the donor nucleus to remove all the earlier modifications. A related cause of abnormality may be that the reprogrammed genes have not gone through the normal imprinting process that occurs during germ-cell development, where different genes are silenced in the male and female parents. The abnormalities in adults that do develop from cloned embryos include early death, limb deformities and hypertension in cattle, and immune impairment in mice. All these defects are thought to be due to abnormalities of gene expression that arise from the cloning process. Studies have shown that some 5% of the genes in cloned mice are not correctly expressed and that almost half of the imprinted genes are incorrectly expressed.”

“Organ development involves large numbers of genes and, because of this complexity, general principles can be quite difficult to distinguish. Nevertheless, many of the mechanisms used in organogenesis are similar to those of earlier development, and certain signals are used again and again. Pattern formation in development in a variety of organs can be specified by position information, which is specified by a gradient in some property. […] Not surprisingly, the vascular system, including blood vessels and blood cells, is among the first organ systems to develop in vertebrate embryos, so that oxygen and nutrients can be delivered to the rapidly developing tissues. The defining cell type of the vascular system is the endothelial cell, which forms the lining of the entire circulatory system, including the heart, veins, and arteries. Blood vessels are formed by endothelial cells and these vessels are then covered by connective tissue and smooth muscle cells. Arteries and veins are defined by the direction of blood flow as well as by structural and functional differences; the cells are specified as arterial or venous before they form blood vessels but they can switch identity. […] Differentiation of the vascular cells requires the growth factor VEGF (vascular endothelial growth factor) and its receptors, and VEGF stimulates their proliferation. Expression of the Vegf gene is induced by lack of oxygen and thus an active organ using up oxygen promotes its own vascularization. New blood capillaries are formed by sprouting from pre-existing blood vessels and proliferation of cells at the tip of the sprout. […] During their development, blood vessels navigate along specific paths towards their targets […]. Many solid tumours produce VEGF and other growth factors that stimulate vascular development and so promote the tumour’s growth, and blocking new vessel formation is thus a means of reducing tumour growth. […] In humans, about 1 in 100 live-born infants has some congenital heart malformation, while in utero, heart malformation leading to death of the embryo occurs in between 5 and 10% of conceptions.”

“Separation of the digits […] is due to the programmed cell death of the cells between these digits’ cartilaginous elements. The webbed feet of ducks and other waterfowl are simply the result of less cell death between the digits. […] the death of cells between the digits is essential for separating the digits. The development of the vertebrate nervous system also involves the death of large numbers of neurons.”

Links:

Budding.
Gonad.
Down Syndrome.
Fertilization. In vitro fertilisation. Preimplantation genetic diagnosis.
SRY gene.
X-inactivation. Dosage compensation.
Cellular differentiation.
MyoD.
Signal transduction. Enhancer (genetics).
Epigenetics.
Hematopoiesis. Hematopoietic stem cell transplantation. Hemoglobin. Sickle cell anemia.
Skin. Dermis. Fibroblast. Epidermis.
Skeletal muscle. Myogenesis. Myoblast.
Cloning. Dolly.
Organogenesis.
Limb development. Limb bud. Progress zone model. Apical ectodermal ridge. Polarizing region/Zone of polarizing activity. Sonic hedgehog.
Imaginal disc. Pax6. Aniridia. Neural tube.
Branching morphogenesis.
Pistil.
ABC model of flower development.

July 16, 2018 Posted by | Biology, Books, Botany, Cancer/oncology, Diabetes, Genetics, Medicine, Molecular biology, Ophthalmology

Big Data (I?)

Below are a few observations from the first half of the book, as well as some links related to the topic coverage.

“The data we derive from the Web can be classified as structured, unstructured, or semi-structured. […] Carefully structured and tabulated data is relatively easy to manage and is amenable to statistical analysis, indeed until recently statistical analysis methods could be applied only to structured data. In contrast, unstructured data is not so easily categorized, and includes photos, videos, tweets, and word-processing documents. Once the use of the World Wide Web became widespread, it transpired that many such potential sources of information remained inaccessible because they lacked the structure needed for existing analytical techniques to be applied. However, by identifying key features, data that appears at first sight to be unstructured may not be completely without structure. Emails, for example, contain structured metadata in the heading as well as the actual unstructured message […] and so may be classified as semi-structured data. Metadata tags, which are essentially descriptive references, can be used to add some structure to unstructured data. […] Dealing with unstructured data is challenging: since it cannot be stored in traditional databases or spreadsheets, special tools have had to be developed to extract useful information. […] Approximately 80 per cent of the world’s data is unstructured in the form of text, photos, and images, and so is not amenable to the traditional methods of structured data analysis. ‘Big data’ is now used to refer not just to the total amount of data generated and stored electronically, but also to specific datasets that are large in both size and complexity, with which new algorithmic techniques are required in order to extract useful information from them.”
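
The email example can be made concrete: the header is machine-readable metadata that standard tooling parses into named fields, while the body remains free text. A minimal sketch using Python's standard-library email module (the message itself is invented):

from email import message_from_string

raw = """\
From: alice@example.com
To: bob@example.com
Subject: Meeting notes
Date: Mon, 16 Jul 2018 09:30:00 +0000

Hi Bob, here are the notes from this morning's meeting...
"""

msg = message_from_string(raw)
# the structured, queryable part: named header fields (metadata)
print({key: msg[key] for key in ("From", "To", "Subject", "Date")})
# the unstructured part: free text, which needs different analytical tools
print(msg.get_payload())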

“In the digital age we are no longer entirely dependent on samples, since we can often collect all the data we need on entire populations. But the size of these increasingly large sets of data cannot alone provide a definition for the term ‘big data’ — we must include complexity in any definition. Instead of carefully constructed samples of ‘small data’ we are now dealing with huge amounts of data that has not been collected with any specific questions in mind and is often unstructured. In order to characterize the key features that make data big and move towards a definition of the term, Doug Laney, writing in 2001, proposed using the three ‘v’s: volume, variety, and velocity. […] ‘Volume’ refers to the amount of electronic data that is now collected and stored, which is growing at an ever-increasing rate. Big data is big, but how big? […] Generally, we can say the volume criterion is met if the dataset is such that we cannot collect, store, and analyse it using traditional computing and statistical methods. […] Although a great variety of data [exists], ultimately it can all be classified as structured, unstructured, or semi-structured. […] Velocity is necessarily connected with volume: the faster the data is generated, the more there is. […] Velocity also refers to the speed at which data is electronically processed. For example, sensor data, such as that generated by an autonomous car, is necessarily generated in real time. If the car is to work reliably, the data […] must be analysed very quickly […] Variability may be considered as an additional dimension of the velocity concept, referring to the changing rates in flow of data […] computer systems are more prone to failure [during peak flow periods]. […] As well as the original three ‘v’s suggested by Laney, we may add ‘veracity’ as a fourth. Veracity refers to the quality of the data being collected. […] Taken together, the four main characteristics of big data – volume, variety, velocity, and veracity – present a considerable challenge in data management.” [As regular readers of this blog might be aware, not everybody would agree with the author here about the inclusion of veracity as a defining feature of big data – “Many have suggested that there are more V’s that are important to the big data problem [than volume, variety & velocity] such as veracity and value (IEEE BigData 2013). Veracity refers to the trustworthiness of the data, and value refers to the value that the data adds to creating knowledge about a topic or situation. While we agree that these are important data characteristics, we do not see these as key features that distinguish big data from regular data. It is important to evaluate the veracity and value of all data, both big and small. (Knoth & Schmid)]

“Anyone who uses a personal computer, laptop, or smartphone accesses data stored in a database. Structured data, such as bank statements and electronic address books, are stored in a relational database. In order to manage all this structured data, a relational database management system (RDBMS) is used to create, maintain, access, and manipulate the data. […] Once […] the database [has been] constructed we can populate it with data and interrogate it using structured query language (SQL). […] An important aspect of relational database design involves a process called normalization which includes reducing data duplication to a minimum and hence reduces storage requirements. This allows speedier queries, but even so as the volume of data increases the performance of these traditional databases decreases. The problem is one of scalability. Since relational databases are essentially designed to run on just one server, as more and more data is added they become slow and unreliable. The only way to achieve scalability is to add more computing power, which has its limits. This is known as vertical scalability. So although structured data is usually stored and managed in an RDBMS, when the data is big, say in terabytes or petabytes and beyond, the RDBMS no longer works efficiently, even for structured data. An important feature of relational databases and a good reason for continuing to use them is that they conform to the following group of properties: atomicity, consistency, isolation, and durability, usually known as ACID. Atomicity ensures that incomplete transactions cannot update the database; consistency excludes invalid data; isolation ensures one transaction does not interfere with another transaction; and durability means that the database must update before the next transaction is carried out. All these are desirable properties but storing and accessing big data, which is mostly unstructured, requires a different approach. […] given the current data explosion there has been intensive research into new storage and management techniques. In order to store these massive datasets, data is distributed across servers. As the number of servers involved increases, the chance of failure at some point also increases, so it is important to have multiple, reliably identical copies of the same data, each stored on a different server. Indeed, with the massive amounts of data now being processed, systems failure is taken as inevitable and so ways of coping with this are built into the methods of storage.”
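
As a toy illustration of the relational ideas mentioned above (tables linked by keys, SQL queries, and atomic transactions), here is a small example using Python's built-in sqlite3 module. The schema and data are invented, and a single-file SQLite database says nothing about scalability; it is only meant to make the concepts concrete.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# normalized design: customer details stored once, referenced by orders via a key
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                        customer_id INTEGER REFERENCES customers(id),
                        item TEXT, amount REAL);
""")
cur.execute("INSERT INTO customers VALUES (1, 'Ann', 'Leeds')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
                [(1, 1, 'book', 12.99), (2, 1, 'lamp', 30.00)])
conn.commit()

# a structured query (SQL) joining the two tables
for row in cur.execute("""SELECT c.name, SUM(o.amount)
                          FROM customers c JOIN orders o ON o.customer_id = c.id
                          GROUP BY c.name"""):
    print(row)

# atomicity: either the whole transaction commits or none of it does
try:
    with conn:                       # the context manager wraps a transaction
        conn.execute("INSERT INTO orders VALUES (3, 1, 'chair', 45.00)")
        raise RuntimeError("simulated failure before commit")
except RuntimeError:
    pass
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone())  # still (2,)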

“A distributed file system (DFS) provides effective and reliable storage for big data across many computers. […] Hadoop DFS [is] one of the most popular DFS […] When we use Hadoop DFS, the data is distributed across many nodes, often tens of thousands of them, physically situated in data centres around the world. […] The NameNode deals with all requests coming in from a client computer; it distributes storage space, and keeps track of storage availability and data location. It also manages all the basic file operations (e.g. opening and closing files) and controls data access by client computers. The DataNodes are responsible for actually storing the data and in order to do so, create, delete, and replicate blocks as necessary. Data replication is an essential feature of the Hadoop DFS. […] It is important that several copies of each block are stored so that if a DataNode fails, other nodes are able to take over and continue with processing tasks without loss of data. […] Data is written to a DataNode only once but will be read by an application many times. […] One of the functions of the NameNode is to determine the best DataNode to use given the current usage, ensuring fast data access and processing. The client computer then accesses the data block from the chosen node. DataNodes are added as and when required by the increased storage requirements, a feature known as horizontal scalability. One of the main advantages of Hadoop DFS over a relational database is that you can collect vast amounts of data, keep adding to it, and, at that time, not yet have any clear idea of what you want to use it for. […] structured data with identifiable rows and columns can be easily stored in a RDBMS while unstructured data can be stored cheaply and readily using a DFS.”
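
The division of labour between a NameNode (metadata: which nodes hold which blocks) and DataNodes (the blocks themselves), and the way replication protects against node failure, can be mimicked with a toy in-memory model. This is only a sketch of the idea; the class and method names are mine and bear no relation to Hadoop's actual API or internals.

import random

class ToyDFS:
    """Toy 'NameNode': tracks which 'DataNodes' hold a copy of each block."""
    def __init__(self, n_nodes, replication=3):
        self.nodes = {i: {} for i in range(n_nodes)}   # node id -> {block id: data}
        self.replication = replication
        self.locations = {}                            # block id -> set of node ids

    def write(self, block_id, data):
        chosen = random.sample(sorted(self.nodes), self.replication)
        for n in chosen:
            self.nodes[n][block_id] = data             # DataNodes store the bytes
        self.locations[block_id] = set(chosen)         # NameNode stores the metadata

    def fail(self, node_id):
        self.nodes[node_id].clear()                    # a DataNode goes down
        for nodes in self.locations.values():
            nodes.discard(node_id)

    def read(self, block_id):
        for n in self.locations[block_id]:             # any surviving replica will do
            return self.nodes[n][block_id]
        raise IOError("all replicas lost")

dfs = ToyDFS(n_nodes=10)
dfs.write("block-1", b"some data")
dfs.fail(next(iter(dfs.locations["block-1"])))         # lose one replica
print(dfs.read("block-1"))                             # still readable from another node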

“NoSQL is the generic name used to refer to non-relational databases and stands for Not only SQL. […] The non-relational model has some features that are necessary in the management of big data, namely scalability, availability, and performance. With a relational database you cannot keep scaling vertically without loss of function, whereas with NoSQL you scale horizontally and this enables performance to be maintained. […] Within the context of a distributed database system, consistency refers to the requirement that all copies of data should be the same across nodes. […] Availability requires that if a node fails, other nodes still function […] Data, and hence DataNodes, are distributed across physically separate servers and communication between these machines will sometimes fail. When this occurs it is called a network partition. Partition tolerance requires that the system continues to operate even if this happens. In essence, what the CAP [Consistency, Availability, Partition Tolerance] Theorem states is that for any distributed computer system, where the data is shared, only two of these three criteria can be met. There are therefore three possibilities; the system must be: consistent and available, consistent and partition tolerant, or partition tolerant and available. Notice that since in a RDMS the network is not partitioned, only consistency and availability would be of concern and the RDMS model meets both of these criteria. In NoSQL, since we necessarily have partitioning, we have to choose between consistency and availability. By sacrificing availability, we are able to wait until consistency is achieved. If we choose instead to sacrifice consistency it follows that sometimes the data will differ from server to server. The somewhat contrived acronym BASE (Basically Available, Soft, and Eventually consistent) is used as a convenient way of describing this situation. BASE appears to have been chosen in contrast to the ACID properties of relational databases. ‘Soft’ in this context refers to the flexibility in the consistency requirement. The aim is not to abandon any one of these criteria but to find a way of optimizing all three, essentially a compromise. […] The name NoSQL derives from the fact that SQL cannot be used to query these databases. […] There are four main types of non-relational or NoSQL database: key-value, column-based, document, and graph – all useful for storing large amounts of structured and semi-structured data. […] Currently, an approach called NewSQL is finding a niche. […] the aim of this latent technology is to solve the scalability problems associated with the relational model, making it more useable for big data.”
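
The availability-versus-consistency trade-off can be made concrete with a toy pair of key-value replicas: if writes keep being accepted while the link between the replicas is down, reads from the other replica return stale data until the replicas resynchronize, which is the 'basically available, eventually consistent' behaviour described above. This is a deliberately crude illustration of the idea, not how any real NoSQL store is implemented.

class Replica:
    def __init__(self):
        self.store = {}                  # a key-value store, the simplest NoSQL model

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

a, b = Replica(), Replica()

def sync(src, dst):
    dst.store.update(src.store)          # copy newer state across when the link is up

a.put("user:42:city", "Leeds")
sync(a, b)                               # normal operation: replicas agree

# network partition: writes to 'a' can no longer reach 'b', but both stay available
a.put("user:42:city", "York")
print(b.get("user:42:city"))             # 'Leeds' -> available but stale (BASE)

sync(a, b)                               # partition heals
print(b.get("user:42:city"))             # 'York'  -> eventually consistent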

“A popular way of dealing with big data is to divide it up into small chunks and then process each of these individually, which is basically what MapReduce does by spreading the required calculations or queries over many, many computers. […] Bloom filters are particularly suited to applications where storage is an issue and where the data can be thought of as a list. The basic idea behind Bloom filters is that we want to build a system, based on a list of data elements, to answer the question ‘Is X in the list?’ With big datasets, searching through the entire set may be too slow to be useful, so we use a Bloom filter which, being a probabilistic method, is not 100 per cent accurate—the algorithm may decide that an element belongs to the list when actually it does not; but it is a fast, reliable, and storage efficient method of extracting useful knowledge from data. Bloom filters have many applications. For example, they can be used to check whether a particular Web address leads to a malicious website. In this case, the Bloom filter would act as a blacklist of known malicious URLs against which it is possible to check, quickly and accurately, whether it is likely that the one you have just clicked on is safe or not. Web addresses newly found to be malicious can be added to the blacklist. […] A related example is that of malicious email messages, which may be spam or may contain phishing attempts. A Bloom filter provides us with a quick way of checking each email address and hence we would be able to issue a timely warning if appropriate. […] they can [also] provide a very useful way of detecting fraudulent credit card transactions.”
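
A workable Bloom filter of the kind described (no false negatives, a tunable false-positive rate) takes only a few lines. In the sketch below the bit-array size and the number of hash functions are arbitrary choices of mine; a production implementation would derive them from the expected number of elements and the acceptable error rate, and would typically use faster non-cryptographic hashes.

import hashlib

class BloomFilter:
    def __init__(self, n_bits=8192, n_hashes=5):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # derive several bit positions from one item by varying a seed
        for seed in range(self.n_hashes):
            h = hashlib.sha256(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # True may be a false positive; False is always correct
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

blacklist = BloomFilter()
for url in ("badsite.example", "malware.example"):
    blacklist.add(url)

print("badsite.example" in blacklist)    # True
print("wikipedia.org" in blacklist)      # almost certainly False

With 8,192 bits and five hash functions, storing around a thousand URLs gives a false-positive rate on the order of a couple of per cent, while the filter itself occupies 1 kB however long the blacklisted addresses are.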

Links:

Data.
Punched card.
Clickstream log.
HTTP cookie.
Australian Square Kilometre Array Pathfinder.
The Millionaire Calculator.
Data mining.
Supervised machine learning.
Unsupervised machine learning.
Statistical classification.
Cluster analysis.
Moore’s Law.
Cloud storage. Cloud computing.
Data compression. Lossless data compression. Lossy data compression.
ASCII. Huffman algorithm. Variable-length encoding.
Data compression ratio.
Grayscale.
Discrete cosine transform.
JPEG.
Bit array. Hash function.
PageRank algorithm.
Common crawl.

July 14, 2018 Posted by | Books, Computer science, Data, Statistics

American Naval History (II)

I have added some observations and links related to the second half of the book’s coverage below.

“The revival of the U.S. Navy in the last two decades of the nineteenth century resulted from a variety of circumstances. The most immediate was the simple fact that the several dozen ships retained from the Civil War were getting so old that they had become antiques. […] In 1883 therefore Congress authorized the construction of three new cruisers and one dispatch vessel, its first important naval appropriation since Appomattox. […] By 1896 […] five […] new battleships had been completed and launched, and a sixth (the Iowa) joined them a year later. None of these ships had been built to meet a perceived crisis or a national emergency. Instead the United States had finally embraced the navalist argument that a mature nation-state required a naval force of the first rank. Soon enough circumstances would offer an opportunity to test both the ships and the theory. […] the United States declared war against Spain on April 25, 1898. […] Active hostilities lasted barely six months and were punctuated by two entirely one-sided naval engagements […] With the peace treaty signed in Paris in December 1898, Spain granted Cuba its independence, though the United States assumed significant authority on the island and in 1903 negotiated a lease that gave the U.S. Navy control of Guantánamo Bay on Cuba’s south coast. Spain also ceded the Philippines, Puerto Rico, Guam, and Wake Island to the United States, which paid Spain $20 million for them. Separately but simultaneously the annexation of the Kingdom of Hawaii, along with the previous annexation of Midway, gave the United States a series of Pacific Ocean stepping stones, each a potential refueling stop, that led from Hawaii to Midway, to Wake, to Guam, and to the Philippines. It made the United States not merely a continental power but a global power. […] between 1906 and 1908, no fewer than thirteen new battleships joined the fleet.”

“At root submarine warfare in the twentieth century was simply a more technologically advanced form of commerce raiding. In its objective it resembled both privateering during the American Revolution and the voyages of the CSS Alabama and other raiders during the Civil War. Yet somehow striking unarmed merchant ships from the depths, often without warning, seemed particularly heinous. Just as the use of underwater mines in the Civil War had horrified contemporaries before their use became routine, the employment of submarines against merchant shipping shocked public sentiment in the early months of World War I. […] American submarines accounted for 55 percent of all Japanese ship losses in the Pacific theater of World War II”.

“By late 1942 the first products of the Two-Ocean Navy Act of 1940 began to join the fleet. Whereas in June 1942, the United States had been hard-pressed to assemble three aircraft carriers for the Battle of Midway, a year later twenty-four new Essex-class aircraft carriers joined the fleet, each of them displacing more than 30,000 tons and carrying ninety to one hundred aircraft. Soon afterward nine more Independence-class carriers joined the fleet. […] U.S. shipyards also turned out an unprecedented number of cruisers, destroyers, and destroyer escorts, plus more than 2,700 Liberty Ships—the essential transport and cargo vessels of the war—as well as thousands of specialized landing ships essential to amphibious operations. In 1943 alone American shipyards turned out more than eight hundred of the large LSTs and LCIs, plus more than eight thousand of the smaller landing craft known as Higgins boats […] In the three weeks after D-Day, Allied landing ships and transports put more than 300,000 men, fifty thousand vehicles, and 150,000 tons of supplies ashore on Omaha Beach alone. By the first week of July the Allies had more than a million fully equipped soldiers ashore ready to break out of their enclave in Normandy and Brittany […] Having entered World War II with eleven active battleships and seven aircraft carriers, the U.S. Navy ended the war with 120 battleships and cruisers and nearly one hundred aircraft carriers (including escort carriers). Counting the smaller landing craft, the U.S. Navy listed an astonishing sixty-five thousand vessels on its register of warships and had more than four million men and women in uniform. It was more than twice as large as all the rest of the navies of the world combined. […] In the eighteen months after the end of the war, the navy processed out 3.5 million officers and enlisted personnel who returned to civilian life and their families, going back to work or attending college on the new G.I. Bill. In addition thousands of ships were scrapped or mothballed, assigned to what was designated as the National Defense Reserve Fleet and tied up in long rows at navy yards from California to Virginia. Though the navy boasted only about a thousand ships on active service by the end of 1946, that was still more than twice as many as before the war.”

“The Korean War ended in a stalemate, yet American forces, supported by troops from South Korea and other United Nations members, succeeded in repelling the first cross-border invasion by communist forces during the Cold War. That encouraged American lawmakers to continue support of a robust peacetime navy, and of military forces generally. Whereas U.S. military spending in 1950 had totaled $141 billion, for the rest of the 1950s it averaged over $350 billion per year. […] The overall architecture of American and Soviet rivalry influenced, and even defined, virtually every aspect of American foreign and defense policy in the Cold War years. Even when the issue at hand had little to do with the Soviet Union, every political and military dispute from 1949 onward was likely to be viewed through the prism of how it affected the East-West balance of power. […] For forty years the United States and the U.S. Navy had centered all of its attention on the rivalry with the Soviet Union. All planning for defense budgets, for force structure, and for the design of weapons systems grew out of assessments of the Soviet threat. The dissolution of the Soviet Union therefore compelled navy planners to revisit almost all of their assumptions. It did not erase the need for a global U.S. Navy, for even as the Soviet Union was collapsing, events in the Middle East and elsewhere provoked serial crises that led to the dispatch of U.S. naval combat groups to a variety of hot spots around the world. On the other hand, these new threats were so different from those of the Cold War era that the sophisticated weaponry the United States had developed to deter and, if necessary, defeat the Soviet Union did not necessarily meet the needs of what President George H. W. Bush called “a new world order.”

“The official roster of U.S. Navy warships in 2014 listed 283 “battle force ships” on active service. While that is fewer than at any time since World War I, those ships possess more capability and firepower than the rest of the world’s navies combined. […] For the present, […] as well as for the foreseeable future, the U.S. Navy remains supreme on the oceans of the world.”

Links:

USS Ticonderoga (1862).
Virginius Affair.
ABCD ships.
Stephen Luce. Naval War College.
USS Maine. USS Texas. USS Indiana (BB-1). USS Massachusetts (BB-2). USS Oregon (BB-3). USS Iowa (BB-4).
Benjamin Franklin Tracy.
Alfred Thayer Mahan. The Influence of Sea Power upon History: 1660–1783.
George Dewey.
William T. Sampson.
Great White Fleet.
USS Maine (BB-10). USS Missouri (BB-11). USS New Hampshire (BB-25).
HMS Dreadnought (1906). Dreadnought. Pre-dreadnought battleship.
Hay–Herrán Treaty. United States construction of the Panama canal, 1904–1914.
Bradley A. Fiske.
William S. Benson. Chief of Naval Operations.
RMS Lusitania. Unrestricted submarine warfare.
Battle of Jutland. Naval Act of 1916 (‘Big Navy Act of 1916’).
William Sims.
Sacred Twenty. WAVES.
Washington Naval Treaty. ‘Treaty cruisers’.
Aircraft carrier. USS Lexington (CV-2). USS Saratoga (CV-3).
War Plan Orange.
Carl Vinson. Naval Act of 1938.
Lend-Lease.
Battle of the Coral Sea. Battle of Midway.
Ironbottom Sound.
Battle of the Atlantic. Wolfpack (naval tactic).
Operation Torch.
Pacific Ocean theater of World War II. Battle of Leyte Gulf.
Operation Overlord. Operation Neptune. Alan Goodrich Kirk. Bertram Ramsay.
Battle of Iwo Jima. Battle of Okinawa.
Cold War. Revolt of the Admirals.
USS Nautilus. SSBN. USS George Washington.
Ohio-class submarine.
UGM-27 Polaris. UGM-73 Poseidon. UGM-96 Trident I.
Korean War. Battle of Inchon.
United States Sixth Fleet.
Cuban Missile Crisis.
Vietnam War. USS Maddox. Gulf of Tonkin Resolution. Operation Market Time. Patrol Craft Fast. Patrol Boat, River. Operation Game Warden.
Elmo Zumwalt. ‘Z-grams’.
USS Cole bombing.
Operation Praying Mantis.
Gulf War.
Combined Task Force 150.
United States Navy SEALs.
USS Zumwalt.

July 12, 2018 Posted by | Books, History, Wikipedia

100 Cases in Orthopaedics and Rheumatology (II)

Below I have added some links related to the last half of the book’s coverage, as well as some more observations from the book.

Scaphoid fracture. Watson’s test. Dorsal intercalated segment instability. (“Non-union is not uncommon as a complication after scaphoid fractures because the blood supply to this bone is poor. Smokers have a higher incidence of non-union. Occasionally, the blood supply is poor enough to lead to avascular necrosis. If non-union is not detected, subsequent arthritis in the wrist can develop.”)
Septic arthritis. (“Septic arthritis is an orthopaedic emergency. […] People with septic arthritis are typically unwell with fevers and malaise and the joint pain is severe. […] Any acutely hot or painful joint is septic arthritis until proven otherwise.”)
Rheumatoid arthritis. (“[RA is] the most common of the inflammatory arthropathies. […] early-morning stiffness and pain, combined with soft-tissue rather than bony swelling, are classic patterns for inflammatory disease. Although […] RA affects principally the small joints of the hands (and feet), it may progress to involve any synovial joint and may be complicated by extra-articular features […] family history [of the disease] is not unusual due to the presence of susceptibility genes such as HLA-DR. […] Not all patients with RA have rheumatoid factor (RF), and not all patients with RF have RA; ACPA has greater specificity for RA than rheumatoid factor. […] Medical therapy focuses on disease-modifying anti-rheumatic drugs (DMARDs) such as methotrexate, sulphasalazine, leflunomide and hydroxychloroquine which may be used individually or in combination. […] Disease activity in RA is measured by the disease activity score (DAS), which is a composite score of the clinical evidence of synovitis, the current inflammatory response and the patient’s own assessment of their health. […] Patients who have high disease activity as determined by the DAS and have either failed or failed to tolerate standard disease modifying therapy qualify for biologic therapy – monoclonal antibodies that are directed against key components of the inflammatory response. […] TNF-α blockade is highly effective in up to 70 per cent of patients, reducing both inflammation and the progressive structural damage associated with severe active disease.”)
Ankylosing spondylitis. Ankylosis. Schober’s index. Costochondritis.
Mononeuritis multiplex. (“Mononeuritis multiplex arises due to interruption of the vasa nervorum, the blood supply to peripheral nerves […] Mononeuritis multiplex is commonly caused by diabetes or vasculitis. […] Vasculitis – inflammation of blood vessels and subsequent obstruction to blood flow – can be primary (idiopathic) or secondary, in which case it is associated with an underlying condition such as rheumatoid arthritis. The vasculitides are classified according to the size of the vessel involved. […] Management of mononeuritis multiplex is based on potent immunosuppression […] and the treatment of underlying infections such as hepatitis.”)
Multiple myeloma. Bence-Jones protein. (“The combination of bone pain and elevated ESR and calcium is suggestive of multiple myeloma.”)
Osteoporosis. DEXA scan. T-score. (“Postmenopausal bone loss is the most common cause of osteoporosis, but secondary osteoporosis may occur in the context of a number of medical conditions […] Steroid-induced osteoporosis is a significant problem in medical practice. […] All patients receiving corticosteroids should have bone protection […] Pharmacological treatment in the form of calcium supplementation and biphosphonates to reduce osteoclast activity is effective but compliance is typically poor.”)
Osteomalacia. Rickets. Craniotabes.
Paget’s disease (see also this post). (“In practical terms, the main indication to treat Paget’s disease is pain […] although bone deformity or compression syndromes (or risk thereof) would also prompt therapy. The treatment of choice is a biphosphonate to diminish osteoclast activity”).
Stress fracture. Female athlete triad. (“Stress fractures are overuse injuries and occur when periosteal resorption exceeds bone formation. They are commonly seen in two main patient groups: soldiers may suffer so-called march fractures in the metatarsals, while athletes may develop them in different sites according to their sporting activity. Although the knee is a common site in runners due to excess mechanical loading, stress fractures may also result in non-weight-bearing sites due to repetitive and excessive traction […]. The classic symptom […] is of pain that occurs throughout running and crucially persists with rest; this is in contrast to shin splints, a traction injury to the tibial periosteum in which the pain diminishes somewhat with continued activity […] The crucial feature of rehabilitation is a graded return to sport to prevent progression or recurrence.”)
Psoriatic arthritis. (“Arthropathy and rash is a common combination in rheumatology […] Psoriatic arthritis is a common inflammatory arthropathy that affects up to 15 per cent of those with psoriasis. […] Nail disease is very helpful in differentiating psoriatic arthritis from other forms of inflammatory arthropathy.”)
Ehlers–Danlos syndromes. Marfan syndrome. Beighton (hypermobility) score.
Carpal tunnel syndrome. (“Carpal tunnel syndrome is the most common entrapment neuropathy […] The classic symptoms are of tingling in the sensory distribution of the median nerve (i.e. the lateral three and a half digits); loss of thumb abduction is a late feature. Symptoms are often worse at night (when the hand might be quite painful) and in certain postures […] The majority of cases are idiopathic, but pregnancy and rheumatoid arthritis are very common precipitating causes […] The majority of patients will respond well to conservative management […] If these measures fail, corticosteroid injection into the carpal tunnel can be very effective in up to 80 per cent of patients. Surgical decompression should be reserved for those with persistent disabling symptoms or motor loss.”)
Mixed connective tissue disease.
Crystal arthropathy. Tophus. Uric acid nephropathy. Chondrocalcinosis. (“In any patient presenting with an acutely painful and swollen joint, the most important diagnoses to consider are septic arthritis and crystal arthropathy. Crystal arthropathy such as gout is more common than septic arthritis […] Gout may be precipitated by diuretics, renal impairment and aspirin use”).
Familial Mediterranean fever. Amyloidosis.
Systemic lupus erythematosus (see also this). Jaccoud arthropathy. Lupus nephritis. (“Renal disease is the most feared complication of SLE.”)
Scleroderma. Raynaud’s phenomenon. (“Scleroderma is an uncommon disorder characterized by thickening of the skin and, to a greater or lesser degree, fibrosis of internal organs.”)
Henoch-Schönlein purpura. Cryoglobulinemia. (“Purpura are the result of a spontaneous extravasation of blood from the capillaries into the skin. If small they are known as petechiae, when they are large they are termed ecchymoses. There is an extensive differential diagnosis for purpura […] The combination of palpable purpura (distributed particularly over the buttocks and extensor surfaces of legs), abdominal pain, arthritis and renal disease is a classic presentation of Henoch–Schönlein purpura (HSP). HSP is a distinct and frequently self-limiting small-vessel vasculitis that can affect any age; but the majority of cases present in children aged 2–10 years, in whom the prognosis is more benign than the adult form, often remitting entirely within 3–4 months. The abdominal pain may mimic a surgical abdomen and can presage intussusception, haemorrhage or perforation. The arthritis, in contrast, is relatively mild and tends to affect the knees and ankles.”)
Rheumatic fever.
Erythema nodosum. (“Mild idiopathic erythema nodosum […] needs no specific treatment”).
Rheumatoid lung disease. Bronchiolitis obliterans. Methotrexate-induced pneumonitis. Hamman–Rich syndrome.
Antiphospholipid syndrome. Sapporo criteria. (“Antiphospholipid syndrome is a hypercoagulable state characterized by recurrent arteriovenous thrombosis and/or pregnancy morbidity in the presence of either a lupus anticoagulant or anticardiolipin antibody (both phospholipid-related proteins). […] The most common arteriovenous thrombotic events in antiphospholipid syndrome are deep venous thrombosis and pulmonary embolus […], but any part of the circulation may be involved, with arterial events such as myocardial infarction and stroke carrying a high mortality rate. Poor placental circulation is thought to be responsible for the high pregnancy morbidity, with recurrent first- and second-trimester loss and a higher rate of pre-eclampsia being typical clinical features.”)
Still’s disease. (“Consider inflammatory disease in cases of pyrexia of unknown origin.”)
Polymyalgia rheumatica. Giant cell arteritis. (“[P]olymyalgia rheumatica (PMR) [is] a systemic inflammatory syndrome affecting the elderly that is characterized by bilateral pain and stiffness in the shoulders and hip girdles. The stiffness can be profound and limits mobility although true muscle weakness is not a feature. […] The affected areas are diffusely tender, with movements limited by pain. […] care must be taken not to attribute joint inflammation to PMR until other diagnoses have been excluded; for example, a significant minority of RA patients may present with a polymyalgic onset. […] The treatment for PMR is low-dose corticosteroids. […] Many physicians would consider a dramatic response to low-dose prednisolone as almost diagnostic for PMR, so if a patient’s symptoms do not improve rapidly it is wise to re-evaluate the original diagnosis.”)
Relapsing polychondritis. (“Relapsing polychondritis is characterized histologically by inflammatory infiltration and later fibrosis of cartilage. Any cartilage, in any location, is at risk. […] Treatment of relapsing polychondritis is with corticosteroids […] Surgical reconstruction of collapsed structures is not an option as the deformity tends to continue postoperatively.”)
Dermatomyositis. Gottron’s Papules.
Enteropathic arthritis. (“A seronegative arthritis may develop in up to 15 per cent of patients with any form of inflammatory bowel disease, including ulcerative colitis (UC), Crohn’s disease or microscopic and collagenous colitis. The most common clinical presentations are a peripheral arthritis […] and spondyloarthritis.”)
Reflex sympathetic dystrophy.
Whipple’s disease. (“Although rare, consider Whipple’s disease in any patient presenting with malabsorption, weight loss and arthritis.”)
Wegener’s granulomatosis. (“Small-vessel vasculitis may cause a pulmonary-renal syndrome. […] The classic triad of Wegener’s granulomatosis is the presence of upper and lower respiratory tract disease and renal impairment.”)
Reactive arthritis. Reiter’s syndrome. (“Consider reactive arthritis in any patient presenting with a monoarthropathy. […] Reactive arthritis is generally benign, with up to 80 per cent making a full recovery.”)
Sarcoidosis. Löfgren syndrome.
Polyarteritis nodosa. (“Consider mesenteric ischaemia in any patient presenting with a systemic illness and postprandial abdominal pain.”)
Sjögren syndrome. Schirmer’s test.
Behçet syndrome.
Lyme disease. Erythema chronicum migrans. (“The combination of rash leading to arthralgia and cranial neuropathy is a classic presentation of Lyme disease.”)
Takayasu arteritis. (“Takayasu’s arteritis is an occlusive vasculitis leading to stenoses of the aorta and its principal branches. The symptoms and signs of the disease depend on the distribution of the affected vessel but upper limbs are generally affected more commonly than the iliac tributaries. […] the disease is a chronic relapsing and remitting condition […] The mainstay of treatment is high-dose corticosteroids plus a steroid-sparing agent such as methotrexate. […] Cyclophosphamide is reserved for those patients who do not achieve remission with standard therapy. Surgical intervention such as bypass or angioplasty may improve ischaemic symptoms once the inflammation is under control.”)
Lymphoma.
Haemarthrosis. (“Consider synovial tumours in a patient with unexplained haemarthrosis.”)
Juvenile idiopathic arthritis.
Drug-induced lupus erythematosus. (“Drug-induced lupus (DIL) generates a different spectrum of clinical manifestations from idiopathic disease. DIL is less severe than idiopathic SLE, and nephritis or central nervous system involvement is very rare. […] The most common drugs responsible for a lupus-like syndrome are procainamide, hydralazine, quinidine, isoniazid, methyldopa, chlorpromazine and minocycline. […] Treatment involves stopping the offending medication and the symptoms will gradually resolve.”)
Churg–Strauss syndrome.

July 8, 2018 Posted by | Books, Cancer/oncology, Cardiology, Gastroenterology, Immunology, Medicine, Nephrology, Neurology, Ophthalmology, Pharmacology | Leave a comment

American Naval History (I?)

This book was okay, but nothing special. Some of the topics covered in the book, those related to naval warfare during the Age of Sail, are topics about which I’ve literally read thousands of pages in the last year alone (I’ve so far read the first 14 books in Patrick O’Brian’s Aubrey-Maturin series, all of which take place during the Napoleonic Wars and which taken together amount to ~5,000+ pages) – so of course it’s easy for me to spot topics that are not covered, or not covered in the amount of detail they might have been. I have previously mentioned – and it bears repetition – that despite the fictional setting there is really quite a lot of ‘real history’ in O’Brian’s books, and if you want to know about naval warfare during the period in which they take place, I highly doubt anything remotely comparable to O’Brian’s work exists. On the other hand this book also covers topics about which I would previously have quite frankly admitted to being more or less completely ignorant, such as naval warfare during the American War of Independence or during the American Civil War.

I have deliberately limited my history reading in recent years, and the two main reasons I decided to read this one anyway were that a) I figured I needed a relatively ‘light’ non-fiction book (…incidentally, neither of the two non-fiction books I’m currently reading can in any way be described as light, but they’re ‘heavy’ in different ways), and b) I knew from experience that wikipedia tends to have a lot of great articles about naval topics, so even if the book turned out not to be all that great I’d still be able to wiki-binge on featured articles if I felt like it – you’d expect a book like this one to include a lot of names of ships, people, and events that are well covered there, even if they are not well covered in the book.

Below I’ve added some links related to the book’s coverage, as well as a few quotes from the book.

“From the start a few Americans dreamed of creating a standing navy constructed on the British model. Their ambition was prompted less by a conviction that such a force might actually be able to contend with the mighty Royal Navy than from a belief that an American navy would confer legitimacy on American nationhood. The first hesitant steps toward the fulfillment of this vision can be traced back to October 13, 1775, when the Continental Congress in Philadelphia agreed to purchase two armed merchantmen to attack British supply ships, the first congressional appropriation of any kind for a maritime force. […] October 13 remains the official birth date of the U.S. Navy. Two months later Congress took a more tangible step toward creating a navy by authorizing the construction of thirteen frigates, and a year later, in November 1776, Congress approved the construction of three ships of the line. This latter decision was stunningly ambitious. Ships of the line consumed prodigious amounts of seasoned timber and scores of heavy iron cannon and required a crew of between six hundred and eight hundred men. […] the subsequent history of these ships provided the skeptics of a standing navy with powerful evidence of the perils of overreach. […] unanticipated delays and unforeseen expenses. […] their record as warships was dismal […] The sad record of these thirteen frigates was so dispiriting that one of the champions of a standing American navy, John Adams, wrote to a friend that when he contemplated the history of the Continental Navy, it was hard for him to avoid tears. […] Washington’s navy were not part of a long-range plan to establish a permanent naval force. Rather they were an ad hoc response to particular circumstances, employed for a specific task in the full expectation that upon its completion they would revert to their former status as fishing schooners and merchant vessels. In that respect Washington’s navy is a useful metaphor for the role of American naval forces in the Revolutionary War and indeed throughout much of the early history of the Republic.”

“Continental Navy ships seized merchant ships whenever they could, but the most effective commerce raiders during the Revolutionary War were scores of privately owned vessels known as privateers. Though often called pirates in the British press, privateers held government-issued letters of marque, which were quite literally licenses to steal. […] Obtaining a letter of marque was relatively easy. Though records are incomplete, somewhere between 1,700 and 2,000 American ship owners applied to Congress for one, though only about eight hundred American privateers actually got to sea. […] Before the war was over, American privateers had captured an estimated six hundred British merchant ships […] The disappointing performance of the Continental Navy and the success of commerce raiding led many Americans of the revolutionary generation to conclude that the job of defending American interests at sea could be done at no cost by hundreds of privateers. Many saw privateers as the militia of the sea: available in time of need yet requiring no public funds to sustain them in peacetime. […] With independence secured, the American militia returned to their farms, and privateersmen once again became merchant seamen. The few Continental Navy warships that had survived the conflict were sold off or given away; the last of them, the frigate Alliance, was auctioned off in 1785 and became a merchant ship on the China trade. Of the three ships of the line authorized nearly seven years earlier, only one had been completed before the war ended, and it never saw active service. Seeing no practical use for such a vessel in peacetime, Congress voted to give her to France […] In effect the American navy simply ceased to exist.”

“The interminable Anglo-French conflict, which had worked decisively to America’s advantage during the Revolution, proved troublesome after 1793, when British diplomats convinced Portugal to join an anti-French coalition. In order to have the means to do so, Portugal signed a peace treaty with the city-state of Algiers on the north coast of Africa and ended its regular patrols of the Straits of Gibraltar. Almost at once raiding ships from Algiers passed out into the Atlantic, where they began to attack American shipping. The attacks provoked earnest discussion in Philadelphia about how to respond. It was evident that unleashing American privateers against the Algerines would have no effect at all, for the Barbary states had scant merchant trade for them to seize. What was needed was a national naval force that could both protect American commerce and punish those who attacked it. Appreciation of that reality led to a bill in Congress to authorize “a naval force, adequate to the protection of the commerce of the United States against the Algerine corsairs.” Once again the idea was not to create a permanent naval establishment but to produce a temporary force to meet an immediate need. The specific proposal was for the construction of six large frigates, a decision that essentially founded the U.S. Navy, though only a few of those who supported the bill conceived of it in those terms. Most saw it as a short-term solution to an immediate problem […] There were delays and unforeseen expenses in the construction process, and none of the ships had been completed when news arrived that American negotiators had concluded a treaty of peace with Algiers. Under its terms the United States would present Algiers with a thirty-six-gun frigate and pay $642,500, plus an additional annual payment of $21,600 in naval stores. In exchange Algiers would pledge not to attack American vessels. To modern eyes such terms are offensive — no better than simple extortion. But in 1795 paying extortion was the standard protocol for Western powers in dealing with the North African city-states.”

“Compared to ships of the line, or even to frigates, gunboats were tiny; most were only sixty to eighty feet long and had only a single mast and often only a single gun, generally a 24- or 32-pounder. They were also inexpensive; at roughly $5,000 each, more than two dozen of them could be had for the price of a single frigate. They were also strictly defensive weapons and therefore unlikely to provoke a confrontation with Britain. They appealed to the advocates of a militia-based naval force because when they were not in active service, they could be laid up in large sheds or barns. […] During Jefferson’s second term (1805-9) the United States built more than a hundred of these gunboats, boasting a total of 172 of them by the late summer of 1809. […] By building a gunboat navy, Jefferson provided a veneer of defense for the coast without sailing into the dangerous waters of the Anglo-French conflict. […] The [later] disappointing performance of the gunboats [during the War of 1812], especially when compared to the success of the frigates on the high seas, discredited the idea of relying on them for the nation’s maritime defense.”

“The kinds of tasks assigned to the U.S. Navy after 1820 were simply inappropriate for […] huge – and expensive to operate – warships. The day-to-day duties of the U.S. Navy involved dealing with smugglers, pirates, and the illegal slave trade, and deployment of ships of the line to deal with such issues was like hitting a tack with a sledgehammer. […] Pirates had always been a concern in the West Indies, but their numbers increased dramatically during the 1820s […]. Beginning in 1810 several of Spain’s unhappy colonies in Central and South America initiated efforts to win their independence via wars of liberation. These revolutionary governments were generous in passing out letters of marque to prey on Spanish trade. Operating mostly in tiny single-masted cutters and schooners—even the occasional rowboat—these privateers found slim pickings in targeting Spanish vessels, and they soon began to seize any merchant ship they could catch. By 1820 most of them had metamorphosed from licensed privateering into open piracy, and in 1822 the U.S. Navy established the West Indies Squadron to deal with them. […] Pirates were a problem in other parts of the world too. One trouble spot was in the Far East, especially in the much-traveled Straits of Malacca between Malaya and Sumatra.”

“Congress had declared the importation of slaves from Africa illegal after January 1, 1808, though there was no serious effort to interdict that human traffic until 1821, when the Navy established an African Squadron. Almost at once, however, its mission became controversial. […] After only two years Congress withdrew its support, and the African Squadron ceased to exist. After that only the Royal Navy made any serious effort to suppress the slave trade. The owners of the illicit slave ships saw an opportunity in these circumstances. Aware of how sensitive the Americans were about interference with their ships, slavers of every nationality — or no nationality at all — began flying the Stars and Stripes in order to deter inspection by the British. When the British saw through this ruse and stopped the ships anyway, the United States objected on principle. This Anglo-American dispute was resolved in the Webster-Ashburton Treaty of 1842 […] By its terms the British pledged to stop searching vessels flying the American flag, and the Americans pledged to police those vessels themselves”.

“Until the 1840s a young man became an officer in the U.S. Navy by being appointed a midshipman as a teenager and learning on the job while at sea. When he felt ready, he took an exam, which, if passed, made him a passed midshipman eligible for appointment to lieutenant when a vacancy occurred. With the emergence of steam engines as well as larger and more complex ordnance, aspiring officers had to master more technical and theoretical subjects. It was partly because of this that the U.S. Naval Academy was established […] in 1845 […] Another change during the 1850s was the abolition of flogging […] Given the rough character of the enlisted force, physical punishment was the standard penalty for a wide variety of major and minor infractions, and ship captains could prescribe anywhere from a dozen to a hundred lashes depending on the seriousness of the offense. For most such punishments all hands were called to bear witness in the belief that this offered a profound deterrent to future misconduct. It was unquestionably barbarous, but also effective, and it had been a part of naval life for more than a century. Nevertheless in September 1850 Congress declared it illegal. […] A decade later, in the midst of the Civil War, the U.S. Navy abolished another long-standing tradition, this one much beloved by the enlisted sailors. This was the daily grog ration: a half pint of rum or whisky, cut with water, that was issued to every sailor on board, even teenagers, once a day. Though the tradition was common to all navies and predated American independence, the United States was the first nation to abolish it, on September 1, 1862.”

“For more than two centuries naval warships had changed little. Wooden-hull ships propelled by sails carried muzzle-loaded iron gun tubes that fired solid shot. By 1850, however, that was changing, and changing swiftly. […] Over the ensuing decade steam ships became more ubiquitous as they became more efficient. Naval guns became much larger […] and the projectiles they fired were no longer merely solid iron balls but explosive shells. All of this occurred just in time to have a dramatic influence on the navies that fought in the American Civil War. […] by the 1850s [US] lawmakers recognized that the nation’s wooden sailing navy, much of it left over from the War of 1812, was growing increasingly obsolete, and as a result Congress passed a number of bills to modernize the navy. […] though the U.S. Navy remained small by European standards, when the Civil War began, more than half of the forty-two ships on active service were of the newest and most efficient type. By contrast, the Confederate States began the Civil War with no navy at all, and the South embraced the traditional policies of the weaker naval power: harbor defense and commerce raiding. […] Over the next […] years both sides built more ironclad warships.”

“[T]he Union could, and did, simply outbuild the Confederacy. Before the war was over, the Union produced more than sixty monitor-type ironclads, each class of them larger and more powerfully armed than the one before. […] by the spring of 1865, when Lee surrendered his army to Grant, the navy had grown to sixteen times its prewar size and boasted some of the most advanced warships in the world. […] When the Civil War ended, the U.S. Navy boasted a total of 671 warships, all but a few of them steamers, many of them ironclads, and some that were the most advanced of their type. Yet within a decade all but a few dozen had been sold off, scrapped, or placed in ordinary—mothballed for a future crisis. Conforming to the now familiar pattern, after a dramatic expansion to meet a crisis, the navy swiftly contracted at almost the moment the crisis ended. By 1870 the U.S. Navy had only fifty-two ships on active service. […] The advent of iron-armored warships during the Civil War fell short of being a full-scale technological revolution. Ever thicker armor led to ever larger naval guns, until it became evident that to make a ship invulnerable would render her virtually immobile. Armor continued to be used in warship construction after the war, but it was applied selectively, to protect engine spaces or magazines. […] While it did not affect the outcome of the war, Confederate commerce raiding did inflict a disproportionate amount of damage on Union shipping for a relatively small investment. Altogether Confederate commerce raiders captured and destroyed some 284 U.S. merchant ships.”

Links:

USS Hannah. USS Lee. HMS Thunderer (1760). USS Warren (1776). USS Hancock (1776). Governor Trumbull (1777 ship). HMS Drake (1777). HMS Serapis (1779). USS Chesapeake (1799).
William Howe, 5th Viscount Howe. Richard Howe, 1st Earl Howe. Benedict Arnold. John Paul Jones. Esek Hopkins. Richard Pearson. François Joseph Paul de Grasse. Charles Cornwallis, 1st Marquess Cornwallis. Thomas Graves, 1st Baron Graves. Richard Dale. Yusuf Karamanli. Richard Valentine Morris. Edward Preble. Stephen Decatur. James Barron.
Ship of the line.
Frigate.
Two-decker.
Privateer. Letter of marque. Commerce raiding.
Battle of Valcour Island. Battles of Saratoga. Battle of the Chesapeake.
Peace of Paris (1783).
Jay’s Treaty.
XYZ Affair. Quasi-War. Treaty of Mortefontaine.
First Barbary War.
Battle of Trafalgar. Battle of Austerlitz.
An Act for the relief of sick and disabled seamen.
Warhawks. War of 1812. Treaty of Ghent.
Board of Navy Commissioners.
USS Potomac (1822).
James Biddle.
Stephen Cassin.
Cornelius Stribling.
Missouri Compromise.
Matthew Fontaine Maury.
United States Exploring Expedition.
Matthew C. Perry. Bakumatsu. Convention of Kanagawa.
Adams–Onís Treaty.
Era of Good Feelings.
Mexican–American War.
USS Princeton (1843).
Anaconda Plan. Union blockade.
H. L. Hunley (submarine).
CSS Alabama. CSS Shenandoah.

July 7, 2018 Posted by | Books, History | Leave a comment

Words

The words included in this post are words which I encountered while reading the books: 100 cases in orthopaedics and rheumatology, Managing Gastrointestinal Complications of Diabetes, American Naval History: A very short introduction, Big Data: A very short introduction, Faust among Equals, Pocket Oncology, My Hero, and Odds and Gods.

Angulation. Soleus. Mucoid. Plantarflex. Pronation. Arthrosis. Syndesmosis. Ecchymosis. Diastasis. Epicondyle. Pucker. Enthesopathy. Paresis. Polyostotic. Riff. Livedo. Aphtha/aphthous. Pathergy. Annular. Synovium/synovial.

Scallop. Tastant. Incantatory. Radeau. Gundalow. Scrivener. Pebbledash. Chrominance. Tittle. Capitonym. Scot. Grayling. Terylene. Pied-à-terre. Solenoid. Fen. Anaglypta. Loud-hailer. Fauteuil. Dimpsy.

Seborrhea. Anasarca. Emetogenic. Trachelectomy. Brachytherapy. Nomogram. Trusty. Biff. Pantechnicon. Porpentine. Budgerigar. Nerk. Glade. Slinky. Gelignite. Boater. Seamless. Jabberwocky. Fardel. Kapok.

Aspidistra. Cowpat. Countershaft. Tinny. Ponce. Warp. Weft. Recension. Bandstand. Strimmer. Chasuble. Champer. Bourn. Khazi. Zimmer. Ossuary. Suppliant. Nock. Taramosalata. Quoit.

July 6, 2018 Posted by | Books, Language | Leave a comment

100 Cases in Orthopaedics and Rheumatology (I)

This book was decent, but it’s not as good as some of the books I’ve previously read in this series; in some of them the average length of the answer sections is 2-3 pages, a format I quite like, whereas in this book the average is more like 1-2 pages – which is a bit too short in my opinion.

Below I have added some links related to the first half of the book’s coverage and a few observations from the book.

Acute haematogenous osteomyelitis. (“There are two principal types of acute osteomyelitis: •haematogenous osteomyelitis •direct or contiguous inoculation osteomyelitis. Acute haematogenous osteomyelitis is characterized by an acute infection of the bone caused by the seeding of the bacteria within the bone from a remote source. This condition occurs primarily in children. […] In general, osteomyelitis has a bimodal age distribution. Acute haematogenous osteomyelitis is primarily a disease in children. Direct trauma and contiguous focus osteomyelitis are more common among adults and adolescents than in children. Spinal osteomyelitis is more common in individuals older than 45 years.”)
Haemophilic arthropathy. (“Haemophilic arthropathy is a condition associated with clotting disorder leading to recurrent bleeding in the joints. Over time this can lead to joint destruction.”)
Avascular necrosis of the femoral head. Trendelenburg’s sign. Gaucher’s disease. Legg–Calvé–Perthes disease. Ficat and Arlet classification of avascular necrosis of femoral head.
Osteosarcoma. Codman triangle. Enneking Classification. (“A firm, irregular mass fixed to underlying structures is more suspicious of a malignant lesion.”)
Ewing’s sarcoma. Haversian canal. (“This condition [ES] typically occurs in young patients and presents with pain and fever. [It] is the second most common primary malignant bone tumour (the first being osteosarcoma). The tumour is more common in males and affects children and young adults. The majority develop this between the ages of 10 and 20 years. […] The earliest symptom is pain, which is initially intermittent but becomes intense. Rarely, a patient may present with a pathological fracture. Eighty-five per cent of patients have chromosomal translocations associated with the 11/22 chromosome. Ewing’s sarcoma is potentially the most aggressive form of the primary bone tumours. […] Patients are usually assigned to one of two groups, the tumour being classified as either localized or metastatic disease. Tumours in the pelvis typically present late and are therefore larger with a poorer prognosis. Treatment comprises chemotherapy, surgical resection and/or radiotherapy. […] With localized disease, wide surgical excision of the tumour is preferred over radiotherapy if the involved bone is expendable (e.g. fibular, rib), or if radiotherapy would damage the growth plate. […] Non-metastatic disease survival rates are 55–70 per cent, compared to 22–33 per cent for metastatic disease. Patients require careful follow-up owing to the risk of developing osteosarcoma following radiotherapy, particularly in children in whom it can occur in up to 20 per cent of cases.”)
Clavicle Fracture. Floating Shoulder.
Proximal humerus fractures.
Lateral condyle fracture of the humerus. Salter-Harris fracture. (“Humeral condyle fractures occur most commonly between 6 and 10 years of age. […] fractures often appear subtle on radiographs. […] Operative management is essential for all displaced fractures”).
Distal radius fracture. (“Colles’ fractures account for over 90 per cent of distal radius fractures. Any injury to the median nerve can produce paraesthesia in the thumb, index finger, and middle and radial border of the ring finger […]. There is a bimodal age distribution of fractures to the distal radius with two peaks occurring. The first peak occurs in people aged 18–25 years, and a second peak in older people (>65 years). High-energy injuries are more common in the younger group and low-energy injuries in the older group. Osteoporosis may play a role in the occurrence of this later fracture. In the group of patients between 60 and 69 years, women far outnumber men. […] Assessment with plain radiographs is all that is needed for most fractures. […] The majority of distal radius fractures can be treated conservatively.”)
Gamekeeper’s thumb. Stener lesion.
Subtrochanteric Hip Fracture.
Supracondylar Femur Fractures. (“There is a bimodal distribution of fractures based on age and gender. Most high-energy distal femur fractures occur in males aged between 15 and 50 years, while most low-energy fractures occur in osteoporotic women aged 50 or above. The most common high-energy mechanism of injury is a road traffic accident (RTA), and the most common low-energy mechanism is a fall. […] In general, […] non-operative treatment does not work well for displaced fractures. […] Operative intervention is also indicated in the presence of open fractures and injuries associated with vascular injury. […] Total knee replacement is effective in elderly patients with articular fractures and significant osteoporosis, or pre-existing arthritis that is not amenable to open reduction and internal fixation. Low-demand elderly patients with non- or minimally displaced fractures can be managed conservatively. […] In general, this fracture can take a minimum of 3-4 months to unite.”)
Supracondylar humerus fracture. Gartland Classification of Supracondylar Humerus Fractures. (“Prior to the treatment of supracondylar fractures, it is essential to identify the type. Examination of the degree of swelling and deformity as well as a neurological and vascular status assessment of the forearm is essential. A vascular injury may present with signs of an acute compartment syndrome with pain, paraesthesia, pallor, and pulseless and tight forearm. Injury to the brachial artery may present with loss of the distal pulse. However, in the presence of a weak distal pulse, major vessel injury may still be present owing to the collateral circulation. […] Vascular insult can lead to Volkmann ischaemic contracture of the forearm. […] Malunion of the fracture may lead to cubitus varus deformity.”)
Femoral Shaft Fractures.
Femoral Neck Fractures. Garden’s classification. (“Hip fractures are the most common reason for admission to an orthopaedic ward, usually caused by a fall by an elderly person. The average age of a person with a hip fracture is 77 years. Mortality is high: about 10 per cent of people with a hip fracture die within 1 month, and about one-third within 12 months. However, fewer than half of deaths are attributable to the fracture, reflecting the high prevalence of comorbidity. The mental status of the patient is also important: senility is associated with a three-fold increased risk of sepsis and dislocation of prosthetic replacement when compared with mentally alert patients. The one-year mortality rate in these patients is considerable, being reported as high as 50 per cent.”)
Tibia Shaft Fractures. (“The tibia is the most frequent site of a long-bone fracture in the body. […] Open fractures are surgical emergencies […] Most closed tibial fractures can be treated conservatively using plaster of Paris.”)
Tibial plateau fracture. Schatzker classification.
Compartment syndrome. (“This condition is an orthopaedic emergency and can be limb- and life-threatening. Compartment syndrome occurs when perfusion pressure falls below tissue pressure in a closed fascial compartment and results in microvascular compromise. At this point, blood flow through the capillaries stops. In the absence of flow, oxygen delivery stops. Hypoxic injury causes cells to release vasoactive substances (e.g. histamine, serotonin), which increase endothelial permeability. Capillaries allow continued fluid loss, which increases tissue pressure and advances injury. Nerve conduction slows, tissue pH falls due to anaerobic metabolism, surrounding tissue suffers further damage, and muscle tissue suffers necrosis, releasing myoglobin. In untreated cases the syndrome can lead to permanent functional impairment, renal failure secondary to rhabdomyolysis, and death. Patients at risk of compartment syndrome include those with high-velocity injuries, long-bone fractures, high-energy trauma, penetrating injuries such as gunshot wounds and stabbing, and crush injuries, as well as patients on anticoagulants with trauma. The patient usually complains of severe pain that is out of proportion to the injury. An assessment of the affected limb may reveal swelling which feels tense, or hard compartments. Pain on passive range of movement of fingers or toes of the affected limb is a typical feature. Late signs comprise pallor, paralysis, paraesthesia and a pulseless limb. Sensory nerves begin to lose conductive ability, followed by motor nerves. […] Fasciotomy is the definitive treatment for compartment syndrome. The purpose of fasciotomy is to achieve prompt and adequate decompression so as to restore the tissue perfusion.”)
Talus fracture. Hawkins sign. Avascular necrosis.
Calcaneal fracture. (“The most common situation leading to calcaneal fracture is a young adult who falls from a height and lands on his or her feet. […] Patients often sustain occult injuries to their lumbar or cervical spine, and the proximal femur. A thorough clinical and radiological investigation of the spine area is mandatory in patients with calcaneal fracture.”)
Idiopathic scoliosis. Adam’s forward bend test. Romberg test. Cobb angle.
Cauda equina syndrome. (“[Cauda equina syndrome] is an orthopaedic emergency. The condition is characterized by the red-flag signs comprising low back pain, unilateral or bilateral sciatica, saddle anaesthesia with sacral sparing, and bladder and bowel dysfunctions. Urinary retention is the most consistent finding. […] Urgent spinal orthopaedic or neurosurgical consultation is essential, with transfer to a unit capable of undertaking any definitive surgery considered necessary. In the long term, residual weakness, incontinence, impotence and/or sensory abnormalities are potential problems if therapy is delayed. […] The prognosis improves if a definitive cause is identified and appropriate surgical spinal decompression occurs early. Late surgical compression produces varying results and is often associated with a poorer outcome.”)
Developmental dysplasia of the hip.
Osteoarthritis. Arthroplasty. Osteotomy. Arthrodesis. (“Early-morning stiffness that gradually diminishes with activity is typical of osteoarthritis. […] Patients with hip pathology can sometimes present with knee pain without any groin or thigh symptoms. […] Osteoarthritis most commonly affects middle-aged and elderly patients. Any synovial joint can develop osteoarthritis. This condition can lead to degeneration of articular cartilage and is often associated with stiffness.”)
Prepatellar bursitis.
Baker’s cyst.
Meniscus tear. McMurray test. Apley’s test. Lachman test.
Anterior cruciate ligament injury.
Achilles tendon rupture. Thompson Test.
Congenital Talipes Equinovarus. Ponseti method. Pirani score. (“Club foot is bilateral in about 50 per cent of cases and occurs in approximately 1 in 800 births.”)
Charcot–Marie–Tooth disease. Pes cavus. Claw toe deformity. Pes planus.
Hallux valgus. Hallux Rigidus.
Mallet toe deformity. Condylectomy. Syme amputation. (“Mallet toes are common in diabetics with peripheral neuropathy. […] Pain and/or a callosity is often the presenting complaint. This may also lead to nail deformity on the toe. Most commonly the deformity is present in the second toe. […] Footwear modification […] should be tried first […] Surgical management of mallet toe is indicated if the deformity becomes painful.”)
Hammer Toe.
Lisfranc injury. Fleck sign. (“The Lisfranc joint, which represents the articulation between the midfoot and forefoot, is composed of the five TMT [tarsometatarsal] joints. […] A Lisfranc injury encompasses everything from a sprain to a complete disruption of normal anatomy through the TMT joints. […] Lisfranc injuries are commonly undiagnosed and carry a high risk of chronic secondary disability.”)
Charcot joint. (“Charcot arthropathy results in progressive destruction of bone and soft tissues at weight-bearing joints. In its most severe form it may cause significant disruption of the bony architecture, including joint dislocations and fractures. Charcot arthropathy can occur at any joint but most commonly affects the lower regions: the foot and ankle. Bilateral disease occurs in fewer than 10 per cent of patients. Any condition that leads to a sensory or autonomic neuropathy can cause a Charcot joint. Charcot arthropathy can occur as a complication of diabetes, syphilis, alcoholism, leprosy, meningomyelocele, spinal cord injury, syringomyelia, renal dialysis and congenital insensitivity to pain. In the majority of cases, non-operative methods are preferred. The principles of management are to provide immobilization of the affected joint and reduce any areas of stress on the skin. Immobilization is usually accomplished by casting.”)
Lateral epicondylitis (tennis elbow). (“For work-related lateral epicondylitis, a systematic review identified three risk factors: handling tools heavier than 1 kg, handling loads heavier than 20 kg at least ten times per day, and repetitive movements for more than two hours per day. […] Up to 95 per cent of patients with tennis elbow respond to conservative measures.”)
Medial Epicondylitis.
De Quervain’s tenosynovitis. Finkelstein test. Intersection syndrome. Wartenberg’s syndrome.
Trigger finger.
Adhesive capsulitis (‘frozen shoulder’). (“Frozen shoulder typically has three phases: the painful phase, the stiffening phase and the thawing phase. During the initial phase there is a gradual onset of diffuse shoulder pain lasting from weeks to months. The stiffening phase is characterized by a progressive loss of motion that may last up to a year. The majority of patients lose glenohumeral external rotation, internal rotation and abduction during this phase. The final, thawing phase ranges from weeks to months and constitutes a period of gradual motion improvement. Once in this phase, the patient may require up to 9 months to regain a fully functional range of motion. There is a higher incidence of frozen shoulder in patients with diabetes compared with the general population. The incidence among patients with insulin-dependent diabetes is even higher, with an increased frequency of bilateral frozen shoulder. Adhesive capsulitis has also been reported in patients with hyperthyroidism, ischaemic heart disease, and cervical spondylosis. Non-steroidal anti-inflammatory drugs (NSAIDs) are recommended in the initial treatment phase. […] A subgroup of patients with frozen shoulder syndrome often fail to improve despite conservative measures. In these cases, interventions such as manipulation, distension arthrography or open surgical release may be beneficial.” [A while back I covered some papers on adhesive capsulitis and diabetes here (part iii) – US].
Dupuytren’s Disease. (“Dupuytren’s contracture is a benign, slowly progressive fibroproliferative disease of the palmar fascia. […] The disease presents most commonly in the ring and little fingers and is bilateral in 45 per cent of cases. […] Dupuytren’s disease is more common in males and people of northern European origin. It can be associated with prior hand trauma, alcoholic cirrhosis, epilepsy (due to medications such as phenytoin), and diabetes. [“Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D” – I usually don’t like such unspecific reported prevalences (what does ‘up to’ really mean?), but the point is that this is not a 1 in a 100 complication among diabetics; it seems to be a relatively common complication in type 1 DM – US] The prevalence increases with age. Mild cases may not need any treatment. Surgery is indicated in progressive contractures and established deformity […] Recurrence or extension of the disease after operation is not uncommon”).

July 1, 2018 Posted by | Books, Cancer/oncology, Diabetes, Medicine, Neurology | Leave a comment

Frontiers in Statistical Quality Control (I)

“The XIth International Workshop on Intelligent Statistical Quality Control took place in Sydney, Australia from August 20 to August 23, 2013. […] The 23 papers in this volume were carefully selected by the scientific program committee, reviewed by its members, revised by the authors and, finally, adapted by the editors for this volume. The focus of the book lies on three major areas of statistical quality control: statistical process control (SPC), acceptance sampling and design of experiments. The majority of the papers deal with statistical process control while acceptance sampling, and design of experiments are treated to a lesser extent.”

I’m currently reading this book. It’s quite technical and a bit longer than many of the other non-fiction books I’ve read this year (…though shorter than some; it is still ~400 pages of content exclusively devoted to statistical papers), so it may take me a while to finish it. I figured that the fact that I may not finish it for a while was not a good argument against blogging relevant sections of the book now, especially as it’s already been some time since I read the first few chapters.

When reading a book like this one I care a lot more about understanding the concepts than about understanding the proofs, so as usual the amount of math included in the post is limited; please don’t assume it’s because there are no equations in the book.

Below I have added some ideas and observations from the first 100 pages or so of the book’s coverage.

“A growing number of [statistical quality control] applications involve monitoring with rare event data. […] The most common approaches for monitoring such processes involve using an exponential distribution to model the time between the events or using a Bernoulli distribution to model whether or not each opportunity for the event results in its occurrence. The use of a sequence of independent Bernoulli random variables leads to a geometric distribution for the number of non-occurrences between the occurrences of the rare events. One surveillance method is to use a power transformation on the exponential or geometric observations to achieve approximate normality of the in control distribution and then use a standard individuals control chart. We add to the argument that use of this approach is very counterproductive and cover some alternative approaches. We discuss the choice of appropriate performance metrics. […] Most often the focus is on detecting process deterioration, i.e., an increase in the probability of the adverse event or a decrease in the average time between events. Szarka and Woodall (2011) reviewed the extensive number of methods that have been proposed for monitoring processes using Bernoulli data. Generally, it is difficult to better the performance of the Bernoulli cumulative sum (CUSUM) chart of Reynolds and Stoumbos (1999). The Bernoulli and geometric CUSUM charts can be designed to be equivalent […] Levinson (2011) argued that control charts should not be used with healthcare rare event data because in many situations there is an assignable cause for each error, e.g., each hospital-acquired infection or serious prescription error, and each incident should be investigated. We agree that serious adverse events should be investigated whether or not they result in a control chart signal. The investigation of rare adverse events, however, and the implementation of process improvements to prevent future such errors, does not preclude using a control chart to determine if the rate of such events has increased or decreased over time. In fact, a control chart can be used to evaluate the success of any process improvement initiative.”
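
To make the Bernoulli CUSUM idea a little more concrete, here is a minimal sketch of a one-sided chart of this general likelihood-ratio form (my own illustration, not code from the book or from Reynolds and Stoumbos; the in-control probability p0, the out-of-control probability p1 and the decision limit h are values I picked purely for the example):

```python
import numpy as np

def bernoulli_cusum(x, p0, p1, h):
    """One-sided Bernoulli CUSUM for detecting an increase in the event
    probability from p0 to p1. Returns the (1-based) index of the first
    signal, or None if the chart never signals."""
    llr_event = np.log(p1 / p0)                  # added when the adverse event occurs
    llr_no_event = np.log((1 - p1) / (1 - p0))   # small negative increment otherwise
    s = 0.0
    for t, xt in enumerate(x, start=1):
        s = max(0.0, s + (llr_event if xt else llr_no_event))
        if s > h:
            return t
    return None

# Illustration: in-control rate of 1 event per 1,000 opportunities, chart tuned
# to detect a doubling; the data below are generated at the doubled rate.
rng = np.random.default_rng(1)
p0, p1, h = 0.001, 0.002, 3.0            # assumed design values, not from the book
x = rng.random(50_000) < p1
print(bernoulli_cusum(x, p0, p1, h))     # opportunity number at which the chart signals
```

In practice the decision limit h would of course be chosen (e.g. by simulation) to give an acceptable in-control performance rather than picked out of thin air, as it is here.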

“The choice of appropriate performance metrics for comparing surveillance schemes for monitoring Bernoulli and exponential data is quite important. The usual Average Run Length (ARL) metric refers to the average number of points plotted on the chart until a signal is given. This metric is most clearly appropriate when the time between the plotted points is constant. […] In some cases, such as in monitoring the number of near-miss accidents, it may be informative to use a metric that reflects the actual time required to obtain an out-of-control signal. Thus one can consider the number of Bernoulli trials until an out-of-control signal is given for Bernoulli data, leading to its average, the ANOS. The ANOS will be proportional to the average time before a signal if the rate at which the Bernoulli trials are observed is constant over time. For exponentially distributed data one could consider the average time to signal, the ATS. If the process is stable, then ANOS = ARL / p and ATS = ARL * θ, where p and θ are the Bernoulli probability and the exponential mean, respectively. […] To assess out-of-control performance we believe it is most realistic to consider steady-state performance where the shift in the parameter occurs at some time after monitoring has begun. […] Under this scenario one cannot easily convert the ARL metric to the ANOS and ATS metrics. Consideration of steady state performance of competing methods is important because some methods have an implicit headstart feature that results in good zero-state performance, but poor steady-state performance.”
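
The in-control conversions mentioned in the quote amount to a two-line calculation; a quick numerical illustration (my own numbers, chosen only for illustration):

```python
# Converting the ARL into observation- and time-based metrics for a stable process.
# The numbers below are illustrative assumptions, not values from the book.
ARL = 370.0         # average number of plotted points until a signal
p = 0.001           # Bernoulli probability of the adverse event per opportunity
theta = 2.5         # mean of the exponential time between events (say, days)

ANOS = ARL / p      # average number of Bernoulli observations to signal
ATS = ARL * theta   # average time to signal for exponential inter-event data
print(f"ANOS = {ANOS:,.0f} observations, ATS = {ATS:,.1f} days")
```

As the authors note, these conversions only hold for the stable in-control case, not for the steady-state out-of-control scenario.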

“Data aggregation is frequently done when monitoring rare events and for count data generally. For example, one might monitor the number of accidents per month in a plant or the number of patient falls per week in a hospital. […] Schuh et al. (2013) showed […] that there can be significantly long expected delays in detecting process deterioration when data are aggregated over time even when there are few samples with zero events. One can always aggregate data over long enough time periods to avoid zero counts, but the consequence is slower detection of increases in the rate of the adverse event. […] aggregating event data over fixed time intervals, as frequently done in practice, can result in significant delays in detecting increases in the rate of adverse events. […] Another type of aggregation is to wait until one has observed a given number of events before updating a control chart based on a proportion or waiting time. […] This type of aggregation […] does not appear to delay the detection of process changes nearly as much as aggregating data over fixed time periods. […] We believe that the adverse effect of aggregating data over time has not been fully appreciated in practice and more research work is needed on this topic. Only a couple of the most basic scenarios for count data have been studied. […] Virtually all of the work on monitoring the rate of rare events is based on the assumption that there is a sustained shift in the rate. In some applications the rate change may be transient. In this scenario other performance metrics would be needed, such as the probability of detecting the process shift during the transient period. The effect of data aggregation over time might be larger if shifts in the parameter are not sustained.”
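
A crude way to get a feel for the cost of aggregation is to simulate it. The sketch below is mine, with arbitrary parameter choices, and it reuses the same CUSUM recursion as in the earlier sketch; it compares the average number of opportunities needed to detect a doubled event rate when monitoring every Bernoulli outcome versus monitoring counts aggregated over fixed blocks. The two charts are not calibrated to identical false-alarm rates, so the comparison is only indicative:

```python
import numpy as np

rng = np.random.default_rng(7)
p0, p1 = 0.001, 0.002       # in-control and post-shift event probabilities (assumed)
block = 10_000              # opportunities aggregated per plotted point (assumed)
horizon = 200_000           # opportunities simulated after the shift

# 3-sigma upper limit for a chart on aggregated counts, based on the in-control rate
ucl = block * p0 + 3 * np.sqrt(block * p0 * (1 - p0))

def one_run():
    x = rng.random(horizon) < p1
    # (a) chart on counts aggregated over fixed blocks of opportunities
    counts = x.reshape(-1, block).sum(axis=1)
    hits = np.nonzero(counts > ucl)[0]
    t_agg = (hits[0] + 1) * block if hits.size else np.nan
    # (b) Bernoulli CUSUM updated at every opportunity (decision limit h assumed)
    up, down, h, s = np.log(p1 / p0), np.log((1 - p1) / (1 - p0)), 3.0, 0.0
    t_cusum = np.nan
    for t, xt in enumerate(x, start=1):
        s = max(0.0, s + (up if xt else down))
        if s > h:
            t_cusum = t
            break
    return t_agg, t_cusum

runs = np.array([one_run() for _ in range(200)])
print("mean opportunities to signal (aggregated chart):", np.nanmean(runs[:, 0]))
print("mean opportunities to signal (Bernoulli CUSUM):", np.nanmean(runs[:, 1]))
```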

Big data is a popular term that is used to describe the large, diverse, complex and/or longitudinal datasets generated from a variety of instruments, sensors and/or computer-based transactions. […] The acquisition of data does not automatically transfer to new knowledge about the system under study. […] To be able to gain knowledge from big data, it is imperative to understand both the scale and scope of big data. The challenges with processing and analyzing big data are not only limited to the size of the data. These challenges include the size, or volume, as well as the variety and velocity of the data (Zikopoulos et al. 2012). Known as the 3V’s, the volume, variety, and/or velocity of the data are the three main characteristics that distinguish big data from the data we have had in the past. […] Many have suggested that there are more V’s that are important to the big data problem such as veracity and value (IEEE BigData 2013). Veracity refers to the trustworthiness of the data, and value refers to the value that the data adds to creating knowledge about a topic or situation. While we agree that these are important data characteristics, we do not see these as key features that distinguish big data from regular data. It is important to evaluate the veracity and value of all data, both big and small. Both veracity and value are related to the concept of data quality, an important research area in the Information Systems (IS) literature for more than 50 years. The research literature discussing the aspects and measures of data quality is extensive in the IS field, but seems to have reached a general agreement that the multiple aspects of data quality can be grouped into several broad categories […]. Two of the categories relevant here are contextual and intrinsic dimensions of data quality. Contextual aspects of data quality are context specific measures that are subjective in nature, including concepts like value-added, believability, and relevance. […] Intrinsic aspects of data quality are more concrete in nature, and include four main dimensions: accuracy, timeliness, consistency, and completeness […] From our perspective, many of the contextual and intrinsic aspects of data quality are related to the veracity and value of the data. That said, big data presents new challenges in conceptualizing, evaluating, and monitoring data quality.”

The application of SPC methods to big data is similar in many ways to the application of SPC methods to regular data. However, many of the challenges inherent to properly studying and framing a problem can be more difficult in the presence of massive amounts of data. […] it is important to note that building the model is not the end-game. The actual use of the analysis in practice is the goal. Thus, some consideration needs to be given to the actual implementation of the statistical surveillance applications. This brings us to another important challenge, that of the complexity of many big data applications. SPC applications have a tradition of back of the napkin methods. The custom within SPC practice is the use of simple methods that are easy to explain like the Shewhart control chart. These are often the best methods to use to gain credibility because they are easy to understand and easy to explain to a non-statistical audience. However, big data often does not lend itself to easy-to-compute or easy-to-explain methods. While a control chart based on a neural net may work well, it may be so difficult to understand and explain that it may be abandoned for inferior, yet simpler methods. Thus, it is important to consider the dissemination and deployment of advanced analytical methods in order for them to be effectively used in practice. […] Another challenge in monitoring high dimensional data sets is the fact that not all of the monitored variables are likely to shift at the same time; thus, some method is necessary to identify the process variables that have changed. In high dimensional data sets, the decomposition methods used with multivariate control charts can become very computationally expensive. Several authors have considered variable selection methods combined with control charts to quickly detect process changes in a variety of practical scenarios including fault detection, multistage processes, and profile monitoring. […] All of these methods based on variable selection techniques are based on the idea of monitoring subsets of potentially faulty variables. […] Some variable reduction methods are needed to better identify shifts. We believe that further work in the areas combining variable selection methods and surveillance are important for quickly and efficiently diagnosing changes in high-dimensional data.
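
To make the high-dimensional monitoring and ‘which variables shifted?’ problem a bit more concrete, here is a small sketch of a Hotelling-type T² check followed by a very crude diagnosis step (my own illustration; the dimension, reference-sample size and control limit are arbitrary assumptions, and the variable-selection-based methods referred to above are considerably more sophisticated than simply ranking standardized deviations):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 50                        # number of monitored variables (assumed)
n_ref = 500                   # size of the in-control reference sample (assumed)

# Phase I: estimate the in-control mean vector and covariance matrix
ref = rng.normal(size=(n_ref, p))
mu0 = ref.mean(axis=0)
sigma0 = np.cov(ref, rowvar=False)
sigma0_inv = np.linalg.inv(sigma0)

ucl = 90.0                    # assumed limit; in practice set from an F/chi-square quantile

def monitor(x):
    """Hotelling-type T^2 check of one new observation, with a crude diagnosis step."""
    d = x - mu0
    t2 = float(d @ sigma0_inv @ d)
    if t2 <= ucl:
        return t2, None
    # very crude 'variable selection': flag the variables with the largest
    # standardized deviations from the in-control mean as the likely culprits
    z = np.abs(d) / np.sqrt(np.diag(sigma0))
    return t2, np.argsort(z)[-5:][::-1]

# A new observation in which only variables 3 and 17 have actually shifted
x_new = rng.normal(size=p)
x_new[[3, 17]] += 6.0
t2, flagged = monitor(x_new)
print(f"T^2 = {t2:.1f}, flagged variables: {flagged}")
```

With 50 variables this is still cheap; the point made above is that with thousands of variables both the decomposition and the calibration of the limits become much more demanding.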

“A multiple stream process (MSP) is a process that generates several streams of output. From the statistical process control standpoint, the quality variable and its specifications are the same in all streams. A classical example is a filling process such as the ones found in beverage, cosmetics, pharmaceutical and chemical industries, where a filler machine may have many heads. […] Although multiple-stream processes are found very frequently in industry, the literature on schemes for the statistical control of such kind of processes is far from abundant. This paper presents a survey of the research on this topic. […] The first specific techniques for the statistical control of MSPs are the group control charts (GCCs) […] Clearly the chief motivation for these charts was to avoid the proliferation of control charts that would arise if every stream were controlled with a separate pair of charts (one for location and other for spread). Assuming the in-control distribution of the quality variable to be the same in all streams (an assumption which is sometimes too restrictive), the control limits should be the same for every stream. So, the basic idea is to build only one chart (or a pair of charts) with the information from all streams.”
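
The basic group-chart idea is simple enough to sketch in a few lines (my own code, not from any of the papers surveyed; the in-control parameters, limits and data are made up, and I only do the location chart):

```python
import numpy as np

rng = np.random.default_rng(3)
n_streams, n_obs, n_samples = 8, 4, 30      # streams, sample size per stream, sampling times
mu0, sigma0 = 100.0, 2.0                    # assumed in-control mean and standard deviation

# Common 3-sigma limits for the stream sample means (identical for every stream)
lcl = mu0 - 3 * sigma0 / np.sqrt(n_obs)
ucl = mu0 + 3 * sigma0 / np.sqrt(n_obs)

# Simulated data: all streams in control except stream 5, which shifts up halfway through
data = rng.normal(mu0, sigma0, size=(n_samples, n_streams, n_obs))
data[15:, 5, :] += 3.0

stream_means = data.mean(axis=2)            # sample mean of each stream at each time
for t, means in enumerate(stream_means, start=1):
    hi, lo = means.argmax(), means.argmin()
    # The group chart only records the extreme streams against the common limits
    if means[hi] > ucl:
        print(f"sample {t}: stream {hi} above UCL ({means[hi]:.2f} > {ucl:.2f})")
    if means[lo] < lcl:
        print(f"sample {t}: stream {lo} below LCL ({means[lo]:.2f} < {lcl:.2f})")
```

Note that with several streams sharing the same 3-sigma limits the overall false-alarm rate is inflated, which is why corrections of the Bonferroni/Dunn-Sidak type mentioned in the next quote are usually applied in practice.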

“The GCC will work well if the values of the quality variable in the different streams are independent and identically distributed, that is, if there is no cross-correlation between streams. However, such an assumption is often unrealistic. In many real multiple-stream processes, the value of the observed quality variable is typically better described as the sum of two components: a common component (let’s refer to it as “mean level”), exhibiting variation that affects all streams in the same way, and the individual component of each stream, which corresponds to the difference between the stream observation and the common mean level. […] [T]he presence of the mean level component leads to reduced sensitivity of Boyd’s GCC to shifts in the individual component of a stream if the variance […] of the mean level is large with respect to the variance […] of the individual stream components. Moreover, the GCC is a Shewhart-type chart; if the data exhibit autocorrelation, the traditional form of estimating the process standard deviation (for establishing the control limits) based on the average range or average standard deviation of individual samples (even with the Bonferroni or Dunn-Sidak correction) will result in too frequent false alarms, due to the underestimation of the process total variance. […] [I]in the converse situation […] the GCC will have little sensitivity to causes that affect all streams — at least, less sensitivity than would have a chart on the average of the measurements across all streams, since this one would have tighter limits than the GCC. […] Therefore, to monitor MSPs with the two components described, Mortell and Runger (1995) proposed using two control charts: First, a chart for the grand average between streams, to monitor the mean level. […] For monitoring the individual stream components, they proposed using a special range chart (Rt chart), whose statistic is the range between streams, that is, the difference between the largest stream average and the smallest stream average […] the authors commented that both the chart on the average of all streams and the Rt chart can be used even when at each sampling time only a subset of the streams are sampled (provided that the number of streams sampled remains constant). The subset can be varied periodically or even chosen at random. […] it is common in practice to measure only a subset of streams at each sampling time, especially when the number of streams is large. […] Although almost the totality of Mortell and Runger’s paper is about the monitoring of the individual streams, the importance of the chart on the average of all streams for monitoring the mean level of the process cannot be overemphasized.”
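
A rough sketch of the two-chart scheme described above — a chart on the grand average across streams for the mean level, and an Rt chart on the range between the stream averages for the individual components (again my own code; the variance components, the shift and the control limits are ad hoc assumptions, whereas in practice the limits would be set to achieve a desired false-alarm rate):

```python
import numpy as np

rng = np.random.default_rng(11)
n_streams, n_samples = 6, 40
mean_level_sd, stream_sd = 1.0, 0.5        # assumed common-component and stream-component SDs
target = 50.0

# Simulated process: a wandering common mean level plus individual stream components;
# stream 2 drifts away from the others in the second half of the series.
common = target + rng.normal(0, mean_level_sd, size=n_samples)
individual = rng.normal(0, stream_sd, size=(n_samples, n_streams))
individual[20:, 2] += 2.0
obs = common[:, None] + individual

grand_avg = obs.mean(axis=1)               # statistic for the mean-level chart
rt = obs.max(axis=1) - obs.min(axis=1)     # range between the largest and smallest stream

# Ad hoc limits for the illustration (assumed, not taken from the paper)
avg_lcl, avg_ucl = target - 3 * mean_level_sd, target + 3 * mean_level_sd
rt_ucl = 5 * stream_sd

for t in range(n_samples):
    if not (avg_lcl <= grand_avg[t] <= avg_ucl):
        print(f"sample {t+1}: mean-level chart signals ({grand_avg[t]:.2f})")
    if rt[t] > rt_ucl:
        print(f"sample {t+1}: between-stream range chart signals (Rt = {rt[t]:.2f})")
```

In this simulated example the drifting stream shows up on the Rt chart while barely moving the grand average, which is exactly the division of labour the two charts are meant to provide.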

“Epprecht and Barros (2013) studied a filling process application where the stream variances were similar, but the stream means differed, wandered, changed from day to day, were very difficult to adjust, and the production runs were too short to enable good estimation of the parameters of the individual streams. The solution adopted to control the process was to adjust the target above the nominal level to compensate for the variation between streams, as a function of the lower specification limit, of the desired false-alarm rate and of a point (shift, power) arbitrarily selected. This would be a MSP version of “acceptance control charts” (Montgomery 2012, Sect. 10.2) if taking samples with more than one observation per stream [is] feasible.”

Most research works consider a small to moderate number of streams. Some processes may have hundreds of streams, and in this case the issue of how to control the false-alarm rate while keeping enough detection power […] becomes a real problem. […] Real multiple-stream processes can be very ill-behaved. The author of this paper has seen a plant with six 20-stream filling processes in which the stream levels had different means and variances and could not be adjusted separately (one single pump and 20 hoses). For many real cases with particular twists like this one, it happens that no previous solution in the literature is applicable. […] The appropriateness and efficiency of [different monitoring methods] depends on the dynamic behaviour of the process over time, on the degree of cross-correlation between streams, on the ratio between the variabilities of the individual streams and of the common component (note that these three factors are interrelated), on the type and size of shifts that are likely and/or relevant to detect, on the ease or difficulty to adjust all streams in the same target, on the process capability, on the number of streams, on the feasibility of taking samples of more than one observation per stream at each sampling time (or even the feasibility of taking one observation of every stream at each sampling time!), on the length of the production runs, and so on. So, the first problem in a practical application is to characterize the process and select the appropriate monitoring scheme (or to adapt one, or to develop a new one). This analysis may not be trivial for the average practitioner in industry. […] Jirasettapong and Rojanarowan (2011) is the only work I have found on the issue of selecting the most suitable monitoring scheme for an MSP. It considers only a limited number of alternative schemes and a few aspects of the problem. More comprehensive analyses are needed.”

June 27, 2018 Posted by | Books, Data, Engineering, Statistics | Leave a comment

Oceans (II)

In this post I have added some more observations from the book and some more links related to the book’s coverage.

“Almost all the surface waves we observe are generated by wind stress, acting either locally or far out to sea. Although the wave crests appear to move forwards with the wind, this does not occur. Mechanical energy, created by the original disturbance that caused the wave, travels through the ocean at the speed of the wave, whereas water does not. Individual molecules of water simply move back and forth, up and down, in a generally circular motion. […] The greater the wind force, the bigger the wave, the more energy stored within its bulk, and the more energy released when it eventually breaks. The amount of energy is enormous. Over long periods of time, whole coastlines retreat before the pounding waves – cliffs topple, rocks are worn to pebbles, pebbles to sand, and so on. Individual storm waves can exert instantaneous pressures of up to 30,000 kilograms […] per square metre. […] The rate at which energy is transferred across the ocean is the same as the velocity of the wave. […] waves typically travel at speeds of 30-40 kilometres per hour, and […] waves with a greater wavelength will travel faster than those with a shorter wavelength. […] With increasing wind speed and duration over which the wind blows, the wave height, period, and length all increase. The distance over which the wind blows is known as fetch, and is critical in influencing the growth of waves — the greater the area of ocean over which a storm blows, then the larger and more powerful the waves generated. The three stages in wave development are known as sea, swell, and surf. […] The ocean is highly efficient at transmitting energy. Water offers so little resistance to the small orbital motion of water particles in waves that individual wave trains may continue for thousands of kilometres. […] When the wave train encounters shallow water — say 50 metres for a 100-metre wavelength — the waves first feel the bottom and begin to slow down in response to frictional resistance. Wavelength decreases, the crests bunch closer together, and wave height increases until the wave becomes unstable and topples forwards as surf. […] Very often, waves approach obliquely to the coast and set up a significant transfer of water and sediment along the shoreline. The long-shore currents so developed can be very powerful, removing beach sand and building out spits and bars across the mouths of estuaries.” (People who’re interested in knowing more about these topics will probably enjoy Fredric Raichlen’s book – I did, US.)
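
A short note from me: the claim that longer waves travel faster, and that waves slow down once they ‘feel the bottom’, follows from the standard dispersion relation for surface gravity waves. The numbers below are a back-of-envelope check of mine, using the usual deep-water and shallow-water approximations rather than anything taken from the book; note that the deep-water speed for a 100-metre wavelength comes out in the same 30-40 km/h range quoted above, and that such a wave starts to feel the bottom at a depth of roughly half its wavelength, i.e. around the 50 metres mentioned in the quote. – US

```python
import math

g = 9.81  # m/s^2

def deep_water_speed(wavelength_m):
    # Deep-water approximation (depth greater than ~half the wavelength): c = sqrt(g*L/(2*pi))
    return math.sqrt(g * wavelength_m / (2 * math.pi))

def shallow_water_speed(depth_m):
    # Shallow-water approximation (depth much smaller than the wavelength): c = sqrt(g*h)
    return math.sqrt(g * depth_m)

c = deep_water_speed(100)         # a 100-metre wavelength swell in deep water
print(f"deep water: {c:.1f} m/s ≈ {c * 3.6:.0f} km/h")    # ~12.5 m/s ≈ 45 km/h

c_shore = shallow_water_speed(5)  # the same wave over a 5-metre-deep nearshore bottom
print(f"5 m depth:  {c_shore:.1f} m/s ≈ {c_shore * 3.6:.0f} km/h")  # ~7 m/s ≈ 25 km/h
```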

“Wind is the principal force that drives surface currents, but the pattern of circulation results from a more complex interaction of wind drag, pressure gradients, and Coriolis deflection. Wind drag is a very inefficient process by which the momentum of moving air molecules is transmitted to water molecules at the ocean surface setting them in motion. The speed of water molecules (the current), initially in the direction of the wind, is only about 3–4 per cent of the wind speed. This means that a wind blowing constantly over a period of time at 50 kilometres per hour will produce a water current of about 1 knot (2 kilometres per hour). […] Although the movement of wind may seem random, changing from one day to the next, surface winds actually blow in a very regular pattern on a planetary scale. The subtropics are known for the trade winds with their strong easterly component, and the mid-latitudes for persistent westerlies. Wind drag by such large-scale wind systems sets the ocean waters in motion. The trade winds produce a pair of equatorial currents moving to the west in each ocean, while the westerlies drive a belt of currents that flow to the east at mid-latitudes in both hemispheres. […] Deflection by the Coriolis force and ultimately by the position of the continents creates very large oval-shaped gyres in each ocean.”

“The control exerted by the oceans is an integral and essential part of the global climate system. […] The oceans are one of the principal long-term stores on Earth for carbon and carbon dioxide […] The oceans are like a gigantic sponge holding fifty times more carbon dioxide than the atmosphere […] the sea surface acts as a two-way control valve for gas transfer, which opens and closes in response to two key properties – gas concentration and ocean stirring. First, the difference in gas concentration between the air and sea controls the direction and rate of gas exchange. Gas concentration in water depends on temperature—cold water dissolves more carbon dioxide than warm water, and on biological processes—such as photosynthesis and respiration by microscopic plants, animals, and bacteria that make up the plankton. These transfer processes affect all gases […]. Second, the strength of the ocean-stirring process, caused by wind and foaming waves, affects the ease with which gases are absorbed at the surface. More gas is absorbed during stormy weather and, once dissolved, is quickly mixed downwards by water turbulence. […] The transfer of heat, moisture, and other gases between the ocean and atmosphere drives small-scale oscillations in climate. The El Niño Southern Oscillation (ENSO) is the best known, causing 3–7-year climate cycles driven by the interaction of sea-surface temperature and trade winds along the equatorial Pacific. The effects are worldwide in their impact through a process of atmospheric teleconnection — causing floods in Europe and North America, monsoon failure and severe drought in India, South East Asia, and Australia, as well as decimation of the anchovy fishing industry off Peru.”

“Earth’s climate has not always been as it is today […] About 100 million years ago, for example, palm trees and crocodiles lived as far north as 80°N – the equivalent of Arctic Canada or northern Greenland today. […] Most of the geological past has enjoyed warm conditions. These have been interrupted at irregular intervals by cold and glacial climates of altogether shorter duration […][,] the last [of them] beginning around 3 million years ago. We are still in the grip of this last icehouse state, although in one of its relatively brief interglacial phases. […] Sea level has varied in the past in close consort with climate change […]. Around twenty-five thousand years ago, at the height of the last Ice Age, the global sea level was 120 metres lower than today. Huge tracts of the continental shelves that rim today’s landmasses were exposed. […] Further back in time, 80 million years ago, the sea level was around 250–350 metres higher than today, so that 82 per cent of the planet was ocean and only 18 per cent remained as dry land. Such changes have been the norm throughout geological history and entirely the result of natural causes.”

“Most of the solar energy absorbed by seawater is converted directly to heat, and water temperature is vital for the distribution and activity of life in the oceans. Whereas mean temperature ranges from 0 to 40 degrees Celsius, 90 per cent of the oceans are permanently below 5°C. Most marine animals are ectotherms (cold-blooded), which means that they obtain their body heat from their surroundings. They generally have narrow tolerance limits and are restricted to particular latitudinal belts or water depths. Marine mammals and birds are endotherms (warm-blooded), which means that their metabolism generates heat internally thereby allowing the organism to maintain constant body temperature. They can tolerate a much wider range of external conditions. Coping with the extreme (hydrostatic) pressure exerted at depth within the ocean is a challenge. For every 30 metres of water, the pressure increases by 3 atmospheres – roughly equivalent to the weight of an elephant.”
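
The pressure rule of thumb quoted above (roughly 3 extra atmospheres per 30 metres, i.e. about 1 atmosphere per 10 metres of seawater) follows directly from the hydrostatic pressure formula P = ρgh. Below is a small back-of-envelope check of mine, using a typical surface seawater density; the depths are just illustrative values. – US

```python
rho_seawater = 1025   # kg/m^3, a typical value for surface seawater
g = 9.81              # m/s^2
atm = 101_325         # Pa in one standard atmosphere

def added_pressure_atm(depth_m):
    # Hydrostatic pressure added by the overlying water column, in atmospheres
    return rho_seawater * g * depth_m / atm

for depth in (10, 30, 4000):
    print(f"{depth:>5} m: about +{added_pressure_atm(depth):,.0f} atm")
# 10 m adds ~1 atm, 30 m ~3 atm (the book's elephant), and the roughly 4 km deep
# average seafloor sits under about 400 atmospheres.
```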

“There are at least 6000 different species of diatom. […] An average litre of surface water from the ocean contains over half a million diatoms and other unicellular phytoplankton and many thousands of zooplankton.”

“Several different styles of movement are used by marine organisms. These include floating, swimming, jet propulsion, creeping, crawling, and burrowing. […] The particular physical properties of water that most affect movement are density, viscosity, and buoyancy. Seawater is about 800 times denser than air and nearly 100 times more viscous. Consequently there is much more resistance on movement than on land […] Most large marine animals, including all fishes and mammals, have adopted some form of active swimming […]. Swimming efficiency in fishes has been achieved by minimizing the three types of drag resistance created by friction, turbulence, and body form. To reduce surface friction, the body must be smooth and rounded like a sphere. The scales of most fish are also covered with slime as further lubrication. To reduce form drag, the cross-sectional area of the body should be minimal — a pencil shape is ideal. To reduce the turbulent drag as water flows around the moving body, a rounded front end and tapered rear is required. […] Fins play a versatile role in the movement of a fish. There are several types including dorsal fins along the back, caudal or tail fins, and anal fins on the belly just behind the anus. Operating together, the beating fins provide stability and steering, forwards and reverse propulsion, and braking. They also help determine whether the motion is up or down, forwards or backwards.”

Links:

Rip current.
Rogue wave. Agulhas Current. Kuroshio Current.
Tsunami.
Tide. Tidal range.
Geostrophic current.
Ekman Spiral. Ekman transport. Upwelling.
Global thermohaline circulation system. Antarctic bottom water. North Atlantic Deep Water.
Rio Grande Rise.
Denmark Strait. Denmark Strait cataract (/waterfall?).
Atmospheric circulation. Jet streams.
Monsoon.
Cyclone. Tropical cyclone.
Ozone layer. Ozone depletion.
Milankovitch cycles.
Little Ice Age.
Oxygen Isotope Stratigraphy of the Oceans.
Contourite.
Earliest known life forms. Cyanobacteria. Prokaryote. Eukaryote. Multicellular organism. Microbial mat. Ediacaran. Cambrian explosion. Pikaia. Vertebrate. Major extinction events. Permian–Triassic extinction event. (The author seems to disagree with the authors of this article about potential causes, in particular in so far as they relate to the formation of Pangaea – as I felt uncertain about the accuracy of the claims made in the book I decided against covering this topic in this post, even though I find it interesting).
Tethys Ocean.
Plesiosauria. Pliosauroidea. Ichthyosaur. Ammonoidea. Belemnites. Pachyaena. Cetacea.
Pelagic zone. Nekton. Benthic zone. Neritic zone. Oceanic zone. Bathyal zone. Hadal zone.
Phytoplankton. Silicoflagellates. Coccolithophore. Dinoflagellate. Zooplankton. Protozoa. Tintinnid. Radiolaria. Copepods. Krill. Bivalves.
Elasmobranchii.
Ampullae of Lorenzini. Lateral line.
Baleen whale. Humpback whale.
Coral reef.
Box jellyfish. Stonefish.
Horseshoe crab.
Greenland shark. Giant squid.
Hydrothermal vent. Pompeii worms.
Atlantis II Deep. Aragonite. Phosphorite. Deep sea mining. Oil platform. Methane clathrate.
Ocean thermal energy conversion. Tidal barrage.
Mariculture.
Exxon Valdez oil spill.
Bottom trawling.

June 24, 2018 Posted by | Biology, Books, Engineering, Geology, Paleontology, Physics | Leave a comment

Gastrointestinal complications of diabetes (II)

Below I have added a few more observations of interest from the last half of the book. I have also bolded a few key observations and added some links along the way to make the post easier to read for people unfamiliar with these topics.

HCC [HepatoCellular Carcinoma, US] is the most common primary malignancy of the liver and globally is the fifth most common cancer [2]. […] the United States […] has seen a threefold increase between 1975 and 2007 [3]. Chronic hepatitis C virus (HCV) accounts for about half of this increase [2]. However, 15–50 % of new cases of HCC are labeled as cryptogenic or idiopathic, which suggests that other risk factors are likely playing a role [4]. NASH [Non-alcoholic steatohepatitis, US] has been proposed as the underlying cause of most cases of cryptogenic cirrhosis. […] A large proportion of cryptogenic cirrhosis […] likely represents end-stage NASH. […] In a large systematic review published in 2012, NAFLD or NASH cohorts with few or no cirrhosis cases demonstrated a minimal HCC risk with cumulative HCC mortality between 0 % and 3 % over study periods of up to two decades [8]. In contrast, consistently increased risk was observed in NASH-cirrhosis cohorts with cumulative incidence between 2.4 % over 7 years and 12.8 % over 3 years [8]. The risk of HCC was substantially lower among patients with NASH than in patients with viral hepatitis [8]. However, given the high and increasing prevalence of NAFLD, even a small increase in risk of HCC has the potential to transform into a huge case burden of HCC. […] Large population-based cohort studies from Europe have demonstrated a 1.86-fold to fourfold increase in risk of HCC among patients with diabetes [12]. Obesity, which is well established as a significant risk factor for the development of various malignancies, is associated with a 1.5-fold to fourfold increased risk for development of HCC [13]. Therefore, the excess risk of HCC in NAFLD is explained by both the increased risk for NAFLD itself with subsequent progression to NASH and the independent carcinogenic potential of diabetes and obesity [11]. […] In contrast to patients with HCC from other causes, patients with NAFLD-related HCC tend to be older and have more metabolic comorbidities but less severe liver dysfunction […] The exact mechanisms responsible for the development of HCC in NASH remain unclear.”

Patients with diabetes have an increased risk of gallstone disease, which includes gallstones, cholecystitis, or gallbladder cancer; the magnitude of the increased risk has varied across studies [22]. […] A recent systematic review and meta-analysis of studies evaluating the risk of gallstone disease estimated that a diagnosis of diabetes appears to increase the relative risk of gallstone disease by 56 % [22]. Intuitively, it would seem reasonable to attribute this to common risk factors for diabetes and gallstone disease (e.g., obesity, hyperlipidemia). However, adjustment for body mass index (BMI) in a number of studies included in the meta-analysis indicated diabetes had an independent effect on the risk of gallstone disease; it has been speculated that this is related to impaired gallbladder motility as part of diabetes-related visceral neuropathy [22]. […] A systematic review and meta-analysis suggests that both men and women with type 2 diabetes have an increased risk of gallbladder cancer (summary RR = 1.56, 95 % CI, 1.36–1.79), independent of smoking, BMI, and a history of gallstones [25]. […] While the relative risk of gallbladder cancer is increased in patients with type 2 diabetes, the absolute risk remains low […], varying from approximately 1.5 per 100,000 in North America to 25 per 100,000 in South America and Northern India [26]. […] There is a strong relationship between diabetes and hepatobiliary diseases […] Not surprisingly, autoimmune-based liver disease involving the biliary tree (i.e., primary biliary cirrhosis [PBC] and primary sclerosing cholangitis [PSC]) has been described in patients with type 1 diabetes. […] The prevalence of type 1 diabetes in patients with PSC is 4 %, and the RR of type 1 diabetes in patients with PSC was 7.95 in a large patient cohort (n = 678) [33, 34]. […] Although the relationship may not be intuitive, diabetes is intimately connected with a variety of hepatobiliary conditions […] Diabetes is often associated with more frequent adverse outcomes and should be managed aggressively.”

Impaired glucose tolerance is seen in 60 % of patients with cirrhosis [1]. Overt diabetes is seen in 20 % of patients with cirrhosis. However, it is important to note that there are two distinct types of diabetes seen with chronic liver disease. Patients can either have preexisting diabetes and later go on to develop progressive liver disease or develop diabetes as a result of cirrhosis. The latter is an entity which is sometimes referred to as “hepatogenous” diabetes. […] A recently published registry study from the UK […] demonstrated that patients with diabetes were more likely to be hospitalized with a chronic liver disease than nondiabetic patients [5]. […] type 2 diabetes was associated with an increased incidence of hospitalizations with alcoholic liver disease (RR 1.38 in men, RR 1.57 in women), nonalcoholic fatty liver disease (RR 3.03 in men, RR 5.11 in women), autoimmune liver disease (RR 1.50 in men, RR 1.25 in women), hemochromatosis (RR 1.67 in men, RR 1.60 in women), and hepatocellular carcinoma (RR 3.36 in men, RR 3.55 in women) [5, 6]. Diabetes has also been shown to affect liver disease complications. Diabetes is associated with events of hepatic decompensation such as development of ascites [7], variceal bleeding [8], and hepatic encephalopathy [9]. […] Cirrhosis is an important but under-recognized cause of mortality among patients with diabetes. In a population-based study involving nearly 7,200 patients that investigated the causes of death in patients with type 2 diabetes, chronic liver disease, and cirrhosis accounted for 4.4 % [14].”

“On average, 51 % of patients with type 1 diabetes mellitus and 35 % of patients with type 2 diabetes mellitus demonstrate pancreatic exocrine insufficiency (PEI) on fecal elastase testing where PEI is defined as fecal elastase less than 200 μg/g [17]. In a study of 1,000 patients with diabetes, including 697 with type 2 diabetes, 28.5 % of patients with type 1 and 19.9 % of patients with type 2 diabetes had severe PEI as defined by fecal elastase less than 100 μg/g [18]. […] However, there is a wide range of prevalence of PEI in these studies […] Given wide-ranging estimates, it is difficult to determine the true prevalence of PEI in patients with diabetes, especially as it translates to steatorrhea and maldigestion. […] Changes in gross and histological pancreatic morphology frequently accompany diabetes mellitus and may be a plausible link between diabetes and chronic pancreatitis. Pancreatic atrophy is often seen in autopsy studies of diabetes patients as well as with ultrasonography, computed tomography, and magnetic resonance imaging (MRI) [22–24]. Morphological changes of the pancreas in diabetes may be partially explained by the lack of trophic effect of insulin on acinar tissue. Residual exocrine function correlates well with residual beta-cell function in type 1 diabetes mellitus [25]. Yet, because not every patient with type 1 diabetes has pancreatic exocrine insufficiency, trophic action of insulin must not be the only factor. Indeed, as much of the close regulation of pancreatic exocrine function is carried out by neurohormonal mediators, diabetic neuropathy may also play a role in exocrine insufficiency in diabetics [26]. […] Though the true prevalence of PEI arising from diabetes is not definitively known, PEI leading to diabetes mellitus, termed type 3c diabetes (T3cDM) [27], appears to be less common and accounts for 5–10 % of diabetic populations [28]. A T3cDM diagnosis is made in the absence of type 1 diabetes autoimmune markers and in the setting of imaging and laboratory evidence of PEI [29]. Management of T3cDM has not been well studied, given large trials have excluded this subset of patients. […] Without dedicated clinical trials, treatment for type 3c diabetes is not standardized and commonly reflects methods used for type 2 diabetes.”

“Diabetes has been associated with an increased risk of cancer. In a Swedish population study, 24 cancer types were found to have an increased incidence among those with type 2 diabetes. Pancreatic cancer had the highest standardized incidence ratio of 2.98 (observed/expected cancer cases) compared to other cancer sites [31]. The three cell types found in the normal pancreas include acinar, ductal, and islet cells. Acinar cells comprise a majority of the organ volume (80 %), but greater than 85 % of malignant lesions arise from the ductal structures resulting in adenocarcinoma. […] According to the Surveillance, Epidemiology, and End Results (SEER) Program, pancreatic cancer is the twelfth most common cancer and the second most common gastrointestinal type behind colorectal cancer [32]. […] pancreatic cancer represents 3 % of all new cancer cases within the United States. Given the poor long-term survival rates, incidence and prevalence of the pancreatic cancer are similar. […] a majority of those with pancreatic cancer present with metastatic disease (53 %) […]. Males are affected more than females, and the median age at time of diagnosis is 71. […] Meta-analyses have demonstrated an increased risk of pancreatic cancer in those with diabetes […] [However] diabetes may be a result of pancreatic cancer as opposed to pancreatic cancer being a result of diabetes. […] Risk of pancreatic cancer does not increase as the duration of diabetes increases. Given the lack of cost-effective, noninvasive, and sensitive screening tests for pancreatic cancer, population-wide screening for pancreatic cancer in those with diabetes is prohibitive.”

June 23, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Gastroenterology | Leave a comment

Oceans (I)

I read this book quite some time ago, but back when I did I never blogged it; instead I just added a brief review on goodreads. I remember that the main reason why I decided against blogging it shortly after I’d read it was that the coverage overlapped a great deal with Mladenov’s marine biology text, which I had at that time just read and actually did blog in some detail. I figured if I wanted to blog this book as well I would be well-advised to wait a while, so that I’d at least have forgotten some of the stuff first – that way blogging the book might end up serving as a review of stuff I’d forgotten, rather than as a review of stuff that would still be fresh in my memory and so wouldn’t really be worth reviewing anyway. So now here we are a few months later, and I have come to think it might be a good idea to blog the book.

Below I have added some quotes from the first half of the book and some links to topics/people/etc. covered.

“Several methods now exist for calculating the rate of plate motion. Most reliable for present-day plate movement are direct observations made using satellites and laser technology. These show that the Atlantic Ocean is growing wider at a rate of between 2 and 4 centimetres per year (about the rate at which fingernails grow), the Indian Ocean is attempting to grow at a similar rate but is being severely hampered by surrounding plate collisions, while the fastest spreading centre is the East Pacific Rise along which ocean crust is being created at rates of around 17 centimetres per year (the rate at which hair grows). […] The Nazca plate has been plunging beneath South America for at least 200 million years – the imposing Andes, the longest mountain chain on Earth, is the result. […] By around 120 million years ago, South America and Africa began to drift apart and the South Atlantic was born. […] sea levels rose higher than at any time during the past billion years, perhaps as much as 350 metres higher than today. Only 18 per cent of the globe was dry land — 82 per cent was under water. These excessively high sea levels were the result of increased spreading activity — new oceans, new ridges, and faster spreading rates all meant that the mid-ocean ridge systems collectively displaced a greater volume of water than ever before. Global warming was far more extreme than today. Temperatures in the ocean rose to around 30°C at the equator and as much as 14°C at the poles. Ocean circulation was very sluggish.”

“The land–ocean boundary is known as the shoreline. Seaward of this, all continents are surrounded by a broad, flat continental shelf, typically 10–100 kilometres wide, which slopes very gently (less than one-tenth of a degree) to the shelf edge at a water depth of around 100 metres. Beyond this the continental slope plunges to the deep-ocean floor. The slope is from tens to a few hundred kilometres wide and with a mostly gentle gradient of 3–8 degrees, but locally steeper where it is affected by faulting. The base of slope abuts the abyssal plain — flat, almost featureless expanses between 4 and 6 kilometres deep. The oceans are compartmentalized into abyssal basins separated by submarine mountain ranges and plateaus, which are the result of submarine volcanic outpourings. Those parts of the Earth that are formed of ocean crust are relatively lower, because they are made up of denser rocks — basalts. Those formed of less dense rocks (granites) of the continental crust are relatively higher. Seawater fills in the deeper parts, the ocean basins, to an average depth of around 4 kilometres. In fact, some parts are shallower because the ocean crust is new and still warm — these are the mid-ocean ridges at around 2.5 kilometres — whereas older, cooler crust drags the seafloor down to a depth of over 6 kilometres. […] The seafloor is almost entirely covered with sediment. In places, such as on the flanks of mid-ocean ridges, it is no more than a thin veneer. Elsewhere, along stable continental margins or beneath major deltas where deposition has persisted for millions of years, the accumulated thickness can exceed 15 kilometres. These areas are known as sedimentary basins“.

“The super-efficiency of water as a solvent is due to an asymmetrical bonding between hydrogen and oxygen atoms. The resultant water molecule has an angular or kinked shape with weakly charged positive and negative ends, rather like magnetic poles. This polar structure is especially significant when water comes into contact with substances whose elements are held together by the attraction of opposite electrical charges. Such ionic bonding is typical of many salts, such as sodium chloride (common salt) in which a positive sodium ion is attracted to a negative chloride ion. Water molecules infiltrate the solid compound, the positive hydrogen end being attracted to the chloride and the negative oxygen end to the sodium, surrounding and then isolating the individual ions, thereby disaggregating the solid [I should mention that if you’re interested in knowing (much) more this topic, and closely related topics, this book covers these things in great detail – US]. An apparently simple process, but extremely effective. […] Water is a super-solvent, absorbing gases from the atmosphere and extracting salts from the land. About 3 billion tonnes of dissolved chemicals are delivered by rivers to the oceans each year, yet their concentration in seawater has remained much the same for at least several hundreds of millions of years. Some elements remain in seawater for 100 million years, others for only a few hundred, but all are eventually cycled through the rocks. The oceans act as a chemical filter and buffer for planet Earth, control the distribution of temperature, and moderate climate. Inestimable numbers of calories of heat energy are transferred every second from the equator to the poles in ocean currents. But, the ocean configuration also insulates Antarctica and allows the build-up of over 4000 metres of ice and snow above the South Pole. […] Over many aeons, the oceans slowly accumulated dissolved chemical ions (and complex ions) of almost every element present in the crust and atmosphere. Outgassing from the mantle from volcanoes and vents along the mid-ocean ridges contributed a variety of other elements […] The composition of the first seas was mostly one of freshwater together with some dissolved gases. Today, however, the world ocean contains over 5 trillion tonnes of dissolved salts, and nearly 100 different chemical elements […] If the oceans’ water evaporated completely, the dried residue of salts would be equivalent to a 45-metre-thick layer over the entire planet.”

“The average time a single molecule of water remains in any one reservoir varies enormously. It may survive only one night as dew, up to a week in the atmosphere or as part of an organism, two weeks in rivers, and up to a year or more in soils and wetlands. Residence times in the oceans are generally over 4000 years, and water may remain in ice caps for tens of thousands of years. Although the ocean appears to be in a steady state, in which both the relative proportion and amounts of dissolved elements per unit volume are nearly constant, this is achieved by a process of chemical cycles and sinks. The input of elements from mantle outgassing and continental runoff must be exactly balanced by their removal from the oceans into temporary or permanent sinks. The principal sink is the sediment and the principal agent removing ions from solution is biological. […] The residence times of different elements vary enormously from tens of millions of years for chloride and sodium, to a few hundred years only for manganese, aluminium, and iron. […] individual water molecules have cycled through the atmosphere (or mantle) and returned to the seas more than a million times since the world ocean formed.”

“Because of its polar structure and hydrogen bonding between individual molecules, water has both a high capacity for storing large amounts of heat and one of the highest specific heat values of all known substances. This means that water can absorb (or release) large amounts of heat energy while changing relatively little in temperature. Beach sand, by contrast, has a specific heat five times lower than water, which explains why, on sunny days, beaches soon become too hot to stand on with bare feet while the sea remains pleasantly cool. Solar radiation is the dominant source of heat energy for the ocean and for the Earth as a whole. The differential in solar input with latitude is the main driver for atmospheric winds and ocean currents. Both winds and especially currents are the prime means of mitigating the polar–tropical heat imbalance, so that the polar oceans do not freeze solid, nor the equatorial oceans gently simmer. For example, the Gulf Stream transports some 550 trillion calories from the Caribbean Sea across the North Atlantic each second, and so moderates the climate of north-western Europe.”

“[W]hy is [the sea] mostly blue? The sunlight incident on the sea has a full spectrum of wavelengths, including the rainbow of colours that make up the visible spectrum […] The longer wavelengths (red) and very short (ultraviolet) are preferentially absorbed by water, rapidly leaving near-monochromatic blue light to penetrate furthest before it too is absorbed. The dominant hue that is backscattered, therefore, is blue. In coastal waters, suspended sediment and dissolved organic debris absorb additional short wavelengths (blue) resulting in a greener hue. […] The speed of sound in seawater is about 1500 metres per second, almost five times that in air. It is even faster where the water is denser, warmer, or more salty and shows a slow but steady increase with depth (related to increasing water pressure).”

“From top to bottom, the ocean is organized into layers, in which the physical and chemical properties of the ocean – salinity, temperature, density, and light penetration – show strong vertical segregation. […] Almost all properties of the ocean vary in some way with depth. Light penetration is attenuated by absorption and scattering, giving an upper photic and lower aphotic zone, with a more or less well-defined twilight region in between. Absorption of incoming solar energy also preferentially heats the surface waters, although with marked variations between latitudes and seasons. This results in a warm surface layer, a transition layer (the thermocline) through which the temperature decreases rapidly with depth, and a cold deep homogeneous zone reaching to the ocean floor. Exactly the same broad three-fold layering is true for salinity, except that salinity increases with depth — through the halocline. The density of seawater is controlled by its temperature, salinity, and pressure, such that colder, saltier, and deeper waters are all more dense. A rapid density change, known as the pycnocline, is therefore found at approximately the same depth as the thermocline and halocline. This varies from about 10 to 500 metres, and is often completely absent at the highest latitudes. Winds and waves thoroughly stir and mix the upper layers of the ocean, even destroying the layered structure during major storms, but barely touch the more stable, deep waters.”

Links:

Arvid Pardo. Law of the Sea Convention.
Polynesians.
Ocean exploration timeline (a different timeline is presented in the book, but there’s some overlap). Age of Discovery. Vasco da Gama. Christopher Columbus. John Cabot. Amerigo Vespucci. Ferdinand Magellan. Luigi Marsigli. James Cook.
HMS Beagle. HMS Challenger. Challenger expedition.
Deep Sea Drilling Project. Integrated Ocean Drilling Program. Joides resolution.
World Ocean.
Geological history of Earth (this article of course covers much more than is covered in the book, but the book does cover some highlights). Plate tectonics. Lithosphere. Asthenosphere. Convection. Global mid-ocean ridge system.
Pillow lava. Hydrothermal vent. Hot spring.
Ophiolite.
Mohorovičić discontinuity.
Mid-Atlantic Ridge. Subduction zone. Ring of Fire.
Pluton. Nappe. Mélange. Transform fault. Strike-slip fault. San Andreas fault.
Paleoceanography. Tethys Ocean. Laurasia. Gondwana.
Oceanic anoxic event. Black shale.
Seabed.
Bengal Fan.
Fracture zone.
Seamount.
Terrigenous sediment. Biogenic and chemogenic sediment. Halite. Gypsum.
Carbonate compensation depth.
Laurentian fan.
Deep-water sediment waves. Submarine landslide. Turbidity current.
Water cycle.
Ocean acidification.
Timing and Climatic Consequences of the Opening of Drake Passage. The Opening of the Tasmanian Gateway Drove Global Cenozoic Paleoclimatic and Paleoceanographic Changes (report). Antarctic Circumpolar Current.
SOFAR channel.
Bathymetry.

June 18, 2018 Posted by | Books, Chemistry, Geology, Papers, Physics | Leave a comment

Gastrointestinal complications of diabetes (I)

I really liked this book. It covers a lot of the same ground as Horowitz & Samsom’s excellent book, but it’s shorter and so probably easier for the relevant target group to justify reading. I recommend it if you want to know more about these topics but don’t quite feel like reading a long textbook.

Below I’ve added some observations from the first half of the book. In the quotes below I’ve added some links and highlighted some key observations by the use of bold text.

Gastrointestinal (GI) symptoms occur more commonly in patients with diabetes than in the general population [2]. […] GI symptoms such as nausea, abdominal pain, bloating, diarrhea, constipation, and delayed gastric emptying occur in almost 75 % of patients with diabetes [3]. A majority of patients with GI symptoms stay undiagnosed or undertreated due to a lack of awareness of these complications among clinicians. […] Diabetes can affect the entire GI tract from the oral cavity and esophagus to the large bowel and anorectal region, either in isolation or in a combination. The extent and the severity of the presenting symptoms may vary widely depending upon which part of the GI tract is involved. In patients with long-term type 1 DM, upper GI symptoms seem to be particularly common [4]. Of the different types […] gastroparesis seems to be the most well known and most serious complication, occurring in about 50 % of patients with diabetes-related GI complications [5].”

The enteric nervous system (ENS) is an independent network of neurons and glial cells that spread from the esophagus up to the internal anal sphincter. […] the ENS regulates GI tract functions including motility, secretion, and participation in immune regulation [12, 13]. GI complications and their symptoms in patients with diabetes arise secondary to both abnormalities of gastric function (sensory and motor modality), as well as impairment of GI hormonal secretion [14], but these abnormalities are complex and incompletely understood. […] It has been known for a long time that diabetic autonomic neuropathy […] leads to abnormalities in the GI motility, sensation, secretion, and absorption, serving as the main pathogenic mechanism underlying GI complications. Recently, evidence has emerged to suggest that other processes might also play a role. Loss of the pacemaker interstitial cells of Cajal, impairment of the inhibitory nitric oxide-containing nerves, abnormal myenteric neurotransmission, smooth muscle dysfunction, and imbalances in the number of excitatory and inhibitory enteric neurons can drastically alter complex motor functions causing dysfunction of the enteric system [7, 11, 15, 16]. This dysfunction can further lead to the development of dysphagia and reflux esophagitis in the esophagus, gastroparesis, and dyspepsia in the stomach, pseudo-obstruction of the small intestine, and constipation, diarrhea, and incontinence in the colon. […] Compromised intestinal vascular flow arising due to ischemia and hypoxia from microvascular disease of the GI tract can also cause abdominal pain, bleeding, and mucosal dysfunction. Mitochondrial dysfunction has been implicated in the pathogenesis of gastric neuropathy. […] Another possible association between DM and the gastrointestinal tract can be infrequent autoimmune diseases associated with type I DM like autoimmune chronic pancreatitis, celiac disease (2–11 %), and autoimmune gastropathy (2 % prevalence in general population and three- to fivefold increase in patients with type 1 DM) [28, 29]. GI symptoms are often associated with the presence of other diabetic complications, especially autonomic and peripheral neuropathy [2, 30, 31]. In fact, patients with microvascular complications such as retinopathy, nephropathy, or neuropathy should be presumed to have GI abnormalities until proven otherwise. In a large cross-sectional questionnaire study of 1,101 subjects with DM, 57 % of patients reported at least one GI complication [31]. Poor glycemic control has also been found to be associated with increased severity of the upper GI symptoms. […] management of DM-induced GI complications is challenging, is generally suboptimal, and needs improvement.

Diabetes mellitus (DM) has multiple clinically important effects on the esophagus. Diabetes results in several esophageal motility disturbances, increases the risk of esophageal candidiasis, and increases the risk of Barrett’s esophagus and esophageal carcinoma. Finally, “black esophagus,” or acute esophageal necrosis, is also associated with DM. […] Esophageal dysmotility has been shown to be associated with diabetic neuropathy; however, symptomatic esophageal dysmotility is not often considered an important complication of diabetes. […] In general, the manometric effects of diabetes on the esophagus are not specific and mostly related to speed and strength of peristalsis. […] The pathological findings which amount to loss of cholinergic stimulation are consistent with the manometric findings in the esophagus, which are primarily related to slowed or weakened peristalsis. […] The association between DM and GERD is complex and conflicting. […] A recent meta-analysis suggests an overall positive association in Western countries [12]. […] The underlying pathogenesis of DM contributing to GERD is not fully elucidated, but is likely related to reduced acid clearance due to slow, weakened esophageal peristalsis. The association between DM and gastroesophageal reflux (GER) is well established, but the link between DM and GERD, which requires symptoms or esophagitis, is more complex because sensation may be blunted in diabetics with neuropathy. Asymptomatic gastroesophageal reflux (GER) confirmed by pH studies is significantly more frequent in diabetic patients than in healthy controls [13]. […] long-standing diabetics with neuropathy are at higher risk for GERD even if they have no symptoms. […] Abnormal pH and motility studies do not correlate very well with the GI symptoms of diabetics, possibly due to DM-related sensory dysfunction.”

Gastroparesis is defined as a chronic disorder characterized by delayed emptying of the stomach occurring in the absence of mechanical obstruction. It is a well-known and potentially serious complication of diabetes. […] Diabetic gastroparesis affects up to 40 % of patients with type 1 diabetes and up to 30 % of patients with type 2 diabetes [1, 2]. Diabetic gastroparesis generally affects patients with longstanding diabetes mellitus, and patients often have other diabetic complications […] For reasons that remain unclear, approximately 80 % of patients with gastroparesis are women [3]. […] In diabetes, delayed gastric emptying can often be asymptomatic. Therefore, the term gastroparesis should only be reserved for patients that have both delayed gastric emptying and upper gastrointestinal symptoms. Additionally, discordance between the pattern and type of symptoms and the magnitude of delayed gastric emptying is a well-established phenomenon. Accelerating gastric emptying may not improve symptoms, and patients can have symptomatic improvement while gastric emptying time remains unchanged. Furthermore, patients with severe symptoms can have mild delays in gastric emptying. Clinical features of gastroparesis include nausea, vomiting, bloating, abdominal pain, and malnutrition. […] Gastroparesis affects oral drug absorption and can cause hyperglycemia that is challenging to manage, in addition to unexplained hypoglycemia. […] Nutritional and caloric deficits are common in patients with gastroparesis […] Possible complications of gastroparesis include volume depletion with renal failure, malnutrition, electrolyte abnormalities, esophagitis, Mallory–Weiss tear (from vomiting), or bezoar formation. […] Unfortunately, there is a dearth of medications available to treat gastroparesis. Additionally, many of the medications used are based on older trials with small sample sizes […and some of them have really unpleasant side effects – US]. […] Gastroparesis can be associated with abdominal pain in as many as 50 % of patients with gastroparesis at tertiary care centers. There are no trials to guide the choice of agents. […] Abdominal pain […] is often difficult to treat [3]. […] In a subset of patients with diabetes [less than 10%, according to Horowitz & Samsom – US], gastric emptying can be abnormally accelerated […]. Symptoms are often difficult to distinguish from those with delayed gastric emptying. […] Worsening symptoms with a prokinetic agent can be a sign of possible accelerated emptying.”

“Diabetic enteropathy encompasses small intestinal and colorectal dysfunctions such as diarrhea, constipation, and/or fecal incontinence. It is more commonly seen in patients with long-standing diabetes, especially in those with gastroparesis. Development of diabetic enteropathy is complex and multifactorial. […] gastrointestinal symptoms and complications do not always correlate with the duration of diabetes, glycemic control, or with the presence of autonomic neuropathy, which is often assumed to be the major cause of many gastrointestinal symptoms. Other pathophysiologic processes operative in diabetic enteropathy include enteric myopathy and neuropathy; however, causes of these abnormalities are unknown [1]. […] Collectively, the effects of diabetes on several targets cause aberrations in gastrointestinal function and regulation. Loss of ICC, autonomic neuropathy, and imbalances in the number of excitatory and inhibitory enteric neurons can drastically alter complex motor functions such as peristalsis, reflexive relaxation, sphincter tone, vascular flow, and intestinal segmentation [5]. […] Diarrhea is a common complaint in DM. […] Etiologies of diarrhea in diabetes are multifactorial and include rapid intestinal transit, drug-induced diarrhea, small-intestine bacterial overgrowth, celiac disease, pancreatic exocrine insufficiency, dietary factors, anorectal dysfunction, fecal incontinence, and microscopic colitis [1]. […] It is important to differentiate whether diarrhea is caused by rapid intestinal transit vs. SIBO. […] This differentiation has key clinical implications with regard to the use of antimotility agents or antibiotics in a particular case. […] Constipation is a common problem seen with long-standing DM. It is more common than in general population, where the incidence varies from 2 % to 30 % [30]. It affects 60 % of the patients with DM and is more common than diarrhea [14]. […] There are no specific treatments for diabetes-associated constipation […] In most cases, patients are treated in the same way as those with idiopathic chronic constipation. […] Colorectal cancer is the third most common cancer in men and the second in women [33]. Individuals with type 2 DM have an increased risk of colorectal cancer when compared with their nondiabetic counterparts […] According to a recent large observational population-based cohort study, type 2 DM was associated with a 1.3-fold increased risk of colorectal cancer compared to the general population.”

Nonalcoholic fatty liver disease (NAFLD) is the main hepatic complication of obesity, insulin resistance, and diabetes and soon to become the leading cause for end-stage liver disease in the United States [1]. […] NAFLD is a spectrum of disease that ranges from steatosis (hepatic fat without significant hepatocellular injury) to nonalcoholic steatohepatitis (NASH; hepatic fat with hepatocellular injury) to advanced fibrosis and cirrhosis. As a direct consequence of the obesity epidemic, NAFLD is the most common cause of chronic liver disease, while NASH is the second leading indication for liver transplantation [1]. NAFLD prevalence is estimated at 25 % globally [2] and up to 30 % in the United States [3–5]. Roughly 30 % of individuals with NAFLD also have NASH, the progressive subtype of NAFLD. […] NASH is estimated at 22 % among patients with diabetes, compared to 5 % of the general population [4, 14]. […] Insulin resistance is strongly associated with NASH. […] Simple steatosis (also known as nonalcoholic fatty liver) is characterized by the presence of steatosis without ballooned hepatocytes (which represents hepatocyte injury) or fibrosis. Mild inflammation may be present. Simple steatosis is associated with a very low risk of progressive liver disease and liver-related mortality. […] Patients with NASH are at risk for progressive liver fibrosis and liver-related mortality, cardiovascular complications, and hepatocellular carcinoma (HCC) even in the absence of cirrhosis [26]. Liver fibrosis stage progresses at an estimated rate of one stage every 7 years [27]. Twenty percent of patients with NASH will eventually develop liver cirrhosis [9]. […] The risk of cardiovascular disease is increased across the entire NAFLD spectrum. […] Cardiovascular risk reduction should be aggressively managed in all patients.


June 17, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Gastroenterology, Medicine, Neurology | Leave a comment

Robotics

“This book is not about the psychology or cultural anthropology of robotics, interesting as those are. I am an engineer and roboticist, so I confine myself firmly to the technology and application of real physical robots. […] robotics is the study of the design, application, and use of robots, and that is precisely what this Very Short Introduction is about: what robots do and what roboticists do.”

The above quote is from the book’s preface; the book is quite decent and occasionally really quite fascinating. Below I have added some sample quotes and links to topics/stuff covered in the book.

“Some or all of […] five functions – sensing, signalling, moving, intelligence, and energy, integrated into a body – are present in all robots. The actual sensors, motors, and behaviours designed into a particular robot body shape depend on the job that robot is designed to do. […] A robot is: 1. an artificial device that can sense its environment and purposefully act on or in that environment; 2. an embodied artificial intelligence; or 3. a machine that can autonomously carry out useful work. […] Many real-world robots […] are not autonomous but remotely operated by humans. […] These are also known as tele-operated robots. […] From a robot design point of view, the huge advantage of tele-operated robots is that the human in the loop provides the robot’s ‘intelligence’. One of the most difficult problems in robotics — the design of the robot’s artificial intelligence — is therefore solved, so it’s not surprising that so many real-world robots are tele-operated. The fact that tele-operated robots alleviate the problem of AI design should not fool us into making the mistake of thinking that tele-operated robots are not sophisticated — they are. […] counter-intuitively, autonomous robots are often simpler than tele-operated robots […] When roboticists talk about autonomous robots they normally mean robots that decide what to do next entirely without human intervention or control. We need to be careful here because they are not talking about true autonomy, in the sense that you or I would regard ourselves as self-determining individuals, but what I would call ‘control autonomy’. By control autonomy I mean that the robot can undertake its task, or mission, without human intervention, but that mission is still programmed or commanded by a human. In fact, there are very few robots in use in the real world that are autonomous even in this limited sense. […] It is helpful to think about a spectrum of robot autonomy, from remotely operated at one end (no autonomy) to fully autonomous at the other. We can then place robots on this spectrum according to their degree of autonomy. […] On a scale of autonomy, a robot that can react on its own in response to its sensors is highly autonomous. A robot that cannot react, perhaps because it doesn’t have any sensors, is not.”

“It is […] important to note that autonomy and intelligence are not the same thing. A robot can be autonomous but not very smart, like a robot vacuum cleaner. […] A robot vacuum cleaner has a small number of preprogrammed (i.e. instinctive) behaviours and is not capable of any kind of learning […] These are characteristics we would associate with very simple animals. […] When roboticists describe a robot as intelligent, what they mean is ‘a robot that behaves, in some limited sense, as if it were intelligent’. The words as if are important here. […] There are basically two ways in which we can make a robot behave as if it is more intelligent: 1. preprogram a larger number of (instinctive) behaviours; and/or 2. design the robot so that it can learn and therefore develop and grow its own intelligence. The first of these approaches is fine, providing that we know everything there is to know about what the robot must do and all of the situations it will have to respond to while it is working. Typically we can only do this if we design both the robot and its operational environment. […] For unstructured environments, the first approach to robot intelligence above is infeasible simply because it’s impossible to anticipate every possible situation a robot might encounter, especially if it has to interact with humans. The only solution is to design a robot so that it can learn, either from its own experience or from humans or other robots, and therefore adapt and develop its own intelligence: in effect, grow its behavioural repertoire to be able to respond appropriately to more and more situations. This brings us to the subject of learning robots […] robot learning or, more generally, ‘machine learning’ — a branch of AI — has proven to be very much harder than was expected in the early days of Artificial Intelligence.”

“Robot arms on an assembly line are typically programmed to go through a fixed sequence of moves over and over again, for instance spot-welding car body panels, or spray-painting the complete car. These robots are therefore not intelligent. In fact, they often have no exteroceptive sensors at all. […] when we see an assembly line with multiple robot arms positioned on either side along a line, we need to understand that the robots are part of an integrated automated manufacturing system, in which each robot and the line itself have to be carefully programmed in order to coordinate and choreograph the whole operation. […] An important characteristic of assembly-line robots is that they require the working environment to be designed for and around them, i.e. a structured environment. They also need that working environment to be absolutely predictable and repeatable. […] Robot arms either need to be painstakingly programmed, so that the precise movement required of each joint is worked out and coded into a set of instructions for the robot arm or, more often (and rather more easily), ‘taught’ by a human using a control pad to move its end-effector (hand) to the required positions in the robot’s workspace. The robot then memorizes the set of joint movements so that they can be replayed (over and over again). The human operator teaching the robot controls the trajectory, i.e. the path the robot arm’s end-effector follows as it moves through its 3D workspace, and a set of mathematical equations called the ‘inverse kinematics’ converts the trajectory into a set of individual joint movements. Using this approach, it is relatively easy to teach a robot arm to pick up an object and move it smoothly to somewhere else in its workspace while keeping the object level […]. However […] most real-world robot arms are unable to sense the weight of the object and automatically adjust accordingly. They are simply designed with stiff enough joints and strong enough motors that, whatever the weight of the object (providing it’s within the robot’s design limits), it can be lifted, moved, and placed with equal precision. […] The robot arm and gripper are a foundational technology in robotics. Not only are they extremely important as […] industrial assembly-line robot[s], but they have become a ‘component’ in many areas of robotics.”
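
An illustration from me of the ‘inverse kinematics’ step mentioned above, i.e. converting a desired end-effector position into joint angles: the sketch below solves the textbook two-link planar arm, which is about the simplest concrete case there is. The link lengths and the target point are made up, and real industrial arms have six or more joints and far more elaborate kinematics, so this is only meant to give a flavour of the kind of calculation involved. – US

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Inverse kinematics for a planar 2-link arm: return the shoulder and elbow
    angles (radians) that place the end-effector at (x, y), or None if unreachable."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target lies outside the arm's workspace
    theta2 = math.acos(c2) * (1 if elbow_up else -1)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    # Forward kinematics, used here only to verify the solution
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

angles = two_link_ik(0.6, 0.3, l1=0.5, l2=0.4)   # made-up target and link lengths
print(angles, forward(*angles, 0.5, 0.4) if angles else "unreachable")
```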

Planetary rovers are tele-operated mobile robots that present the designer and operator with a number of very difficult challenges. One challenge is power: a planetary rover needs to be energetically self-sufficient for the lifetime of its mission, and must either be launched with a power source or — as in the case of the Mars rovers — fitted with solar panels capable of recharging the rover’s on-board batteries. Another challenge is dependability. Any mechanical fault is likely to mean the end of the rover’s mission, so it needs to be designed and built to exceptional standards of reliability and fail-safety, so that if parts of the rover should fail, the robot can still operate, albeit with reduced functionality. Extremes of temperature are also a problem […] But the greatest challenge is communication. With a round-trip signal delay time of twenty minutes to Mars and back, tele-operating the rover in real time is impossible. If the rover is moving and its human operator in the command centre on Earth reacts to an obstacle, it’s likely to be already too late; the robot will have hit the obstacle by the time the command signal to turn reaches the rover. An obvious answer to this problem would seem to be to give the rover a degree of autonomy so that it could, for instance, plan a path to a rock or feature of interest — while avoiding obstacles — then, when it arrives at the point of interest, call home and wait. Although path-planning algorithms capable of this level of autonomy have been well developed, the risk of a failure of the algorithm (and hence perhaps the whole mission) is deemed so high that in practice the rovers are manually tele-operated, at very low speed, with each manual manoeuvre carefully planned. When one also takes into account the fact that the Mars rovers are contactable only for a three-hour window per Martian day, a traverse of 100 metres will typically take up one day of operation at an average speed of 30 metres per hour.”
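
The communication problem is easy to make concrete: the round-trip delay is just twice the Earth-Mars distance divided by the speed of light, and the quoted traverse rate follows from the short daily contact window. A quick back-of-envelope sketch of mine below; the distances are rough illustrative figures (the Earth-Mars separation varies between roughly 55 and 400 million kilometres). – US

```python
EARTH_MARS_KM = {"closest": 55e6, "typical": 180e6, "farthest": 400e6}  # rough figures
C_KM_PER_S = 299_792  # speed of light

for label, distance_km in EARTH_MARS_KM.items():
    round_trip_min = 2 * distance_km / C_KM_PER_S / 60
    print(f"{label:>9}: round-trip signal delay ≈ {round_trip_min:.0f} minutes")
# closest ≈ 6 min, typical ≈ 20 min, farthest ≈ 44 min: far too long for real-time driving.

# The traverse arithmetic: 100 m at an average of 30 m/hour uses up
# essentially the whole three-hour daily contact window.
print(f"100 m traverse at 30 m/hour: {100 / 30:.1f} hours")
```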

“The realization that the behaviour of an autonomous robot is an emergent property of its interactions with the world has important and far-reaching consequences for the way we design autonomous robots. […] when we design robots, and especially when we come to decide what behaviours to programme the robot’s AI with, we cannot think about the robot on its own. We must take into account every detail of the robot’s working environment. […] Like all machines, robots need power. For fixed robots, like the robot arms used for manufacture, power isn’t a problem because the robot is connected to the electrical mains supply. But for mobile robots power is a huge problem because mobile robots need to carry their energy supply around with them, with problems of both the size and weight of the batteries and, more seriously, how to recharge those batteries when they run out. For autonomous robots, the problem is acute because a robot cannot be said to be truly autonomous unless it has energy autonomy as well as computational autonomy; there seems little point in building a smart robot that ‘dies’ when its battery runs out. […] Localization is a[nother] major problem in mobile robotics; in other words, how does a robot know where it is, in 2D or 3D space. […] [One] type of robot learning is called reinforcement learning. […] it is a kind of conditioned learning. If a robot is able to try out several different behaviours, test the success or failure of each behaviour, then ‘reinforce’ the successful behaviours, it is said to have reinforcement learning. Although this sounds straightforward in principle, it is not. It assumes, first, that a robot has at least one successful behaviour in its list of behaviours to try out, and second, that it can test the benefit of each behaviour — in other words, that the behaviour has an immediate measurable reward. If a robot has to try every possible behaviour or if the rewards are delayed, then this kind of so-called ‘unsupervised’ individual robot learning is very slow.”
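
A toy illustration from me of the reinforcement learning loop described above (try behaviours, measure an immediate reward, reinforce what works): the sketch below is a tiny epsilon-greedy learner choosing among a fixed repertoire of behaviours with made-up success probabilities. It is not how any real robot is programmed, but it shows the basic idea, and lowering the number of trials shows how unreliable this kind of unsupervised trial-and-error learning is when rewards are scarce or slow to gather. – US

```python
import random

# Toy reinforcement learning in the sense described above: the "robot" has a fixed
# repertoire of behaviours, each with an (unknown to it) chance of immediate success.
behaviours = {"turn_left": 0.2, "turn_right": 0.5, "back_up": 0.8}  # hypothetical
value = {b: 0.0 for b in behaviours}   # learned estimate of each behaviour's worth
counts = {b: 0 for b in behaviours}
epsilon = 0.1                          # how often to explore instead of exploit

random.seed(1)
for trial in range(2000):
    if random.random() < epsilon:                    # explore: try something at random
        b = random.choice(list(behaviours))
    else:                                            # exploit: use the best-known behaviour
        b = max(value, key=value.get)
    reward = 1.0 if random.random() < behaviours[b] else 0.0  # immediate, measurable reward
    counts[b] += 1
    value[b] += (reward - value[b]) / counts[b]      # incremental average ("reinforcement")

print({b: round(v, 2) for b, v in value.items()})
# After enough trials the estimates approach the true success rates and the learner
# mostly selects "back_up"; with far fewer trials the estimates are unreliable.
```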

“A robot is described as humanoid if it has a shape or structure that to some degree mimics the human form. […] A small subset of humanoid robots […] attempt a greater degree of fidelity to the human form and appearance, and these are referred to as android. […] It is a recurring theme of this book that robot intelligence technology lags behind robot mechatronics – and nowhere is the mismatch between the two so starkly evident as it is in android robots. The problem is that if a robot looks convincingly human, then we (not unreasonably) expect it to behave like a human. For this reason whole-body android robots are, at the time of writing, disappointing. […] It is important not to overstate the case for humanoid robots. Without doubt, many potential applications of robots in human work- or living spaces would be better served by non-humanoid robots. The humanoid robot to use human tools argument doesn’t make sense if the job can be done autonomously. It would be absurd, for instance, to design a humanoid robot in order to operate a vacuum cleaner designed for humans. Similarly, if we want a driverless car, it doesn’t make sense to build a humanoid robot that sits in the driver’s seat. It seems that the case for humanoid robots is strongest when the robots are required to work alongside, learn from, and interact closely with humans. […] One of the most compelling reasons why robots should be humanoid is for those applications in which the robot has to interact with humans, work in human workspaces, and use tools or devices designed for humans.”

“…to put it bluntly, sex with a robot might not be safe. As soon as a robot has motors and moving parts, then assuring the safety of human-robot interaction becomes a difficult problem and if that interaction is intimate, the consequences of a mechanical or control systems failure could be serious.”

“All of the potential applications of humanoid robots […] have one thing in common: close interaction between human and robot. The nature of that interaction will be characterized by close proximity and communication via natural human interfaces – speech, gesture, and body language. Human and robot may or may not need to come into physical contact, but even when direct contact is not required they will still need to be within each other’s body space. It follows that robot safety, dependability, and trustworthiness are major issues for the robot designer. […] making a robot safe isn’t the same as making it trustworthy. One person trusts another if, generally speaking, that person is reliable and does what they say they will. So if I were to provide a robot that helps to look after your grandmother and I claim that it is perfectly safe — that it’s been designed to cover every risk or hazard — would you trust it? The answer is probably not. Trust in robots, just as in humans, has to be earned. […for more on these topics, see this post – US] […] trustworthiness cannot just be designed into the robot — it has to be earned by use and by experience. Consider a robot intended to fetch drinks for an elderly person. Imagine that the person calls for a glass of water. The robot then needs to fetch the drink, which may well require the robot to find a glass and fill it with water. Those tasks require sensing, dexterity, and physical manipulation, but they are problems that can be solved with current technology. The problem of trust arises when the robot brings the glass of water to the human. How does the robot give the glass to the human? If the robot has an arm so that it can hold out the glass in the same way a human would, how would the robot know when to let go? The robot clearly needs sensors in order to see and feel when the human has taken hold of the glass. The physical process of a robot handing something to a person is fraught with difficulty. Imagine, for instance, that the robot holds out its arm with the glass but the human can’t reach the glass. How does the robot decide where and how far it would be safe to bring its arm toward the person? What if the human takes hold of the glass but then the glass slips; does the robot let it fall or should it — as a human would — renew its grip on the glass? At what point would the robot decide the transaction has failed: it can’t give the glass of water to the person, or they won’t take it; perhaps they are asleep, or simply forgotten they wanted a glass of water, or confused. How does the robot sense that it should give up and perhaps call for assistance? These are difficult problems in robot cognition. Until they are solved, it’s doubtful we could trust a robot sufficiently well to do even a seemingly simple thing like handing over a glass of water.”
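
The glass-of-water example above is essentially a long list of judgement calls the robot must get right. One way to see how many separate decisions are involved is to sketch the hand-over as a small state machine; every state, sensor reading, and threshold below is hypothetical and only meant to illustrate the structure of the problem.

```python
# Deliberately simplified sketch of the drink hand-over described above.
# All states, sensor names, and thresholds are invented for illustration.
from enum import Enum, auto

class State(Enum):
    APPROACH = auto()
    OFFER = auto()
    RELEASE = auto()
    RETRY = auto()
    ABORT = auto()

def handover_step(state, sensors):
    """One decision step; `sensors` is a dict of made-up readings."""
    if state is State.APPROACH:
        if sensors["distance_to_person_m"] < 0.4:
            return State.OFFER
        return State.APPROACH
    if state is State.OFFER:
        if sensors["grip_force_detected"]:       # human has taken hold
            return State.RELEASE
        if sensors["seconds_waiting"] > 30:      # asleep? confused?
            return State.ABORT                   # give up, call for assistance
        return State.OFFER
    if state is State.RELEASE:
        if sensors["glass_slipping"]:            # renew grip rather than drop it
            return State.RETRY
        return State.RELEASE
    return state

state = State.APPROACH
state = handover_step(state, {"distance_to_person_m": 0.3})
print(state)    # State.OFFER
```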

“The fundamental problem with Asimov’s laws of robotics, or any similar construction, is that they require the robot to make judgments. […] they assume that the robot is capable of some level of moral agency. […] No robot that we can currently build, or will build in the foreseeable future, is ‘intelligent’ enough to be able to even recognize, let alone make, these kinds of choices. […] Most roboticists agree that for the foreseeable future robots cannot be ethical, moral agents. […] precisely because, as we have seen, present-day ‘intelligent’ robots are not very intelligent, there is a danger of a gap between what robot users believe those robots to be capable of and what they are actually capable of. Given humans’ propensity to anthropomorphize and form emotional attachments to machines, there is clearly a danger that such vulnerabilities could be either unwittingly or deliberately exploited. Although robots cannot be ethical, roboticists should be.”

“In robotics research, the simulator has become an essential tool of the roboticist’s trade. The reason for this is that designing, building, and testing successive versions of real robots is both expensive and time-consuming, and if part of that work can be undertaken in the virtual rather than the real world, development times can be shortened, and the chances of a robot that works first time substantially improved. A robot simulator has three essential features. First, it must provide a virtual world. Second, it must offer a facility for creating a virtual model of the real robot. And third, it must allow the robot’s controller to be installed and ‘run’ on the virtual robot in the virtual world; the controller then determines how the robot behaves when running in the simulator. The simulator should also provide a visualization of the virtual world and simulated robots in it so that the designer can see what’s going on. […] These are difficult challenges for developers of robot simulators.”
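
The three essential features listed above (a virtual world, a virtual model of the robot, and a controller that is run against it) map naturally onto a simple simulation loop. The toy world, robot, and controller below are invented for illustration and are nowhere near the fidelity of a real simulator such as Webots (linked below).

```python
# Minimal sketch of the three simulator ingredients: virtual world, virtual
# robot model, and a controller 'run' on the virtual robot. All toy values.
class VirtualWorld:
    def __init__(self, obstacles):
        self.obstacles = obstacles          # e.g. x-positions of walls

class VirtualRobot:
    def __init__(self, x=0.0):
        self.x = x
    def sense(self, world):
        # distance to the nearest obstacle ahead (crude 1-D 'sensor')
        ahead = [o - self.x for o in world.obstacles if o > self.x]
        return min(ahead) if ahead else float("inf")
    def act(self, velocity, dt=0.1):
        self.x += velocity * dt

def controller(sensor_reading):
    """The robot's 'brain': stop near obstacles, otherwise drive on."""
    return 0.0 if sensor_reading < 0.2 else 1.0

world = VirtualWorld(obstacles=[5.0])
robot = VirtualRobot()
for step in range(200):                     # the simulation loop
    robot.act(controller(robot.sense(world)))
print(f"robot stopped at x = {robot.x:.2f}")  # just short of the obstacle at 5.0
```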

“The next big step in miniaturization […] requires the solution of hugely difficult problems and, in all likelihood, the use of exotic approaches to design and fabrication. […] It is impossible to shrink mechanical and electrical components, or MEMS devices, in order to reduce total robot size to a few micrometres. In any event, the physics of locomotion through a fluid changes at the microscale and simply shrinking mechanical components from macro to micro — even if it were possible — would fail to address this problem. A radical approach is to leave behind conventional materials and components and move to a bioengineered approach in which natural bacteria are modified by adding artificial components. The result is a hybrid of artificial and natural (biological) components. The bacterium has many desirable properties for a microbot. By selecting a bacterium with a flagellum, we have locomotion perfectly suited to the medium. […] Another hugely desirable characteristic is that the bacteria are able to naturally scavenge for energy, thus avoiding the otherwise serious problem of powering the microbots. […] Whatever technology is used to create the microbots, huge problems would have to be overcome before a swarm of medical microbots could become a practical reality. The first is technical: how do surgeons or medical technicians reliably control and monitor the swarm while it’s working inside the body? Or, assuming we can give the microbots sufficient intelligence and autonomy (also a very difficult challenge), do we forgo precise control and human intervention altogether by giving the robots the swarm intelligence to be able to do the job, i.e. find the problem, fix it, then exit? […] these questions bring us to what would undoubtedly represent the greatest challenge: validating the swarm of medical microbots as effective, dependable, and above all safe, then gaining approval and public acceptance for its use. […] Do we treat the validation of the medical microbot swarm as an engineering problem, and attempt to apply the same kinds of methods we would use to validate safety-critical systems such as air traffic control systems? Or do we instead regard the medical microbot swarm as a drug and validate it with conventional and (by and large) trusted processes, including clinical trials, leading to approval and licensing for use? My suspicion is that we will need a new combination of both approaches.”

Links:

E-puck mobile robot.
Jacques de Vaucanson’s Digesting Duck.
Cybernetics.
Alan Turing. W. Ross Ashby. Norbert Wiener. Warren McCulloch. William Grey Walter.
Turtle (robot).
Industrial robot. Mechanical arm. Robotic arm. Robot end effector.
Automated guided vehicle.
Remotely operated vehicle. Unmanned aerial vehicle. Remotely operated underwater vehicle. Wheelbarrow (robot).
Robot-assisted surgery.
Lego Mindstorms NXT. NXT Intelligent Brick.
Biomimetic robots.
Artificial life.
Braitenberg vehicle.
Shakey the robot. Sense-Plan-Act. Rodney Brooks. A robust layered control system for a mobile robot.
Toto the robot.
Slugbot. Ecobot. Microbial fuel cell.
Scratchbot.
Simultaneous localization and mapping (SLAM).
Programming by demonstration.
Evolutionary algorithm.
NASA Robonaut. BERT 2. Kismet (robot). Jules (robot). Frubber. Uncanny valley.
AIBO. Paro.
Cronos Robot. ECCEROBOT.
Swarm robotics. S-bot mobile robot. Swarmanoid project.
Artificial neural network.
Symbrion.
Webots.
Kilobot.
Microelectromechanical systems. I-SWARM project.
ALICE (Artificial Linguistic Internet Computer Entity). BINA 48 (Breakthrough Intelligence via Neural Architecture 48).

June 15, 2018 Posted by | Books, Computer science, Engineering, Medicine | Leave a comment

Developmental Biology (I)

On goodreads I called the book “[a]n excellent introduction to the field of developmental biology” and I gave it five stars.

Below I have included some sample observations from the first third of the book or so, as well as some supplementary links.

“The major processes involved in development are: pattern formation; morphogenesis or change in form; cell differentiation by which different types of cell develop; and growth. These processes involve cell activities, which are determined by the proteins present in the cells. Genes control cell behaviour by controlling where and when proteins are synthesized, and cell behaviour provides the link between gene action and developmental processes. What a cell does is determined very largely by the proteins it contains. The hemoglobin in red blood cells enables them to transport oxygen; the cells lining the vertebrate gut secrete specialized digestive enzymes. These activities require specialized proteins […] In development we are concerned primarily with those proteins that make cells different from one another and make them carry out the activities required for development of the embryo. Developmental genes typically code for proteins involved in the regulation of cell behaviour. […] An intriguing question is how many genes out of the total genome are developmental genes – that is, genes specifically required for embryonic development. This is not easy to estimate. […] Some studies suggest that in an organism with 20,000 genes, about 10% of the genes may be directly involved in development.”

“The fate of a group of cells in the early embryo can be determined by signals from other cells. Few signals actually enter the cells. Most signals are transmitted through the space outside of cells (the extracellular space) in the form of proteins secreted by one cell and detected by another. Cells may interact directly with each other by means of molecules located on their surfaces. In both these cases, the signal is generally received by receptor proteins in the cell membrane and is subsequently relayed through other signalling proteins inside the cell to produce the cellular response, usually by turning genes on or off. This process is known as signal transduction. These pathways can be very complex. […] The complexity of the signal transduction pathway means that it can be altered as the cell develops so the same signal can have a different effect on different cells. How a cell responds to a particular signal depends on its internal state and this state can reflect the cell’s developmental history — cells have good memories. Thus, different cells can respond to the same signal in very different ways. So the same signal can be used again and again in the developing embryo. There are thus rather few signalling proteins.”

“All vertebrates, despite their many outward differences, have a similar basic body plan — the segmented backbone or vertebral column surrounding the spinal cord, with the brain at the head end enclosed in a bony or cartilaginous skull. These prominent structures mark the antero-posterior axis with the head at the anterior end. The vertebrate body also has a distinct dorso-ventral axis running from the back to the belly, with the spinal cord running along the dorsal side and the mouth defining the ventral side. The antero-posterior and dorso-ventral axes together define the left and right sides of the animal. Vertebrates have a general bilateral symmetry around the dorsal midline so that outwardly the right and left sides are mirror images of each other though some internal organs such as the heart and liver are arranged asymmetrically. How these axes are specified in the embryo is a key issue. All vertebrate embryos pass through a broadly similar set of developmental stages and the differences are partly related to how and when the axes are set up, and how the embryo is nourished. […] A quite rare but nevertheless important event before gastrulation in mammalian embryos, including humans, is the splitting of the embryo into two, and identical twins can then develop. This shows the remarkable ability of the early embryo to regulate [in this context, regulation refers to ‘the ability of an embryo to restore normal development even if some portions are removed or rearranged very early in development’ – US] and develop normally when half the normal size […] In mammals, there is no sign of axes or polarity in the fertilized egg or during early development, and it only occurs later by an as yet unknown mechanism.”

“How is left–right established? Vertebrates are bilaterally symmetric about the midline of the body for many structures, such as eyes, ears, and limbs, but most internal organs are asymmetric. In mice and humans, for example, the heart is on the left side, the right lung has more lobes than the left, the stomach and spleen lie towards the left, and the bulk of the liver is towards the right. This handedness of organs is remarkably consistent […] Specification of left and right is fundamentally different from specifying the other axes of the embryo, as left and right have meaning only after the antero-posterior and dorso-ventral axes have been established. If one of these axes were reversed, then so too would be the left–right axis and this is the reason that handedness is reversed when you look in a mirror—your dorsoventral axis is reversed, and so left becomes right and vice versa. The mechanisms by which left–right symmetry is initially broken are still not fully understood, but the subsequent cascade of events that leads to organ asymmetry is better understood. The ‘leftward’ flow of extracellular fluid across the embryonic midline by a population of ciliated cells has been shown to be critical in mouse embryos in inducing asymmetric expression of genes involved in establishing left versus right. The antero-posterior patterning of the mesoderm is most clearly seen in the differences in the somites that form vertebrae: each individual vertebra has well defined anatomical characteristics depending on its location along the axis. Patterning of the skeleton along the body axis is based on the somite cells acquiring a positional value that reflects their position along the axis and so determines their subsequent development. […] It is the Hox genes that define positional identity along the antero-posterior axis […]. The Hox genes are members of the large family of homeobox genes that are involved in many aspects of development and are the most striking example of a widespread conservation of developmental genes in animals. The name homeobox comes from their ability to bring about a homeotic transformation, converting one region into another. Most vertebrates have clusters of Hox genes on four different chromosomes. A very special feature of Hox gene expression in both insects and vertebrates is that the genes in the clusters are expressed in the developing embryo in a temporal and spatial order that reflects their order on the chromosome. Genes at one end of the cluster are expressed in the head region, while those at the other end are expressed in the tail region. This is a unique feature in development, as it is the only known case where a spatial arrangement of genes on a chromosome corresponds to a spatial pattern in the embryo. The Hox genes provide the somites and adjacent mesoderm with positional values that determine their subsequent development.”

“Many of the genes that control the development of flies are similar to those controlling development in vertebrates, and indeed in many other animals. It seems that once evolution finds a satisfactory way of developing animal bodies, it tends to use the same mechanisms and molecules over and over again with, of course, some important modifications. […] The insect body is bilaterally symmetrical and has two distinct and largely independent axes: the antero-posterior and dorso-ventral axes, which are at right angles to each other. These axes are already partly set up in the fly egg, and become fully established and patterned in the very early embryo. Along the antero-posterior axis the embryo becomes divided into a number of segments, which will become the head, thorax, and abdomen of the larva. A series of evenly spaced grooves forms more or less simultaneously and these demarcate parasegments, which later give rise to the segments of the larva and adult. Of the fourteen larval parasegments, three contribute to mouthparts of the head, three to the thoracic region, and eight to the abdomen. […] Development is initiated by a gradient of the protein Bicoid, along the axis running from anterior to posterior in the egg; this provides the positional information required for further patterning along this axis. Bicoid is a transcription factor and acts as a morphogen—a graded concentration of a molecule that switches on particular genes at different threshold concentrations, thereby initiating a new pattern of gene expression along the axis. Bicoid activates anterior expression of the gene hunchback […]. The hunchback gene is switched on only when Bicoid is present above a certain threshold concentration. The protein of the hunchback gene, in turn, is instrumental in switching on the expression of the other genes, along the antero-posterior axis. […] The dorso-ventral axis is specified by a different set of maternal genes from those that specify the anterior-posterior axis, but by a similar mechanism. […] Once each parasegment is delimited, it behaves as an independent developmental unit, under the control of a particular set of genes. The parasegments are initially similar but each will soon acquire its own unique identity mainly due to Hox genes.”
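
The morphogen logic described above (a graded concentration switching on different genes at different thresholds) is easy to illustrate numerically. In the sketch below the exponential shape of the gradient, the decay length, and the two target-gene thresholds are all invented for the example; only the threshold principle itself comes from the text.

```python
import math

# Toy morphogen gradient: concentration falls off along the antero-posterior
# axis, and each target gene is switched on wherever the local concentration
# exceeds its own threshold. Shape, decay length, and thresholds are assumed.
decay_length = 0.2        # how quickly the gradient falls off (assumed)

def bicoid(x):
    """Assumed exponential gradient, highest at the anterior pole (x = 0)."""
    return math.exp(-x / decay_length)

thresholds = {"gene_A": 0.5, "gene_B": 0.1}   # hypothetical target genes

for x in [i / 10 for i in range(11)]:         # normalized axis, 0 = anterior
    active = [g for g, t in thresholds.items() if bicoid(x) > t]
    print(f"x = {x:.1f}  concentration = {bicoid(x):.2f}  active: {active}")
# gene_A (high threshold) stays on only in roughly the anterior 14% of the
# axis; gene_B extends to about 46% -- different thresholds, different domains.
```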

“Because plant cells have rigid cell walls and, unlike animal cells, cannot move, a plant’s development is very much the result of patterns of oriented cell divisions and increase in cell size. Despite this difference, cell fate in plant development is largely determined by similar means as in animals – by a combination of positional signals and intercellular communication. […] The logic behind the spatial layouts of gene expression that pattern a developing flower is similar to that of Hox gene action in patterning the body axis in animals, but the genes involved are completely different. One general difference between plant and animal development is that most of the development occurs not in the embryo but in the growing plant. Unlike an animal embryo, the mature plant embryo inside a seed is not simply a smaller version of the organism it will become. All the ‘adult’ structures of the plant – shoots, roots, stalks, leaves, and flowers – are produced in the adult plant from localized groups of undifferentiated cells known as meristems. […] Another important difference between plant and animal cells is that a complete, fertile plant can develop from a single differentiated somatic cell and not just from a fertilized egg. This suggests that, unlike the differentiated cells of adult animals, some differentiated cells of the adult plant may retain totipotency and so behave like animal embryonic stem cells. […] The small organic molecule auxin is one of the most important and ubiquitous chemical signals in plant development and plant growth.”

“All animal embryos undergo a dramatic change in shape during their early development. This occurs primarily during gastrulation, the process that transforms a two-dimensional sheet of cells into the complex three-dimensional animal body, and involves extensive rearrangements of cell layers and the directed movement of cells from one location to another. […] Change in form is largely a problem in cell mechanics and requires forces to bring about changes in cell shape and cell migration. Two key cellular properties involved in changes in animal embryonic form are cell contraction and cell adhesiveness. Contraction in one part of a cell can change the cell’s shape. Changes in cell shape are generated by forces produced by the cytoskeleton, an internal protein framework of filaments. Animal cells stick to one another, and to the external support tissue that surrounds them (the extracellular matrix), through interactions involving cell-surface proteins. Changes in the adhesion proteins at the cell surface can therefore determine the strength of cell–cell adhesion and its specificity. These adhesive interactions affect the surface tension at the cell membrane, a property that contributes to the mechanics of the cell behaviour. Cells can also migrate, with contraction again playing a key role. An additional force that operates during morphogenesis, particularly in plants but also in a few aspects of animal embryogenesis, is hydrostatic pressure, which causes cells to expand. In plants there is no cell movement or change in shape, and changes in form are generated by oriented cell division and cell expansion. […] Localized contraction can change the shape of the cells as well as the sheet they are in. For example, folding of a cell sheet—a very common feature in embryonic development—is caused by localized changes in cell shape […]. Contraction on one side of a cell results in it acquiring a wedge-like form; when this occurs among a few cells locally in a sheet, a bend occurs at the site, deforming the sheet.”

“The integrity of tissues in the embryo is maintained by adhesive interactions between cells and between cells and the extracellular matrix; differences in cell adhesiveness also help maintain the boundaries between different tissues and structures. Cells stick to each other by means of cell adhesion molecules, such as cadherins, which are proteins on the cell surface that can bind strongly to proteins on other cell surfaces. About 30 different types of cadherins have been identified in vertebrates. […] Adhesion of a cell to the extracellular matrix, which contains proteins such as collagen, is by the binding of integrins in the cell membrane to these matrix molecules. […] Convergent extension plays a key role in gastrulation of [some] animals and […] morphogenetic processes. It is a mechanism for elongating a sheet of cells in one direction while narrowing its width, and occurs by rearrangement of cells within the sheet, rather than by cell migration or cell division. […] For convergent extension to take place, the axes along which the cells will intercalate and extend must already have been defined. […] Gastrulation in vertebrates involves a much more dramatic and complex rearrangement of tissues than in sea urchins […] But the outcome is the same: the transformation of a two-dimensional sheet of cells into a three-dimensional embryo, with ectoderm, mesoderm, and endoderm in the correct positions for further development of body structure. […] Directed dilation is an important force in plants, and results from an increase in hydrostatic pressure inside a cell. Cell enlargement is a major process in plant growth and morphogenesis, providing up to a fiftyfold increase in the volume of a tissue. The driving force for expansion is the hydrostatic pressure exerted on the cell wall as a result of the entry of water into cell vacuoles by osmosis. Plant-cell expansion involves synthesis and deposition of new cell-wall material, and is an example of directed dilation. The direction of cell growth is determined by the orientation of the cellulose fibrils in the cell wall.”

Links:

Developmental biology.
August Weismann. Hans Driesch. Hans Spemann. Hilde Mangold. Spemann-Mangold organizer.
Induction. Cleavage.
Developmental model organisms.
Blastula. Embryo. Ectoderm. Mesoderm. Endoderm.
Gastrulation.
Xenopus laevis.
Notochord.
Neurulation.
Organogenesis.
DNA. Gene. Protein. Transcription factor. RNA polymerase.
Epiblast. Trophoblast/trophectoderm. Inner cell mass.
Pluripotency.
Polarity in embryogenesis/animal-vegetal axis.
Primitive streak.
Hensen’s node.
Neural tube. Neural fold. Neural crest cells.
Situs inversus.
Gene silencing. Morpholino.
Drosophila embryogenesis.
Pair-rule gene.
Cell polarity.
Mosaic vs regulative development.
Caenorhabditis elegans.
Fate mapping.
Plasmodesmata.
Arabidopsis thaliana.
Apical-basal axis.
Hypocotyl.
Phyllotaxis.
Primordium.
Quiescent centre.
Filopodia.
Radial cleavage. Spiral cleavage.

June 11, 2018 Posted by | Biology, Books, Botany, Genetics, Molecular biology | Leave a comment

Words

Most of the words included below are words which I encountered while reading the Tom Holt novels Ye Gods!, Here Comes The Sun, Grailblazers, and Flying Dutch, as well as Lewis Wolpert’s Developmental Biology and Parminder & Swales’s text 100 Cases in Orthopaedics and Rheumatology.

Epigraphy. Plangent. Simony. Simpulum. Testoon. Sybarite/sybaritic. Culverin. Niff. Gavotte. Welch. Curtilage. Basilar. Dusack. Galliard. Foolscap. Spinet. Netsuke. Pinny. Shufti. Foumart.

Compere. Triune. Sistrum. Tenon. Buckshee. Jink. Chiropody. Slingback. Narthex. Nidus. Subluxation. Aponeurosis. Psoas. Articular. Varus. Valgus. Talus. Orthosis/orthotics. Acetabulum. Labrum.

Peculation. Purler. Macédoine. Denticle. Inflorescence. Invagination. Intercalate. Antalgic. Chondral. Banjax. Bodge/peck. Remora. Chicory. Gantry. Aerate. Erk. Recumbent. Pootle. Stylus. Vamplate.

Tappet. Frumenty. Woad. Breviary. Witter. Errantry. Pommy. Lychee. Priory. Bourse. Phylloxera. Dozy. Whitlow. Crampon. Brill. Fiddly. Acrostic. Scrotty. Ricasso. Tetchy.

June 10, 2018 Posted by | Books, Language | Leave a comment

Blood (II)

Below I have added some quotes from the chapters of the book I did not cover in my first post, as well as some supplementary links.

Haemoglobin is of crucial biological importance; it is also easy to obtain safely in large quantities from donated blood. These properties have resulted in its becoming the most studied protein in human history. Haemoglobin played a key role in the history of our understanding of all proteins, and indeed the science of biochemistry itself. […] Oxygen transport defines the primary biological function of blood. […] Oxygen gas consists of two atoms of oxygen bound together to form a symmetrical molecule. However, oxygen cannot be transported in the plasma alone. This is because water is very poor at dissolving oxygen. Haemoglobin’s primary function is to increase this solubility; it does this by binding the oxygen gas on to the iron in its haem group. Every haem can bind one oxygen molecule, increasing the amount of oxygen able to dissolve in the blood.”

“An iron atom can exist in a number of different forms depending on how many electrons it has in its atomic orbitals. In its ferrous (iron II) state iron can bind oxygen readily. The haemoglobin protein has therefore evolved to stabilize its haem iron cofactor in this ferrous state. The result is that over fifty times as much oxygen is stored inside the confines of the red blood cell compared to outside in the watery plasma. However, using iron to bind oxygen comes at a cost. Iron (II) can readily lose one of its electrons to the bound oxygen, a process called ‘oxidation’. So the same form of iron that can bind oxygen avidly (ferrous) also readily reacts with that same oxygen forming an unreactive iron III state, called ‘ferric’. […] The complex structure of the protein haemoglobin is required to protect the ferrous iron from oxidizing. The haem iron is held in a precise configuration within the protein. Specific amino acids are ideally positioned to stabilize the iron–oxygen bond and prevent it from oxidizing. […] the iron stays ferrous despite the presence of the nearby oxygen. Having evolved over many hundreds of millions of years, this stability is very difficult for chemists to mimic in the laboratory. This is one reason why, desirable as it might be in terms of cost and convenience, it is not currently possible to replace blood transfusions with a simple small chemical iron oxygen carrier.”

“Given the success of the haem iron and globin combination in haemoglobin, it is no surprise that organisms have used this basic biochemical architecture for a variety of purposes throughout evolution, not just oxygen transport in blood. One example is the protein myoglobin. This protein resides inside animal cells; in the human it is found in the heart and skeletal muscle. […] Myoglobin has multiple functions. Its primary role is as an aid to oxygen diffusion. Whereas haemoglobin transports oxygen from the lung to the cell, myoglobin transports it once it is inside the cell. As oxygen is so poorly soluble in water, having a chain of molecules inside the cell that can bind and release oxygen rapidly significantly decreases the time it takes the gas to get from the blood capillary to the part of the cell—the mitochondria—where it is needed. […] Myoglobin can also act as an emergency oxygen backup store. In humans this is trivial and of questionable importance. Not so in diving mammals such as whales and dolphins that have as much as thirty times the myoglobin content of the terrestrial equivalent; indeed those mammals that dive for the longest duration have the most myoglobin. […] The third known function of myoglobin is to protect the muscle cells from damage by nitric oxide gas.”

“The heart is the organ that pumps blood around the body. If the heart stops functioning, blood does not flow. The driving force for this flow is the pressure difference between the arterial blood leaving the heart and the returning venous blood. The decreasing pressure in the venous side explains the need for unidirectional valves within veins to prevent the blood flowing in the wrong direction. Without them the return of the blood through the veins to the heart would be too slow, especially when standing up, when the venous pressure struggles to overcome gravity. […] normal [blood pressure] ranges rise slowly with age. […] high resistance in the arterial circulation at higher blood pressures [places] additional strain on the left ventricle. If the heart is weak, it may fail to achieve the extra force required to pump against this resistance, resulting in heart failure. […] in everyday life, a low blood pressure is rarely of concern. Indeed, it can be a sign of fitness as elite athletes have a much lower resting blood pressure than the rest of the population. […] the effect of exercise training is to thicken the muscles in the walls of the heart and enlarge the chambers. This enables more blood to be pumped per beat during intense exercise. The consequence of this extra efficiency is that when an athlete is resting—and therefore needs no more oxygen than a more sedentary person—the heart rate and blood pressure are lower than average. Most people’s experience of hypotension will be reflected by dizzy spells and lack of balance, especially when moving quickly to an upright position. This is because more blood pools in the legs when you stand up, meaning there is less blood for the heart to pump. The immediate effect should be for the heart to beat faster to restore the pressure. If there is a delay, the decrease in pressure can decrease the blood flow to the brain and cause dizziness; in extreme cases this can lead to fainting.”
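
The point that flow is driven by the arterio-venous pressure difference is often summarized with the hydraulic analogue flow = pressure difference / resistance, and mean arterial pressure is commonly estimated as diastolic pressure plus one third of the pulse pressure. Neither formula is taken from the book, and the numbers below are only illustrative.

```python
# Hydraulic analogue: flow = pressure difference / resistance.
# The mean-arterial-pressure rule of thumb (diastolic + one third of the
# pulse pressure) is a standard clinical approximation, not the book's.
def mean_arterial_pressure(systolic, diastolic):
    return diastolic + (systolic - diastolic) / 3

map_mmhg = mean_arterial_pressure(120, 80)   # 'textbook' resting values
venous_mmhg = 5                              # assumed central venous pressure
resistance = 1.0                             # arbitrary units
flow = (map_mmhg - venous_mmhg) / resistance

print(f"MAP ~ {map_mmhg:.0f} mmHg, flow ~ {flow:.0f} (arbitrary units)")

# If peripheral resistance rises by 50% and flow is to stay the same, the
# required arterial pressure rises correspondingly -- the extra strain on the
# left ventricle mentioned above.
print(f"arterial pressure needed at 1.5x resistance: "
      f"~{venous_mmhg + flow * resistance * 1.5:.0f} mmHg")
```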

“If hypertension is persistent, patients are most likely to be treated with drugs that target specific pathways that the body uses to control blood pressure. For example angiotensin is a protein that can trigger secretion of the hormone aldosterone from the adrenal gland. In its active form angiotensin can directly constrict blood vessels, while aldosterone enhances salt and water retention, so raising blood volume. Both these effects increase blood pressure. Angiotensin is converted into its active form by an enzyme called ‘Angiotensin Converting Enzyme’ (ACE). An ACE inhibitor drug prevents this activity, keeping angiotensin in its inactive form; this will therefore drop the patient’s blood pressure. […] The metal calcium controls many processes in the body. Its entry into muscle cells triggers muscle contraction. Preventing this entry can therefore reduce the force of contraction of the heart and the ability of arteries to constrict. Both of these will have the effect of decreasing blood pressure. Calcium enters muscle cells via specific protein-based channels. Drugs that block these channels (calcium channel blockers) are therefore highly effective at treating hypertension.”

Autoregulation is a homeostatic process designed to ensure that blood flow remains constant [in settings where constancy is desirable]. However, there are many occasions when an organism actively requires a change in blood flow. It is relatively easy to imagine what these are. In the short term, blood supplies oxygen and nutrients. When these are used up rapidly, or their supply becomes limited, the response will be to increase blood flow. The most obvious example is the twenty-fold increase in oxygen and glucose consumption that occurs in skeletal muscle during exercise when compared to rest. If there were no accompanying increase in blood flow to the muscle the oxygen supply would soon run out. […] There are hundreds of molecules known that have the ability to increase or decrease blood flow […] The surface of all blood vessels is lined by a thin layer of cells, the ‘endothelium’. Endothelial cells form a barrier between the blood and the surrounding tissue, controlling access of materials into and out of the blood. For example white blood cells can enter or leave the circulation via interacting with the endothelium; this is the route by which neutrophils migrate from the blood to the site of tissue damage or bacterial/viral attack as part of the innate immune response. However, the endothelium is not just a selective barrier. It also plays an active role in blood physiology and biochemistry.”

“Two major issues [related to blood transfusions] remained at the end of the 19th century: the problem of clotting, which all were aware of; and the problem of blood group incompatibility, which no one had the slightest idea even existed. […] For blood transfusions to ever make a recovery the key issues of blood clotting and adverse side effects needed to be resolved. In 1875 the Swedish biochemist Olof Hammarsten showed that adding calcium accelerated the rate of blood clotting (we now know the mechanism for this is that key enzymes in blood platelets that catalyse fibrin formation require calcium for their function). It therefore made sense to use chemicals that bind calcium to try to prevent clotting. Calcium ions are positively charged; adding negatively charged ions such as oxalate and citrate neutralized the calcium, preventing its clot-promoting action. […] At the same time as anticoagulants were being discovered, the reason why some blood transfusions failed even when there were no clots was becoming clear. It had been shown that animal blood given to humans tended to clump together or agglutinate, eventually bursting and releasing free haemoglobin and causing kidney damage. In the early 1900s, working in Vienna, Karl Landsteiner showed the same effect could occur with human-to-human transfusion. The trick was the ability to separate blood cells from serum. This enabled mixing blood cells from a variety of donors with plasma from a variety of participants. Using his laboratory staff as subjects, Landsteiner showed that only some combinations caused the agglutination reaction. Some donor cells (now known as type O) never clumped. Others clumped depending on the nature of the plasma in a reproducible manner. A careful study of Landsteiner’s results revealed the ABO blood type distinctions […]. Versions of these agglutination tests still form the basis of checking transfused blood today.”

“No blood product can be made completely sterile, no matter how carefully it is processed. The best that can be done is to ensure that no new bacteria or viruses are added during the purification, storage, and transportation processes. Nothing can be done to inactivate any viruses that are already present in the donor’s blood, for the harsh treatments necessary to do this would inevitably damage the viability of the product or be prohibitively expensive to implement on the industrial scale that the blood market has become. […] In the 1980s over half the US haemophiliac population was HIV positive.”

“Three fundamentally different ways have been attempted to replace red blood cell transfusions. The first uses a completely chemical approach and makes use of perfluorocarbons, inert chemicals that, in liquid form, can dissolve gasses without reacting with them. […] Perfluorocarbons can dissolve oxygen much more effectively than water. […] The problem with their use as a blood substitute is that the amount of oxygen dissolved in these solutions is linear with increasing pressure. This means that the solution lacks the advantages of the sigmoidal binding curve of haemoglobin, which has evolved to maximize the amount of oxygen captured from the limited fraction found in air (20 per cent oxygen). However, to deliver the same amount of oxygen as haemoglobin, patients using the less efficient perfluorocarbons in their blood need to breathe gas that is almost 100 per cent pure oxygen […]; this restricts the use of these compounds. […] The second type of blood substitute makes use of haemoglobin biology. Initial attempts used purified haemoglobin itself. […] there is no haemoglobin-based blood substitute in general use today […] The problem for the lack of uptake is not that blood substitutes cannot replace red blood cell function. A variety of products have been shown to stay in the vasculature for several days, provide volume support, and deliver oxygen. However, they have suffered due to adverse side effects, most notably cardiac complications. […] In nature the plasma proteins haptoglobin and haemopexin bind and detoxify any free haemoglobin and haem released from red blood cells. The challenge for blood substitute research is to mimic these effects in a product that can still deliver oxygen. […] Despite ongoing research, these problems may prove to be insurmountable. There is therefore interest in a third approach. This is to grow artificial red blood cells using stem cell technology.”
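
The contrast drawn above (linear dissolution in a perfluorocarbon versus haemoglobin's sigmoidal binding curve) can be illustrated with a Hill-type saturation curve. The Hill coefficient of about 2.8 and P50 of about 26 mmHg are commonly quoted textbook values rather than figures from the book, and the perfluorocarbon constant is arbitrary.

```python
# Linear vs sigmoidal oxygen carriage. Dissolved oxygen in a perfluorocarbon
# rises in simple proportion to oxygen pressure, whereas haemoglobin
# saturation follows a sigmoidal (Hill-type) curve that is already nearly
# saturated at the ~100 mmHg reached when breathing ordinary air.
def hb_saturation(pO2, p50=26.0, n=2.8):       # commonly quoted textbook values
    return pO2**n / (p50**n + pO2**n)

def pfc_dissolved(pO2, k=0.005):               # linear, Henry's-law-like; k arbitrary
    return k * pO2

for pO2 in (26, 40, 100, 500):                 # mmHg: P50, tissue, air, ~pure O2
    print(f"pO2 {pO2:3d} mmHg | Hb saturation {hb_saturation(pO2):.2f} "
          f"| PFC (relative) {pfc_dissolved(pO2):.2f}")
# Haemoglobin is ~98% saturated at 100 mmHg, so breathing pure oxygen adds
# little; the perfluorocarbon keeps scaling linearly, which is why patients
# given such substitutes need to breathe nearly 100% oxygen.
```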

Links:

Porphyrin. Globin.
Felix Hoppe-Seyler. Jacques Monod. Jeffries Wyman. Jean-Pierre Changeux.
Allosteric regulation. Monod-Wyman-Changeux model.
Structural Biochemistry/Hemoglobin (wikibooks). (Many of the topics covered in this link – e.g. comments on affinity, T/R-states, oxygen binding curves, the Bohr effect, etc. – are also covered in the book, so although I do link to some of the other topics also covered in this link below it should be noted that I did in fact leave out quite a few potentially relevant links on account of those topics being covered in the above link).
1,3-Bisphosphoglycerate.
Erythrocruorin.
Haemerythrin.
Hemocyanin.
Cytoglobin.
Neuroglobin.
Sickle cell anemia. Thalassaemia. Hemoglobinopathy. Porphyria.
Pulse oximetry.
Daniel Bernoulli. Hydrodynamica. Stephen Hales. Karl von Vierordt.
Arterial line.
Sphygmomanometer. Korotkoff sounds. Systole. Diastole. Blood pressure. Mean arterial pressure. Hypertension. Antihypertensive drugs. Atherosclerosis Pathology. Beta blocker. Diuretic.
Autoregulation.
Guanylate cyclase. Glyceryl trinitrate.
Blood transfusion. Richard Lower. Jean-Baptiste Denys. James Blundell.
Parabiosis.
Penrose Inquiry.
ABLE (Age of Transfused Blood in Critically Ill Adults) trial.
RECESS trial.

June 7, 2018 Posted by | Biology, Books, Cardiology, Chemistry, History, Medicine, Molecular biology, Pharmacology, Studies | Leave a comment

Molecular biology (III)

Below I have added a few quotes and links related to the last few chapters of the book’s coverage.

“Normal ageing results in part from exhaustion of stem cells, the cells that reside in most organs to replenish damaged tissue. As we age DNA damage accumulates and this eventually causes the cells to enter a permanent non-dividing state called senescence. This protective ploy however has its downside as it limits our lifespan. When too many stem cells are senescent the body is compromised in its capacity to renew worn-out tissue, causing the effects of ageing. This has a knock-on effect of poor intercellular communication, mitochondrial dysfunction, and loss of protein balance (proteostasis). Low levels of chronic inflammation also increase with ageing and could be the trigger for changes associated with many age-related disorders.”

“There has been a dramatic increase in ageing research using yeast and invertebrates, leading to the discovery of more ‘ageing genes’ and their pathways. These findings can be extrapolated to humans since longevity pathways are conserved between species. The major pathways known to influence ageing have a common theme, that of sensing and metabolizing nutrients. […] The field was advanced by identification of the mammalian Target Of Rapamycin, aptly named mTOR. mTOR acts as a molecular sensor that integrates growth stimuli with nutrient and oxygen availability. Small molecules such as rapamycin that reduce mTOR signalling act in a similar way to severe dietary restriction in slowing the ageing process in organisms such as yeast and worms. […] Rapamycin and its derivatives (rapalogs) have been involved in clinical trials on reducing age-related pathologies […] Another major ageing pathway is telomere maintenance. […] Telomere attrition is a hallmark of ageing and studies have established an association between shorter telomere length (TL) and the risk of various common age-related ailments […] Telomere loss is accelerated by known determinants of ill health […] The relationship between TL and cancer appears complex.”
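
The telomere-attrition idea can be sketched very crudely: each division removes a stretch of telomere, and once a critical length is reached the cell becomes senescent. The starting length, loss per division, and critical length below are rough ballpark figures of the kind usually quoted, not numbers from the book.

```python
# Rough sketch of telomere attrition: each division shortens the telomere,
# and a critical length triggers senescence. All figures are assumed,
# ballpark values, not taken from the book.
telomere_bp = 10_000        # starting telomere length, base pairs (assumed)
loss_per_division = 100     # bp lost per cell division (assumed)
critical_bp = 4_000         # length at which senescence is triggered (assumed)

divisions = 0
while telomere_bp > critical_bp:
    telomere_bp -= loss_per_division
    divisions += 1

print(f"senescence after ~{divisions} divisions")   # ~60 with these numbers
# Telomerase (see the links below) counteracts this by re-extending telomeres,
# which is why its activity matters for both ageing and cancer.
```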

“Cancer is not a single disease but a range of diseases caused by abnormal growth and survival of cells that have the capacity to spread. […] One of the early stages in the acquisition of an invasive phenotype is epithelial-mesenchymal transition (EMT). Epithelial cells form skin and membranes and for this they have a strict polarity (a top and a bottom) and are bound in position by close connections with adjacent cells. Mesenchymal cells on the other hand are loosely associated, have motility, and lack polarization. The transition between epithelial and mesenchymal cells is a normal process during embryogenesis and wound healing but is deregulated in cancer cells. EMT involves transcriptional reprogramming in which epithelial structural proteins are lost and mesenchymal ones acquired. This facilitates invasion of a tumour into surrounding tissues. […] Cancer is a genetic disease but mostly not inherited from the parents. Normal cells evolve to become cancer cells by acquiring successive mutations in cancer-related genes. There are two main classes of cancer genes, the proto-oncogenes and the tumour suppressor genes. The proto-oncogenes code for protein products that promote cell proliferation. […] A mutation in a proto-oncogene changes it to an ‘oncogene’ […] One gene above all others is associated with cancer suppression and that is TP53. […] approximately half of all human cancers carry a mutated TP53 and in many more, p53 is deregulated. […] p53 plays a key role in eliminating cells that have either acquired activating oncogenes or excessive genomic damage. Thus mutations in the TP53 gene allows cancer cells to survive and divide further by escaping cell death […] A mutant p53 not only lacks the tumour suppressor functions of the normal or wild type protein but in many cases it also takes on the role of an oncogene. […] Overall 5-10 per cent of cancers occur due to inherited or germ line mutations that are passed from parents to offspring. Many of these genes code for DNA repair enzymes […] The vast majority of cancer mutations are not inherited; instead they are sporadic with mutations arising in somatic cells. […] At least 15 per cent of cancers are attributable to infectious agents, examples being HPV and cervical cancer, H. pylori and gastric cancer, and also hepatitis B or C and liver cancer.”

“There are about 10 million different sites at which people can vary in their DNA sequence within the 3 billion bases in our DNA. […] A few, but highly variable sequences or minisatellites are chosen for DNA profiling. These give a highly sensitive procedure suitable for use with small amounts of body fluids […] even shorter sequences called microsatellite repeats [are also] used. Each marker or microsatellite is a short tandem repeat (STR) of two to five base pairs of DNA sequence. A single STR will be shared by up to 20 per cent of the population but by using a dozen or so identification markers in profile, the error is miniscule. […] Microsatellites are extremely useful for analysing low-quality or degraded DNA left at a crime scene as their short sequences are usually preserved. However, DNA in specimens that have not been optimally preserved persists in exceedingly small amounts and is also highly fragmented. It is probably also riddled by contamination and chemical damage. Such sources of DNA are too degraded to obtain a profile using genomic STRs and in these cases mitochondrial DNA, being more abundant, is more useful than nuclear DNA for DNA profiling. […] Mitochondrial DNA profiling is the method of choice for determining the identities of missing or unknown people when a maternally linked relative can be found. Molecular biologists can amplify hypervariable regions of mitochondrial DNA by PCR to obtain enough material for analysis. The DNA products are sequenced and single nucleotide differences are sought with a reference DNA from a maternal relative. […] It has now become possible for […] ancient DNA to reveal much more than genotype matches. […] Pigmentation characteristics can now be determined from ancient DNA since skin, hair, and eye colour are some of the easiest characteristics to predict. This is due to the limited number of base differences or SNPs required to explain most of the variability.”
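
The 'dozen or so markers' remark is simple arithmetic, under the idealizing assumption that the markers are inherited independently and that each allele is shared by 20 per cent of the population:

```python
# Random-match probability, assuming independent markers each shared by at
# most 20% of the population. Real profiling systems use measured per-allele
# frequencies, so this is only an order-of-magnitude illustration.
share_per_marker = 0.20
for n_markers in (1, 6, 12, 16):
    random_match_prob = share_per_marker ** n_markers
    print(f"{n_markers:2d} markers: random match probability ~ {random_match_prob:.1e}")
# 12 markers -> ~4e-09, i.e. roughly one chance match per 250 million people.
```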

“A broad range of debilitating and fatal conditions, none of which can be cured, are associated with mitochondrial DNA mutations. […] [M]itochondrial DNA mutates ten to thirty times faster than nuclear DNA […] Mitochondrial DNA mutates at a higher rate than nuclear DNA due to higher numbers of DNA molecules and reduced efficiency in controlling DNA replication errors. […] Over 100,000 copies of mitochondrial DNA are present in the cytoplasm of the human egg or oocyte. After fertilization, only maternal mitochondria survive; the small numbers of the father’s mitochondria in the zygote are targeted for destruction. Thus all mitochondrial DNA for all cell types in the resulting embryo is maternal-derived. […] Patients affected by mitochondrial disease usually have a mixture of wild type (normal) and mutant mitochondrial DNA and the disease severity depends on the ratio of the two. Importantly the actual level of mutant DNA in a mother’s heteroplas[m]y […curiously the authors throughout the coverage insist on spelling this ‘heteroplasty’, which according to google is something quite different – I decided to correct the spelling error (?) here – US] is not inherited and offspring can be better or worse off than the mother. This also causes uncertainty since the ratio of wild type to mutant mitochondria may change during development. […] Over 700 mutations in mitochondrial DNA have been found leading to myopathies, neurodegeneration, diabetes, cancer, and infertility.”
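
The remark that the wild-type/mutant ratio 'may change during development' reflects random sampling of mitochondrial genomes as cells divide (often discussed as a mitochondrial bottleneck). The sketch below simulates that drift by repeated resampling; the bottleneck size, the number of resampling rounds, and the starting mutant fraction are all invented for illustration.

```python
import random

# Toy illustration of why a mother's heteroplasmy level is not simply
# inherited: if only a small sample of mitochondrial genomes is passed on
# (and re-sampled at each division), the mutant fraction drifts at random.
def offspring_mutant_fraction(maternal_fraction, bottleneck=200, generations=15):
    f = maternal_fraction
    for _ in range(generations):
        mutants = sum(random.random() < f for _ in range(bottleneck))
        f = mutants / bottleneck        # new mutant fraction after this division
    return f

random.seed(1)
maternal = 0.30                          # assumed maternal mutant fraction
outcomes = [offspring_mutant_fraction(maternal) for _ in range(5)]
print([f"{x:.2f}" for x in outcomes])
# Offspring can end up noticeably better or worse off than the mother,
# matching the uncertainty described in the excerpt.
```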

Links:

Dementia. Alzheimer’s disease. Amyloid hypothesis. Tau protein. Proteopathy. Parkinson’s disease. TP53-inducible glycolysis and apoptosis regulator (TIGAR).
Progeria. Progerin. Werner’s syndrome. Xeroderma pigmentosum. Cockayne syndrome.
Shelterin.
Telomerase.
Alternative lengthening of telomeres: models, mechanisms and implications (Nature).
Coats plus syndrome.
Neoplasia. Tumor angiogenesis. Inhibitor protein MDM2.
Li–Fraumeni syndrome.
Non-coding RNA networks in cancer (Nature).
Cancer stem cell. (“The reason why current cancer therapies often fail to eradicate the disease is that the CSCs survive current DNA damaging treatments and repopulate the tumour.” See also this IAS lecture which covers closely related topics – US.)
Imatinib.
Restriction fragment length polymorphism (RFLP).
CODIS.
MC1R.
Archaic human admixture with modern humans.
El Tor strain.
DNA barcoding.
Hybrid breakdown/-inviability.
Trastuzumab.
Digital PCR.
Pearson’s syndrome.
Mitochondrial replacement therapy.
Synthetic biology.
Artemisinin.
Craig Venter.
Genome editing.
Indel.
CRISPR.
Tyrosinemia.

June 3, 2018 Posted by | Biology, Books, Cancer/oncology, Genetics, Medicine, Molecular biology | Leave a comment

Blood (I)

As I also mentioned on goodreads, I was far from impressed with the first few pages of this book – but I read on, and the book actually turned out to include a decent amount of very reasonable coverage. Taking into consideration the way the author started out, the three-star rating should be considered a high rating, and in some parts of the book the author covers very complicated stuff in a really quite decent manner, considering the format of the book and its target group.

Below I have added some quotes and some links to topics/people/ideas/etc. covered in the first half of the book.

“[Clotting] makes it difficult to study the components of blood. It also [made] it impossible to store blood for transfusion [in the past]. So there was a need to find a way to prevent clotting. Fortunately the discovery that the metal calcium accelerated the rate of clotting enabled the development of a range of compounds that bound calcium and therefore prevented this process. One of them, citrate, is still in common use today [here’s a relevant link, US] when blood is being prepared for storage, or to stop blood from clotting while it is being pumped through kidney dialysis machines and other extracorporeal circuits. Adding citrate to blood, and leaving it alone, will result in gravity gradually separating the blood into three layers; the process can be accelerated by rapid spinning in a centrifuge […]. The top layer is clear and pale yellow or straw-coloured in appearance. This is the plasma, and it contains no cells. The bottom layer is bright red and contains the dense pellet of red cells that have sunk to the bottom of the tube. In-between these two layers is a very narrow layer, called the ‘buffy coat’ because of its pale yellow-brown appearance. This contains white blood cells and platelets. […] red cells, white cells, and platelets […] define the primary functions of blood: oxygen transport, immune defence, and coagulation.”

“The average human has about five trillion red blood cells per litre of blood or thirty trillion […] in total, making up a quarter of the total number of cells in the body. […] It is clear that the red cell has primarily evolved to perform a single function, oxygen transportation. Lacking a nucleus, and the requisite machinery to control the synthesis of new proteins, there is a limited ability for reprogramming or repair. […] each cell [makes] a complete traverse of the body’s circulation about once a minute. In its three- to four-month lifetime, this means every cell will do the equivalent of 150,000 laps around the body. […] Red cells lack mitochondria; they get their energy by fermenting glucose. […] A prosaic explanation for their lack of mitochondria is that it prevents the loss of any oxygen picked up from the lungs on the cells’ journey to the tissues that need it. The shape of the red cell is both deformable and elastic. In the bloodstream each cell is exposed to large shear forces. Yet, due to the properties of the membrane, they are able to constrict to enter blood vessels smaller in diameter than their normal size, bouncing back to their original shape on exiting the vessel the other side. This ability to safely enter very small openings allows capillaries to be very small. This in turn enables every cell in the body to be close to a capillary. Oxygen consequently only needs to diffuse a short distance from the blood to the surrounding tissue; this is vital as oxygen diffusion outside the bloodstream is very slow. Various pathologies, such as diabetes, peripheral vascular disease, and septic shock disturb this deformability of red blood cells, with deleterious consequences.”
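
The red-cell figures quoted above are internally consistent, as a quick back-of-the-envelope check shows (all inputs are taken from the excerpt; only the 105-day mid-point of 'three to four months' is my choice):

```python
# Quick consistency check of the red-cell figures quoted above.
cells_per_litre = 5e12
total_cells = 30e12
laps_per_minute = 1
lifetime_days = 105                      # mid-point of 'three to four months'

implied_blood_volume_l = total_cells / cells_per_litre
total_laps = laps_per_minute * 60 * 24 * lifetime_days

print(f"implied blood volume: ~{implied_blood_volume_l:.0f} litres")
print(f"laps in a lifetime:  ~{total_laps:,.0f}")
# ~6 litres and ~150,000 laps -- consistent with the rounded figures quoted.
```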

“Over thirty different substances, proteins and carbohydrates, contribute to an individual’s blood group. By far the best known are the ABO and Rhesus systems. This is not because the proteins and carbohydrates that comprise these particular blood group types are vitally important for red cell function, but rather because a failure to account for these types during a blood transfusion can have catastrophic consequences. The ABO blood group is sugar-based […] blood from an O person can be safely given to anyone (with no sugar antigens this person is a ‘universal’ donor). […] As all that is needed to convert A and B to O is to remove a sugar, there is commercial and medical interest in devising ways to do this […] the Rh system […] is protein-based rather than sugar based. […] Rh proteins sit in the lipid membrane of the cell and control the transport of molecules into and out of the cell, most probably carbon dioxide and ammonia. The situation is complex, with over thirty different subgroups relating to subtle differences in the protein structure.”
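
The ABO logic described above can be written as a tiny compatibility check: a recipient carries antibodies against whichever A/B sugar antigens their own red cells lack, so donor cells carrying those antigens will be attacked. This sketch ignores Rhesus and all the other blood-group systems mentioned in the excerpt.

```python
# Minimal sketch of ABO red-cell compatibility; Rh and the other ~30
# blood-group systems are deliberately ignored.
ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def red_cells_compatible(donor, recipient):
    # a recipient makes antibodies against the A/B antigens they lack
    recipient_antibodies = {"A", "B"} - ANTIGENS[recipient]
    return not (ANTIGENS[donor] & recipient_antibodies)

for donor in ANTIGENS:
    ok = [r for r in ANTIGENS if red_cells_compatible(donor, r)]
    print(f"donor {donor:2s} -> compatible recipients: {ok}")
# donor O is compatible with everyone (universal donor); recipient AB can
# accept red cells from anyone (universal recipient).
```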

“Unlike the red cells, all white cell subtypes contain nuclei. Some also contain on their surface a set of molecules called the ‘major histocompatibility complex’ (MHC). In humans, these receptors are also called ‘human leucocyte antigens’ (HLA). Their role is to recognize fragments of protein from pathogens and trigger the immune response that will ultimately destroy the invaders. Crudely, white blood cells can be divided into those that attack ‘on sight’ any foreign material — whether it be a fragment of inanimate material such as a splinter or an invading microorganism — and those that form part of a defence mechanism that recognizes specific biomolecules and marshals a slower, but equally devastating response. […] cells of the non-specific (or innate) immune system […] are divided into those that have nuclei with multiple lobed shapes (polymorphonuclear leukocytes or PMN) and those that have a single lobe nucleus ([…] ‘mononuclear leucocytes‘ or ‘MN’). PMN contain granules inside them and so are sometimes called ‘granulocytes‘.”

“Neutrophils are by far the most abundant PMN, making up over half of the total white blood cell count. The primary role of a neutrophil is to engulf a foreign object such as an invading microorganism. […] Eosinophils and basophils are the least abundant PMN cell type, each making up less than 2 per cent of white blood cells. The role of basophils is to respond to tissue injury by triggering an inflammatory response. […] When activated, basophils and mast cells degranulate, releasing molecules such as histamine, leukotrienes, and cytokines. Some of these molecules trigger an increase in blood flow causing redness and heat in the damaged site, others sensitize the area to pain. Greater permeability of the blood vessels results in plasma leaking out of the vessels and into the surrounding tissue at an increased rate, causing swelling. […] This is probably an evolutionary adaption to prevent overuse of a damaged part of the body but also helps to bring white cells and proteins to the damaged, inflamed area. […] The main function of eosinophils is to tackle invaders too large to be engulfed by neutrophils, such as the multicellular parasitic tapeworms and nematodes. […] Monocytes are a type of mononuclear leucocyte (MN) making up about 5 per cent of white blood cells. They spend even less time in the circulation than neutrophils, generally less than ten hours, but their time in the blood circulation does not end in death. Instead, they are converted into a cell called a ‘macrophage’ […] Their role is similar to the neutrophil, […] the ultimate fate of both the red blood cell and the neutrophil is to be engulfed by a macrophage. An excess of monocytes in a blood count (monocytosis) is an indicator of chronic inflammation”.

“Blood has to flow freely. Therefore, the red cells, white cells, and platelets are all suspended in a watery solution called ‘plasma’. But plasma is more than just water. In fact if it were only water all the cells would burst. Plasma has to have a very similar concentration of molecules and ions as the cells. This is because cells are permeable to water. So if the concentration of dissolved substances in the plasma was significantly higher than that in the cells, water would flow from the cells to the plasma in an attempt to equalize this gradient by diluting the plasma; this would result in cell shrinkage. Even worse, if the concentration in the plasma was lower than in the cells, water would flow into the cells from the plasma, and the resulting pressure increase would burst the cells, releasing all their contents into the plasma in the process. […] Plasma contains much more than just the ions required to prevent cells bursting or shrinking. It also contains key components designed to assist in cellular function. The protein clotting factors that are part of the coagulation cascade are always present in low concentrations […] Low levels of antibodies, produced by the lymphocytes, circulate […] In addition to antibodies, the plasma contains C-reactive proteins, Mannose-binding lectin and complement proteins that function as ‘opsonins’ […] A host of other proteins perform roles independent of oxygen delivery or immune defence. By far the most abundant protein in serum is albumin. […] Blood is the transport infrastructure for any molecule that needs to be moved around the body. Some, such as the water-soluble fuel glucose, and small hormones like insulin, dissolve freely in the plasma. Others that are less soluble hitch a ride on proteins […]. Dangerous reactive molecules, such as iron, are also bound to proteins, in this case transferrin.”
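
The tonicity argument at the start of the quote is simple enough to put into a toy function: water crosses the cell membrane toward the higher solute concentration, so plasma more concentrated than the cell interior shrinks the cell, and plasma more dilute swells or bursts it. The function name, the rough 290 mOsm/L figure for normal plasma, and the tolerance band below are my own illustrative choices; real red-cell behaviour also depends on membrane pumps and colloid osmotic pressure (see the Tonicity and Colloid osmotic pressure links further down).

```python
# Toy classifier for the osmosis argument in the quote: water moves toward
# the higher solute concentration. The ~290 mOsm/L baseline and the tolerance
# band are illustrative; pumps and colloid osmotic pressure are ignored.
def red_cell_fate(plasma_mosm, cell_mosm=290, tol=10):
    if plasma_mosm > cell_mosm + tol:
        return "hypertonic plasma: water leaves the cell -> shrinkage"
    if plasma_mosm < cell_mosm - tol:
        return "hypotonic plasma: water enters the cell -> swelling and lysis"
    return "roughly isotonic plasma: no net water movement"

print(red_cell_fate(290))  # roughly isotonic
print(red_cell_fate(150))  # hypotonic -> swelling and lysis
print(red_cell_fate(500))  # hypertonic -> shrinkage
```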

“Immunoglobulins are produced by B lymphocytes and either remain bound on the surface of the cell (as part of the B cell receptor) or circulate freely in the plasma (as antibodies). Whatever their location, their purpose is the same – to bind to and capture foreign molecules (antigens). […] To perform the twin role of binding the antigen and the phagocytosing cell, immunoglobulins need to have two distinct parts to their structure — one that recognizes the foreign antigen and one that can be recognized — and destroyed — by the host defence system. The host defence system does not vary; a specific type of immunoglobulin will be recognized by one of the relatively few types of immune cells or proteins. Therefore this part of the immunoglobulin structure is not variable. But the nature of the foreign antigen will vary greatly; so the antigen-recognizing part of the structure must be highly variable. It is this that leads to the great variety of immunoglobulins. […] within the blood there is an army of potential binding sites that can recognize and bind to almost any conceivable chemical structure. Such variety is why the body is able to adapt and kill even organisms it has never encountered before. Indeed the ability to make an immunoglobulin recognize almost any structure has resulted in antibody binding assays being used historically in diagnostic tests ranging from pregnancy to drugs testing.”

“[I]mmunoglobulins consist of two different proteins — a heavy chain and a light chain. In the human heavy chain there are about forty different V (variable) segments, twenty-five different D (Diversity) segments, and six J (Joining) segments. The light chain also contains variable V and J segments. A completed immunoglobulin has a heavy chain with only one V, D, and J segment, and a light chain with only one V and J segment. It is the shuffling of these segments during development of the mature B lymphocyte that creates the diversity required […] the hypervariable regions are particularly susceptible to mutation during development. […] A separate class of immunoglobulin-like molecules also provide the key to cell-to-cell communication in the immune system. In humans, with the exception of the egg and sperm cells, all cells that possess a nucleus also have a protein on their surface called ‘Human Leucocyte Antigen (HLA) Class I’. The function of HLA Class I is to display fragments (antigens) of all the proteins currently being made inside the cell. It therefore acts like a billboard displaying the current highlights of cellular activity. Any proteins recognized as non-self by cytotoxic T cell lymphocytes will result in the whole cell being targeted for destruction […]. Another form of HLA, Class II, is only present on the surface of specialized cells of the immune system termed antigen presenting cells. In contrast to HLA Class I, the surface of HLA Class II cells displays antigens that originate from outside of the cell.”
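
The diversity argument is easy to make concrete: with the heavy-chain segment counts given in the quote (about forty V, twenty-five D, and six J segments), segment shuffling alone yields 40 × 25 × 6 = 6,000 heavy-chain variants, and pairing each with a shuffled light chain multiplies the total further. In the sketch below the light-chain segment counts are placeholder values of my own, since the quote does not give them:

```python
# Back-of-the-envelope count of immunoglobulin diversity from segment
# shuffling alone. Heavy-chain counts are from the quote; the light-chain
# counts are hypothetical placeholders (the quote does not give them).
HEAVY_V, HEAVY_D, HEAVY_J = 40, 25, 6
LIGHT_V, LIGHT_J = 40, 5  # placeholder values, for illustration only

heavy_combinations = HEAVY_V * HEAVY_D * HEAVY_J   # 6,000
light_combinations = LIGHT_V * LIGHT_J             # 200 (illustrative)
pairings = heavy_combinations * light_combinations

print(f"heavy-chain combinations: {heavy_combinations:,}")
print(f"heavy x light pairings:   {pairings:,}")   # 1,200,000 with these numbers
# Junctional imprecision and hypervariable-region mutation (mentioned in the
# quote) inflate this combinatorial baseline much further.
```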

Galen.
Bloodletting.
Marcello Malpighi.
William Harvey. De Motu Cordis.
Andreas Vesalius. De humani corporis fabrica.
Ibn al-Nafis. Michael Servetus. Realdo Colombo. Andrea Cesalpino.
Pulmonary circulation.
Hematopoietic stem cell. Bone marrow. Erythropoietin.
Hemoglobin.
Anemia.
Peroxidase.
Lymphocytes. NK cells. Granzyme. B lymphocytes. T lymphocytes. Antibody/Immunoglobulin. Lymphoblast.
Platelet. Coagulation cascade. Fibrinogen. Fibrin. Thrombin. Haemophilia. Hirudin. Von Willebrand disease. Haemophilia A. Haemophilia B.
Tonicity. Colloid osmotic pressure.
Adaptive immune system. Vaccination. Variolation. Antiserum. Agostino Bassi. Muscardine. Louis Pasteur. Élie Metchnikoff. Paul Ehrlich.
Humoral immunity. Membrane attack complex.
Niels Kaj Jerne. David Talmage. Frank Burnet. Clonal selection theory. Peter Medawar.
Susumu Tonegawa.

June 2, 2018 Posted by | Biology, Books, Immunology, Medicine, Molecular biology | Leave a comment

Molecular biology (II)

Below I have added some more quotes and links related to the book’s coverage:

“[P]roteins are the most abundant molecules in the body except for water. […] Proteins make up half the dry weight of a cell whereas DNA and RNA make up only 3 per cent and 20 per cent respectively. […] The approximately 20,000 protein-coding genes in the human genome can, by alternative splicing, multiple translation starts, and post-translational modifications, produce over 1,000,000 different proteins, collectively called ‘the proteome‘. It is the size of the proteome and not the genome that defines the complexity of an organism. […] For simple organisms, such as viruses, all the proteins coded by their genome can be deduced from its sequence and these comprise the viral proteome. However for higher organisms the complete proteome is far larger than the genome […] For these organisms not all the proteins coded by the genome are found in any one tissue at any one time and therefore a partial proteome is usually studied. What are of interest are those proteins that are expressed in specific cell types under defined conditions.”

“Enzymes are proteins that catalyze or alter the rate of chemical reactions […] Enzymes can speed up reactions […] but they can also slow some reactions down. Proteins play a number of other critical roles. They are involved in maintaining cell shape and providing structural support to connective tissues like cartilage and bone. Specialized proteins such as actin and myosin are required [for] muscular movement. Other proteins act as ‘messengers’ relaying signals to regulate and coordinate various cell processes, e.g. the hormone insulin. Yet another class of protein is the antibodies, produced in response to foreign agents such as bacteria, fungi, and viruses.”

“Proteins are composed of amino acids. Amino acids are organic compounds with […] an amino group […] and a carboxyl group […] In addition, amino acids carry various side chains that give them their individual functions. The twenty-two amino acids found in proteins are called proteinogenic […] but other amino acids exist that are non-protein functioning. […] A peptide bond is formed between two amino acids by the removal of a water molecule. […] each individual unit in a peptide or protein is known as an amino acid residue. […] Chains of less than 50-70 amino acid residues are known as peptides or polypeptides and >50-70 as proteins, although many proteins are composed of more than one polypeptide chain. […] Proteins are macromolecules consisting of one or more strings of amino acids folded into highly specific 3D-structures. Each amino acid has a different size and carries a different side group. It is the nature of the different side groups that facilitates the correct folding of a polypeptide chain into a functional tertiary protein structure.”
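
One small consequence of the "peptide bond is formed by the removal of a water molecule" point: a chain of n residues has lost n-1 waters relative to the free amino acids, so a peptide's mass is roughly the sum of its residue masses plus one water. A minimal sketch, using approximate average residue masses for just three amino acids (the full table of twenty-two is omitted):

```python
# A peptide of n residues has lost (n - 1) waters relative to the free amino
# acids, so its mass is the sum of residue masses plus one water molecule.
# Average residue masses in daltons, approximate, and only three of the
# twenty-two proteinogenic amino acids are listed here.
RESIDUE_MASS = {"G": 57.05, "A": 71.08, "S": 87.08}
WATER = 18.02

def approx_peptide_mass(sequence):
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

print(round(approx_peptide_mass("GG"), 2))    # ~132.12 (glycylglycine)
print(round(approx_peptide_mass("GASG"), 2))  # ~290.28
```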

“Atoms scatter the waves of X-rays mainly through their electrons, thus forming secondary or reflected waves. The pattern of X-rays diffracted by the atoms in the protein can be captured on a photographic plate or an image sensor such as a charge coupled device placed behind the crystal. The pattern and relative intensity of the spots on the diffraction image are then used to calculate the arrangement of atoms in the original protein. Complex data processing is required to convert the series of 2D diffraction or scatter patterns into a 3D image of the protein. […] The continued success and significance of this technique for molecular biology is witnessed by the fact that almost 100,000 structures of biological molecules have been determined this way, of which most are proteins.”

“The number of proteins in higher organisms far exceeds the number of known coding genes. The fact that many proteins carry out multiple functions but in a regulated manner is one way a complex proteome arises without increasing the number of genes. Proteins that performed a single role in the ancestral organism have acquired extra and often disparate functions through evolution. […] The active site of an enzyme employed in catalysis is only a small part of the protein, leaving spare capacity for acquiring a second function. […] The glycolytic pathway is involved in the breakdown of sugars such as glucose to release energy. Many of the highly conserved and ancient enzymes from this pathway have developed secondary or ‘moonlighting’ functions. Proteins often change their location in the cell in order to perform a ‘second job’. […] The limited size of the genome may not be the only evolutionary pressure for proteins to moonlight. Combining two functions in one protein can have the advantage of coordinating multiple activities in a cell, enabling it to respond quickly to changes in the environment without the need for lengthy transcription and translational processes.”

“Post-translational modifications (PTMs) […] is [a] process that can modify the role of a protein by addition of chemical groups to amino acids in the peptide chain after translation. Addition of phosphate groups (phosphorylation), for example, is a common mechanism for activating or deactivating an enzyme. Other common PTMs include addition of acetyl groups (acetylation), glucose (glucosylation), or methyl groups (methylation). […] Some additions are reversible, facilitating the switching between active and inactive states, and others are irreversible such as marking a protein for destruction by ubiquitin. [The difference between reversible and irreversible modifications can be quite important in pharmacology, and if you’re curious to know more, Coleman’s drug metabolism text provides great coverage of related topics – US.] Diseases caused by malfunction of these modifications highlight the importance of PTMs. […] in diabetes [h]igh blood glucose leads to unwanted glucosylation of proteins. At the high glucose concentrations associated with diabetes, an unwanted irreversible chemical reaction binds the glucose to amino acid residues such as lysines exposed on the protein surface. The glucosylated proteins then behave badly, cross-linking themselves to the extracellular matrix. This is particularly dangerous in the kidney where it decreases function and can lead to renal failure.”

“Twenty thousand protein-coding genes make up the human genome but for any given cell only about half of these are expressed. […] Many genes get switched off during differentiation and a major mechanism for this is epigenetics. […] an epigenetic trait […] is ‘a stably heritable phenotype resulting from changes in the chromosome without alterations in the DNA sequence’. Epigenetics involves the chemical alteration of DNA by methyl or other small molecular groups to affect the accessibility of a gene by the transcription machinery […] Epigenetics can […] act on gene expression without affecting the stability of the genetic code by modifying the DNA, the histones in chromatin, or a whole chromosome. […] Epigenetic signatures are not only passed on to somatic daughter cells but they can also be transferred through the germline to the offspring. […] At first the evidence appeared circumstantial but more recent studies have provided direct proof of epigenetic changes involving gene methylation being inherited. Rodent models have provided mechanistic evidence. […] the importance of epigenetics in development is highlighted by the fact that low dietary folate, a nutrient essential for methylation, has been linked to higher risk of birth defects in the offspring.” […on the other hand, well…]

“The cell cycle is divided into phases […] Transition from G1 into S phase commits the cell to division and is therefore a very tightly controlled restriction point. Withdrawal of growth factors, insufficient nucleotides, or energy to complete DNA replication, or even a damaged template DNA, would compromise the process. Problems are therefore detected and the cell cycle halted by cell cycle inhibitors before the cell has committed to DNA duplication. […] The cell cycle inhibitors inactivate the kinases that promote transition through the phases, thus halting the cell cycle. […] The cell cycle can also be paused in S phase to allow time for DNA repairs to be carried out before cell division. The consequences of uncontrolled cell division are so catastrophic that evolution has provided complex checks and balances to maintain fidelity. The price of failure is apoptosis […] 50 to 70 billion cells die every day in a human adult by the controlled molecular process of apoptosis.”
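
The restriction-point logic described in the quote (commit to S phase only if growth factors, nucleotides, energy, and an intact DNA template are all available; otherwise halt, or trigger apoptosis if the damage cannot be repaired) can be caricatured as a simple decision function. The names and checks below are my own stand-ins for the real cyclin/CDK and inhibitor machinery (see the cell cycle links further down):

```python
# Toy version of the G1 -> S restriction point: commit to DNA replication
# only if every check passes; otherwise inhibitors hold the cell in G1, and
# unrepairable DNA damage routes to apoptosis. Names and logic are
# illustrative stand-ins for the real cyclin/CDK machinery.
def g1_s_checkpoint(growth_factors, nucleotides_ok, energy_ok, dna_intact,
                    damage_repairable=True):
    if not dna_intact and not damage_repairable:
        return "apoptosis (the price of failure)"
    if not (growth_factors and nucleotides_ok and energy_ok and dna_intact):
        return "halt in G1: inhibitors inactivate the kinases driving S-phase entry"
    return "commit to S phase (DNA replication)"

print(g1_s_checkpoint(True, True, True, True))          # commit to S phase
print(g1_s_checkpoint(True, True, True, False))         # halt, repair first
print(g1_s_checkpoint(True, True, True, False, False))  # apoptosis
```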

“There are many diseases that arise because a particular protein is either absent or a faulty protein is produced. Administering a correct version of that protein can treat these patients. The first commercially available recombinant protein to be produced for medical use was human insulin to treat diabetes mellitus. […] (FDA) approved the recombinant insulin for clinical use in 1982. Since then over 300 protein-based recombinant pharmaceuticals have been licensed by the FDA and the European Medicines Agency (EMA) […], and many more are undergoing clinical trials. Therapeutic proteins can be produced in bacterial cells but more often mammalian cells such as the Chinese hamster ovary cell line and human fibroblasts are used as these hosts are better able to produce fully functional human protein. However, using mammalian cells is extremely expensive and an alternative is to use live animals or plants. This is called molecular pharming and is an innovative way of producing large amounts of protein relatively cheaply. […] In plant pharming, tobacco, rice, maize, potato, carrots, and tomatoes have all been used to produce therapeutic proteins. […] [One] class of proteins that can be engineered using gene-cloning technology is therapeutic antibodies. […] Therapeutic antibodies are designed to be monoclonal, that is, they are engineered so that they are specific for a particular antigen to which they bind, to block the antigen’s harmful effects. […] Monoclonal antibodies are at the forefront of biological therapeutics as they are highly specific and tend not to induce major side effects.”

“In gene therapy the aim is to restore the function of a faulty gene by introducing a correct version of that gene. […] a cloned gene is transferred into the cells of a patient. Once inside the cell, the protein encoded by the gene is produced and the defect is corrected. […] there are major hurdles to be overcome for gene therapy to be effective. One is the gene construct has to be delivered to the diseased cells or tissues. This can often be difficult […] Mammalian cells […] have complex mechanisms that have evolved to prevent unwanted material such as foreign DNA getting in. Second, introduction of any genetic construct is likely to trigger the patient’s immune response, which can be fatal […] once delivered, expression of the gene product has to be sustained to be effective. One approach to delivering genes to the cells is to use genetically engineered viruses constructed so that most of the viral genome is deleted […] Once inside the cell, some viral vectors such as the retroviruses integrate into the host genome […]. This is an advantage as it provides long-lasting expression of the gene product. However, it also poses a safety risk, as there is little control over where the viral vector will insert into the patient’s genome. If the insertion occurs within a coding gene, this may inactivate gene function. If it integrates close to transcriptional start sites, where promoters and enhancer sequences are located, inappropriate gene expression can occur. This was observed in early gene therapy trials [where some patients who got this type of treatment developed cancer as a result of it. A few more details here – US] […] Adeno-associated viruses (AAVs) […] are often used in gene therapy applications as they are non-infectious, induce only a minimal immune response, and can be engineered to integrate into the host genome […] However, AAVs can only carry a small gene insert and so are limited to use with genes that are of a small size. […] An alternative delivery system to viruses is to package the DNA into liposomes that are then taken up by the cells. This is safer than using viruses as liposomes do not integrate into the host genome and are not very immunogenic. However, liposome uptake by the cells can be less efficient, resulting in lower expression of the gene.”

Links:

One gene–one enzyme hypothesis.
Molecular chaperone.
Protein turnover.
Isoelectric point.
Gel electrophoresis. Polyacrylamide.
Two-dimensional gel electrophoresis.
Mass spectrometry.
Proteomics.
Peptide mass fingerprinting.
Worldwide Protein Data Bank.
Nuclear magnetic resonance spectroscopy of proteins.
Immunoglobulins. Epitope.
Western blot.
Immunohistochemistry.
Crystallin. β-catenin.
Protein isoform.
Prion.
Gene expression. Transcriptional regulation. Chromatin. Transcription factor. Gene silencing. Histone. NF-κB. Chromatin immunoprecipitation.
The agouti mouse model.
X-inactive specific transcript (Xist).
Cell cycle. Cyclin. Cyclin-dependent kinase.
Retinoblastoma protein pRb.
Cytochrome c. Caspase. Bcl-2 family. Bcl-2-associated X protein.
Hybridoma technology. Muromonab-CD3.
Recombinant vaccines and the development of new vaccine strategies.
Knockout mouse.
Adenovirus Vectors for Gene Therapy, Vaccination and Cancer Gene Therapy.
Genetically modified food. Bacillus thuringiensis. Golden rice.


May 29, 2018 Posted by | Biology, Books, Chemistry, Diabetes, Engineering, Genetics, Immunology, Medicine, Molecular biology, Pharmacology | Leave a comment