Econstudentlog

Artificial intelligence (I?)

This book was okay, but nothing all that special. In my opinion there’s too much philosophy and similar stuff in there (‘what does intelligence really mean anyway?’), and the coverage isn’t nearly as focused on technological aspects as e.g. Winfield’s (…in my opinion better…) book from the same series on robotics (which I covered here) was; I am certain I’d have liked this book better if it’d provided a similar type of coverage to Winfield’s, but it didn’t. However, it’s far from terrible, and I liked the author’s skeptical approach to e.g. singularitarianism. Below I have added some quotes and links, as usual.

“Artificial intelligence (AI) seeks to make computers do the sorts of things that minds can do. Some of these (e.g. reasoning) are normally described as ‘intelligent’. Others (e.g. vision) aren’t. But all involve psychological skills — such as perception, association, prediction, planning, motor control — that enable humans and animals to attain their goals. Intelligence isn’t a single dimension, but a richly structured space of diverse information-processing capacities. Accordingly, AI uses many different techniques, addressing many different tasks. […] although AI needs physical machines (i.e. computers), it’s best thought of as using what computer scientists call virtual machines. A virtual machine isn’t a machine depicted in virtual reality, nor something like a simulated car engine used to train mechanics. Rather, it’s the information-processing system that the programmer has in mind when writing a program, and that people have in mind when using it. […] Virtual machines in general are comprised of patterns of activity (information processing) that exist at various levels. […] the human mind can be understood as the virtual machine – or rather, the set of mutually interacting virtual machines, running in parallel […] – that is implemented in the brain. Progress in AI requires progress in defining interesting/useful virtual machines. […] How the information is processed depends on the virtual machine involved. [There are many different approaches.] […] In brief, all the main types of AI were being thought about, and even implemented, by the late 1960s – and in some cases, much earlier than that. […] Neural networks are helpful for modelling aspects of the brain, and for doing pattern recognition and learning. Classical AI (especially when combined with statistics) can model learning too, and also planning and reasoning. Evolutionary programming throws light on biological evolution and brain development. Cellular automata and dynamical systems can be used to model development in living organisms. Some methodologies are closer to biology than to psychology, and some are closer to non-reflective behaviour than to deliberative thought. To understand the full range of mentality, all of them will be needed […]. Many AI researchers [however] don’t care about how minds work: they seek technological efficiency, not scientific understanding. […] In the 21st century, […] it has become clear that different questions require different types of answers”.

“State-of-the-art AI is a many-splendoured thing. It offers a profusion of virtual machines, doing many different kinds of information processing. There’s no key secret here, no core technique unifying the field: AI practitioners work in highly diverse areas, sharing little in terms of goals and methods. […] A host of AI applications exist, designed for countless specific tasks and used in almost every area of life, by laymen and professionals alike. Many outperform even the most expert humans. In that sense, progress has been spectacular. But the AI pioneers weren’t aiming only for specialist systems. They were also hoping for systems with general intelligence. Each human-like capacity they modelled — vision, reasoning, language, learning, and so on — would cover its entire range of challenges. Moreover, these capacities would be integrated when appropriate. Judged by those criteria, progress has been far less impressive. […] General intelligence is still a major challenge, still highly elusive. […] problems can’t always be solved merely by increasing computer power. New problem-solving methods are often needed. Moreover, even if a particular method must succeed in principle, it may need too much time and/or memory to succeed in practice. […] Efficiency is important, too: the fewer the number of computations, the better. In short, problems must be made tractable. There are several basic strategies for doing that. All were pioneered by classical symbolic AI, or GOFAI, and all are still essential today. One is to direct attention to only a part of the search space (the computer’s representation of the problem, within which the solution is assumed to be located). Another is to construct a smaller search space by making simplifying assumptions. A third is to order the search efficiently. Yet another is to construct a different search space, by representing the problem in a new way. These approaches involve heuristics, planning, mathematical simplification, and knowledge representation, respectively. […] Often, the hardest part of AI problem solving is presenting the problem to the system in the first place. […] the information (‘knowledge’) concerned must be presented to the system in a fashion that the machine can understand – in other words, that it can deal with. […] AI’s ways of doing this are highly diverse.”
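The ‘direct attention to only a part of the search space’ strategy mentioned above can be illustrated with a small toy example. The sketch below is my own illustration, not something taken from the book; the graph and the heuristic values are made up. It shows a greedy best-first search that uses a heuristic estimate to decide which nodes to expand, so that only a fraction of the full search space is ever visited:

```python
import heapq

# Toy graph: each node lists its neighbours, and each node has a heuristic
# estimate of its distance to the goal. The heuristic is what lets the search
# expand only a small part of the full search space.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["G"],
    "E": ["G"],
    "G": [],
}
heuristic = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 2, "G": 0}

def greedy_best_first(start, goal):
    # Frontier ordered by heuristic estimate only (greedy best-first search).
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            if nxt not in visited:
                heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt]))
    return None

print(greedy_best_first("A", "G"))  # ['A', 'C', 'D', 'G']
```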

“The rule-based form of knowledge representation enables programs to be built gradually, as the programmer – or perhaps an AGI system itself – learns more about the domain. A new rule can be added at any time. There’s no need to rewrite the program from scratch. However, there’s a catch. If the new rule isn’t logically consistent with the existing ones, the system won’t always do what it’s supposed to do. It may not even approximate what it’s supposed to do. When dealing with a small set of rules, such logical conflicts are easily avoided, but larger systems are less transparent. […] An alternative form of knowledge representation for concepts is semantic networks […] A semantic network links concepts by semantic relations […] semantic networks aren’t the same thing as neural networks. […] distributed neural networks represent knowledge in a very different way. There, individual concepts are represented not by a single node in a carefully defined associative net, but by the changing patterns of activity across an entire network. Such systems can tolerate conflicting evidence, so aren’t bedevilled by the problems of maintaining logical consistency […] Even a single mind involves distributed cognition, for it integrates many cognitive, motivational, and emotional subsystems […] Clearly, human-level AGI would involve distributed cognition.”
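To illustrate the incremental, rule-based style of knowledge representation described in the quote, here is a minimal forward-chaining sketch (my own toy example; the facts and rules are made up). New rules can simply be appended to the list, which is the property the quote emphasizes, and also the source of the consistency problems it mentions:

```python
# Each rule is a (set of conditions, conclusion) pair; new rules can simply be
# appended without rewriting anything else.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]
facts = {"has_feathers", "can_fly"}

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(facts, rules))
# {'has_feathers', 'can_fly', 'is_bird', 'can_migrate'} (set order may vary)
```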

“In short, most human visual achievements surpass today’s AI. Often, AI researchers aren’t clear about what questions to ask. For instance, think about folding a slippery satin dress neatly. No robot can do this (although some can be instructed, step by step, how to fold an oblong terry towel). Or consider putting on a T-shirt: the head must go in first, and not via a sleeve — but why? Such topological problems hardly feature in AI. None of this implies that human-level computer vision is impossible. But achieving it is much more difficult than most people believe. So this is a special case of the fact noted in Chapter 1: that AI has taught us that human minds are hugely richer, and more subtle, than psychologists previously imagined. Indeed, that is the main lesson to be learned from AI. […] Difficult though it is to build a high-performing AI specialist, building an AI generalist is orders of magnitude harder. (Deep learning isn’t the answer: its aficionados admit that ‘new paradigms are needed’ to combine it with complex reasoning — scholarly code for ‘we haven’t got a clue’.) That’s why most AI researchers abandoned that early hope, turning instead to multifarious narrowly defined tasks—often with spectacular success.”

“Some machine learning uses neural networks. But much relies on symbolic AI, supplemented by powerful statistical algorithms. In fact, the statistics really do the work, the GOFAI merely guiding the worker to the workplace. Accordingly, some professionals regard machine learning as computer science and/or statistics —not AI. However, there’s no clear boundary here. Machine learning has three broad types: supervised, unsupervised, and reinforcement learning. […] In supervised learning, the programmer ‘trains’ the system by defining a set of desired outcomes for a range of inputs […], and providing continual feedback about whether it has achieved them. The learning system generates hypotheses about the relevant features. Whenever it classifies incorrectly, it amends its hypothesis accordingly. […] In unsupervised learning, the user provides no desired outcomes or error messages. Learning is driven by the principle that co-occurring features engender expectations that they will co-occur in future. Unsupervised learning can be used to discover knowledge. The programmers needn’t know what patterns/clusters exist in the data: the system finds them for itself […but even though Boden does not mention this fact, caution is most definitely warranted when applying such systems/methods to data (..it remains true that “Truth and true models are not statistically identifiable from data” – as usual, the go-to reference here is Burnham & Anderson)]. Finally, reinforcement learning is driven by analogues of reward and punishment: feedback messages telling the system that what it just did was good or bad. Often, reinforcement isn’t simply binary […] Given various theories of probability, there are many different algorithms suitable for distinct types of learning and different data sets.”
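The supervised-learning loop described in the quote (generate a hypothesis, amend it whenever an item is classified incorrectly) can be illustrated with a perceptron-style learner. This is my own minimal sketch with made-up data, not an example from the book:

```python
# Four labelled training examples: two 'positive' points and two 'negative' ones.
data = [([2.0, 1.0], 1), ([1.5, 2.0], 1), ([-1.0, -0.5], -1), ([-2.0, 0.0], -1)]
weights, bias = [0.0, 0.0], 0.0

for epoch in range(10):
    for x, label in data:
        score = weights[0] * x[0] + weights[1] * x[1] + bias
        prediction = 1 if score > 0 else -1
        if prediction != label:  # feedback: the current hypothesis misclassified
            weights = [w + label * xi for w, xi in zip(weights, x)]
            bias += label        # amend the hypothesis

print(weights, bias)  # weights defining a separating line for this toy, linearly separable data
```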

“Countless AI applications use natural language processing (NLP). Most focus on the computer’s ‘understanding’ of language that is presented to it, not on its own linguistic production. That’s because NLP generation is even more difficult than NLP acceptance [I had a suspicion this might be the case before reading the book, but I didn’t know – US]. […] It’s now clear that handling fancy syntax isn’t necessary for summarizing, questioning, or translating a natural-language text. Today’s NLP relies more on brawn (computational power) than on brain (grammatical analysis). Mathematics — specifically, statistics — has overtaken logic, and machine learning (including, but not restricted to, deep learning) has displaced syntactic analysis. […] In modern-day NLP, powerful computers do statistical searches of huge collections (‘corpora’) of texts […] to find word patterns both commonplace and unexpected. […] In general […], the focus is on words and phrases, not syntax. […] Machine-matching of languages from different language groups is usually difficult. […] Human judgements of relevance are often […] much too subtle for today’s NLP. Indeed, relevance is a linguistic/conceptual version of the unforgiving ‘frame problem‘ in robotics […]. Many people would argue that it will never be wholly mastered by a non-human system.”
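A toy version of the corpus statistics described above (my own illustration; the ‘corpus’ is obviously made up): counting word bigrams finds frequent word patterns without any syntactic analysis at all.

```python
from collections import Counter

# A tiny 'corpus'; real systems use billions of words, but the principle is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # count adjacent word pairs
print(bigrams.most_common(3))
# [(('the', 'cat'), 2), ...]: frequent word patterns, found with no grammar at all
```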

“[M]any AI research groups are now addressing emotion. Most (not quite all) of this research is theoretically shallow. And most is potentially lucrative, being aimed at developing ‘computer companions’. These are AI systems — some screen-based, some ambulatory robots — designed to interact with people in ways that (besides being practically helpful) are affectively comfortable, even satisfying, for the user. Most are aimed at the elderly and/or disabled, including people with incipient dementia. Some are targeted on babies or infants. Others are interactive ‘adult toys’. […] AI systems can already recognize human emotions in various ways. Some are physiological: monitoring the person’s breathing rate and galvanic skin response. Some are verbal: noting the speaker’s speed and intonation, as well as their vocabulary. And some are visual: analysing their facial expressions. At present, all these methods are relatively crude. The user’s emotions are both easily missed and easily misinterpreted. […] [A] point here [in the development and evaluation of AI] is that emotions aren’t merely feelings. They involve functional, as well as phenomenal, consciousness […]. Specifically, they are computational mechanisms that enable us to schedule competing motives – and without which we couldn’t function. […] If we are ever to achieve AGI, emotions such as anxiety will have to be included – and used.”

[The point made in the book is better made in Aureli et al.’s book, especially the last chapters to which the coverage in the linked post refers. The point is that emotions enable us to make better decisions, or perhaps even to make a decision in the first place; the emotions we feel in specific contexts will tend not to be even remotely random; rather, they will tend to a significant extent to be Nature’s (…and Mr. Darwin’s) attempt to tell us how to handle a specific conflict of interest in the ‘best’ manner. You don’t need to do the math, your forebears did it for you, which is why you’re now …angry, worried, anxious, etc. If you had to do the math every time before you made a decision, you’d be in trouble, and emotions provide a great shortcut in many contexts. The potential for such shortcuts seems really important if you want an agent to act intelligently, regardless of whether said agent is ‘artificial’ or not. The book very briefly mentions a few of Minsky’s thoughts on these topics, and people who are curious could probably do worse than read some of his stuff. This book seems like a place to start.]

Links:

GOFAI (“Good Old-Fashioned Artificial Intelligence”).
Ada Lovelace. Charles Babbage. Alan Turing. Turing machine. Turing test. Norbert Wiener. John von Neumann. W. Ross Ashby. William Grey Walter. Oliver Selfridge. Kenneth Craik. Gregory Bateson. Frank Rosenblatt. Marvin Minsky. Seymour Papert.
A logical calculus of the ideas immanent in nervous activity (McCulloch & Pitts, 1943).
Propositional logic. Logic gate.
Arthur Samuel’s checkers player. Logic Theorist. General Problem Solver. The Homeostat. Pandemonium architecture. Perceptron. Cyc.
Fault-tolerant computer system.
Cybernetics.
Programmed Data Processor (PDP).
Artificial life.
Forward chaining. Backward chaining.
Rule-based programming. MYCIN. Dendral.
Semantic network.
Non-monotonic logic. Fuzzy logic.
Facial recognition system. Computer vision.
Bayesian statistics.
Helmholtz machine.
DQN algorithm.
AlphaGo. AlphaZero.
Human Problem Solving (Newell & Simon, 1972).
ACT-R.
NELL (Never-Ending Language Learning).
SHRDLU.
ALPAC.
Google Translate.
Data mining. Sentiment analysis. Siri. Watson (computer).
Paro (robot).
Uncanny valley.
CogAff architecture.
Connectionism.
Constraint satisfaction.
Content-addressable memory.
Graceful degradation.
Physical symbol system hypothesis.

January 10, 2019 Posted by | Biology, Books, Computer science, Engineering, Language, Mathematics, Papers, Psychology, Statistics

Books 2018

Below I have added a list of books I read in 2018, as well as some comments and observations. As usual ‘f’ = fiction, ‘m’ = miscellaneous, ‘nf’ = non-fiction; the numbers in parentheses indicate my goodreads ratings of the books (from 1-5). The post contains links to the books’ goodreads profiles as well as links to reviews and blog posts I’ve written about them.

Of the 150 books I read, 40 were non-fiction, 108 were fiction, and 2 I categorized as miscellaneous. You can see an overview of the books on goodreads here. According to the goodreads count, I read 42,069 pages during the year, or slightly more than 115 pages per day. This is significantly less than the corresponding count for 2017, where I read ~125 pages/day. The average page count of the books I read was 280 pages, with a minimum page count of 120 pages and a maximum page count of 934.

2018 was the year where I finished Patrick O’Brian’s Aubrey & Maturin series, as I read the last 17 of the novels in that series. Other fiction authors I’ve read include Tom Holt (33 books), Erle Stanley Gardner (19), P. G. Wodehouse (18) (I’d read all of the Wodehouse books before, but I see no reason not to include them in this list/count despite this fact), and Ngaio Marsh (14). I don’t actually think either Marsh’s or Gardner’s books are all that good, a fact the ratings of the books below should also indicate, but I don’t like spending a lot of time looking for new books to read and new authors to try out, and given that, they were sort of ‘good enough at the time’; however, I can’t really recommend these authors. O’Brian’s a different matter, as is Wodehouse – and the good Tom Holt books are actually very decent, even if he’s also written a few which were certainly nothing special.

I have been less active on the blog this year than in recent years, but I did post a total of 83 book-related posts during 2018 (one every 4.4 days on average). As usual not all book-related posts published on the blog this year are included in/linked from this post; the posts in which I have included lists of new/interesting words I’ve encountered while reading mainly fiction books are not included, and neither are the posts containing quotes and aphorisms from books I read – I decided some time ago that it’s too much work trying to link that kind of stuff up here as well; people who are interested can check out the relevant categories themselves if they feel like it.

I may update this post later on with more detailed information about my non-fiction reading during the year and/or perhaps some information about the books I did not manage to finish during the year.

1. Complexity: A Very Short Introduction (nf. Oxford University Press). Blog coverage here.

2. Rivers: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here. Blog coverage here and here.

3. Something for the Pain: Compassion and Burnout in the ER (2, m. W. W. Norton & Company/Paul Austin).

4. Mountains: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here.

5. Water: A Very Short Introduction (4, nf. Oxford University Press). Goodreads review here.

6. Assassin’s Quest (3, f). Robin Hobb. Goodreads review here.

7. Oxford Handbook of Endocrinology and Diabetes (3rd edition) (5, nf. Oxford University Press). Goodreads review here. Blog coverage here, here, here, here, here, and here. I added this book to my list of favourite books on goodreads. Some of the specific chapters included are ‘book-equivalents’; this book is very long and takes a lot of work.

8. Desolation Island (Aubrey & Maturin #5) (3, f). Patrick O’Brian.

9. The Fortune of War (Aubrey & Maturin #6) (4, f). Patrick O’Brian.

10. Lakes: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

11. The Surgeon’s Mate (Aubrey & Maturin #7) (4, f). Patrick O’Brian. Short goodreads review here.

12. Domestication of Plants in the Old World: The Origin and Spread of Domesticated Plants in South-West Asia, Europe, and the Mediterranean Basin (5, nf. Oxford University Press). Goodreads review here. I added this book to my list of favourite books on goodreads.

13. The Ionian Mission (Aubrey & Maturin #8) (4, f). Patrick O’Brian.

14. Systems Biology: Functional Strategies of Living Organisms (4, nf. Springer). Blog coverage here, here, and here.

15. Treason’s Harbour (Aubrey & Maturin #9) (4, f). Patrick O’Brian.

16. Peripheral Neuropathy – A New Insight into the Mechanism, Evaluation and Management of a Complex Disorder (3, nf. InTech). Blog coverage here and here.

17. The portable door (5, f). Tom Holt. Goodreads review here.

18. Prevention of Late-Life Depression: Current Clinical Challenges and Priorities (2, nf. Humana Press). Blog coverage here and here.

19. In your dreams (4, f). Tom Holt.

20. Earth, Air, Fire and Custard (3, f). Tom Holt. Short goodreads review here.

21. You Don’t Have to Be Evil to Work Here, But it Helps (3, f). Tom Holt.

22. The Ice Age: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

23. The Better Mousetrap (4, f). Tom Holt.

24. May Contain Traces of Magic (2, f). Tom Holt.

25. Expecting Someone Taller (4, f). Tom Holt.

26. The Computer: A Very Short Introduction (2, nf. Oxford University Press). Short goodreads review here. Blog coverage here.

27. Who’s Afraid of Beowulf? (5, f). Tom Holt.

28. Flying Dutch (4, f). Tom Holt.

29. Ye Gods! (2, f). Tom Holt.

30. Marine Biology: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here and here.

31. Here Comes The Sun (2, f). Tom Holt.

32. Grailblazers (4, f). Tom Holt.

33. Oceans: A Very Short Introduction (2, nf. Oxford University Press). Very short goodreads review here. Blog coverage here and here.

34. Oxford Handbook of Medical Statistics (2, nf. Oxford University Press). Long, takes some work. Goodreads review here. Blog coverage here, here, and here.

35. Faust Among Equals (3, f). Tom Holt.

36. My Hero (3, f). Tom Holt. Short goodreads review here.

37. Odds and Gods (3, f). Tom Holt.

38. Networks: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

39. Paint Your Dragon (2, f). Tom Holt. Very short goodreads review here.

40. Wish You Were Here (2, f). Tom Holt.

41. Djinn Rummy (2, f). Tom Holt.

42. Structural Engineering: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

43. Open Sesame (3, f). Tom Holt.

44. The Far Side of the World (Aubrey & Maturin #10) (4, f). Patrick O’Brian.

45. 100 Cases in Surgery (3, nf. CRC Press). Blog coverage here and here.

46. The Reverse of the Medal (Aubrey & Maturin #11) (3, f). Patrick O’Brian.

47. 100 Cases in Emergency Medicine and Critical Care (4, nf. CRC Press). Blog coverage here and here.

48. The Letter of Marque (Aubrey & Maturin #12) (4, f). Patrick O’Brian.

49. Molecular Biology: A Very Short Introduction (5, nf. Oxford University Press). Short goodreads review here. Blog coverage here, here, and here.

50. Frozen Assets (4, f). P. G. Wodehouse.

51. Galahad at Blandings (5, f). P. G. Wodehouse.

52. Spring Fever (4, f). P. G. Wodehouse.

53. Intermediate Excel (Excel Essentials, #2) (nf., M.L. Humphrey). Goodreads review here.

54. A Gentleman of Leisure (3, f). P. G. Wodehouse.

55. Bachelors Anonymous (5, f). P. G. Wodehouse.

56. Money in the Bank (5, f). P. G. Wodehouse.

57. Company for Henry (4, f). P. G. Wodehouse.

58. The Old Reliable (4, f). P. G. Wodehouse.

59. The Thirteen-Gun Salute (Aubrey & Maturin #13) (4, f). Patrick O’Brian.

60. Alcohol and Aging: Clinical and Public Health Perspectives (3, nf. Springer). Blog coverage here and here.

61. Ice in the Bedroom (5, f). P. G. Wodehouse.

62. Enter a Murderer (3, f). Ngaio Marsh.

63. The Nursing Home Murder (2, f). Ngaio Marsh. Very short goodreads review here.

64. Death in Ecstasy (2, f). Ngaio Marsh.

65. Vintage Murder (3, f). Ngaio Marsh.

66. Blood: A Very Short Introduction (3, nf. Oxford University Press). Short goodreads review here. Blog coverage here and here.

67. Artists in Crime (3, f). Ngaio Marsh.

68. Developmental Biology: A Very Short Introduction (5, nf. Oxford University Press). Very short goodreads review here. Blog coverage here and here.

69. Death in a White Tie (2, f). Ngaio Marsh.

70. Robotics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

71. Overture to death (3, f). Ngaio Marsh. Very short goodreads review here.

72. Death at the bar (3, f). Ngaio Marsh.

73. 100 Cases in Orthopaedics and Rheumatology (2, nf. CRC Press). Blog coverage here and here.

74. A Surfeit of Lampreys (4, f). Ngaio Marsh.

75. Managing Gastrointestinal Complications of Diabetes (4, nf. Adis (/Springer)). Blog coverage here and here.

76. Colour Scheme (2, f). Ngaio Marsh.

77. American Naval History: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here and here.

78. The Nutmeg of Consolation (Aubrey & Maturin #14) (4, f). Patrick O’Brian.

79. Blonde Bombshell (4, f). Tom Holt.

80. Big Data: A Very Short Introduction (2, nf. Oxford University Press). Blog coverage here.

81. Little People (2, f). Tom Holt.

82. Life, Liberty, and the Pursuit of Sausages (4, f). Tom Holt.

83. Pocket Oncology (2, nf. Wolters Kluwer Health). Goodreads review here. Dense, very informative, takes a lot of work to read from cover to cover. There were very specific reasons why I did not give this book a much higher rating, see the goodreads review for details. Blog coverage here and here.

84. The Girl in Blue (4, f). P. G. Wodehouse.

85. Service With a Smile (5, f). P. G. Wodehouse.

86. Military Strategy: A Very Short Introduction (3, nf. Oxford University Press).

87. Died in the Wool (2, f). Ngaio Marsh.

88. When It’s A Jar (3, f). Tom Holt.

89. The Truelove (Aubrey & Maturin #15) (3, f). Patrick O’Brian.

90. Combinatorics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

91. The Wine-Dark Sea (Aubrey & Maturin #16) (3, f). Patrick O’Brian.

92. Jeeves in the Offing (4, f). P. G. Wodehouse.

93. Blandings Castle … and Elsewhere (Blandings Castle #3) (4, f). P. G. Wodehouse.

94. Something Fresh (5, f). P. G. Wodehouse.

95. Psmith, Journalist (4, f). P. G. Wodehouse.

96. Personal Relationships: The Effect on Employee Attitudes, Behavior, and Well-being (SIOP Organizational Frontiers Series) (2, nf. Routledge). Long. Blog coverage here, here, and here.

97. Final Curtain (2, f). Ngaio Marsh.

98. Circadian Rhythms: A Very Short Introduction (4, nf. Oxford University Press). Short goodreads review here. Blog coverage here and here.

99. A Wreath for Rivera (2, f). Ngaio Marsh. Very short goodreads review here.

100. The Commodore (Aubrey & Maturin #17) (3, f). Patrick O’Brian.

101. Quick Service (4, f). P. G. Wodehouse.

102. The Yellow Admiral (Aubrey & Maturin #18) (3, f). Patrick O’Brian.

103. French Leave (4, f). P. G. Wodehouse.

104. Jingo (5, f). Terry Pratchett. I added this book to my list of favourite books on goodreads.

105. Principles of Memory (2, nf. Psychology Press). Blog coverage here, here, and here.

106. Spring Fever (4, f). P. G. Wodehouse.

107. The Hundred Days (Aubrey & Maturin #19) (2, f). Patrick O’Brian.

108. Night at the Vulcan (2, f). Ngaio Marsh.

109. The Overcoat and Other Short Stories (3, f). Nikolai Gogol.

110. Feet of Clay (4, f). Terry Pratchett.

111. Blue at the Mizzen (Aubrey & Maturin #20) (4, f). Patrick O’Brian. Very short goodreads review here.

112. The Case of the Velvet Claws (Perry Mason #1) (3, f). Erle Stanley Gardner.

113. Perception: A Very Short Introduction (3, nf. Oxford University Press). Short goodreads review here. Blog coverage here.

114. The Case of the Sulky Girl (Perry Mason #2) (3, f). Erle Stanley Gardner.

115. The Case of the Howling Dog (2, f). Erle Stanley Gardner.

116. 21: The Final Unfinished Voyage of Jack Aubrey (Aubrey & Maturin #21) (2, f). Patrick O’Brian. Short goodreads review here.

117. The Case of the Curious Bride (2, f). Erle Stanley Gardner.

118. Sleep: A Very Short Introduction (2, nf. Oxford University Press).

119. The Case of the Counterfeit Eye (2, f). Erle Stanley Gardner.

120. The Case of the Caretaker’s Cat (3, f). Erle Stanley Gardner.

121. The Case of the Sleepwalker’s Niece (2, f). Erle Stanley Gardner.

122. The Case of the Dangerous Dowager (3, f). Erle Stanley Gardner.

123. The Case of the Substitute Face (2, f). Erle Stanley Gardner.

124. Geophysics: A Very Short Introduction (5, nf. Oxford University Press). Very short goodreads review here. Blog coverage here and here.

125. The Case of the Lame Canary (2, f). Erle Stanley Gardner.

126. The Case of the Rolling Bones (3, f). Erle Stanley Gardner.

127. The Case of the Silent Partner (2, f). Erle Stanley Gardner. Goodreads review here.

128. The Case of the Haunted Husband (2, f). Erle Stanley Gardner.

129. Reaper Man (5, f). Terry Pratchett.

130. The Case of the Empty Tin (2, f). Erle Stanley Gardner.

131. The Case of the Drowning Duck (2, f). Erle Stanley Gardner.

132. Early Riser (5, f). Jasper Fforde.

133. Bacteria: A Very Short Introduction (3, nf. Oxford University Press).

134. The Outsorcerer’s Apprentice (5, f). Tom Holt. Goodreads review here.

135. Doughnut (3, f). Tom Holt.

136. The Case of the Buried Clock (1, f). Erle Stanley Gardner.

137. The Good, the Bad and the Smug (4, f). Tom Holt.

138. The Management Style of the Supreme Beings (3, f). Tom Holt.

139. Nothing But Blue Skies (2, f). Tom Holt.

140. Leaving Las Vegas (5, f). John O’Brien. Short goodreads review here.

141. Overtime (2, f). Tom Holt.

142. Barking (2, f). Tom Holt.

143. Lucia in Wartime (1, f). Tom Holt. Short goodreads review here.

144. Falling Sideways (2, f). Tom Holt. Very short goodreads review here.

145. Snow White and the Seven Samurai (4, f). Tom Holt.

146. The Case of the Lonely Heiress (3, f). Erle Stanley Gardner.

147. The Case of the Crooked Candle (2, f). Erle Stanley Gardner.

148. The Case of the Gilded Lily (2, f). Erle Stanley Gardner.

149. The Science of Discworld (2, m.). Terry Pratchett, Ian Stewart & Jack Cohen. Goodreads review here.

150. Artificial Intelligence: A Very Short Introduction (2, nf. Oxford University Press).

January 3, 2019 Posted by | Books, Personal

Kinematics of Circumgalactic Gas – Crystal Martin

 

A few links related to the lecture coverage:

The green valley is a red herring: Galaxy Zoo reveals two evolutionary pathways towards quenching of star formation in early- and late-type galaxies (Schawinski et al, 2014).
The Large, Oxygen-Rich Halos of Star-Forming Galaxies Are A Major Reservoir of Galactic Metals (Tumlinson et al, 2011).
Gas in galactic halos (Dettmar, 2012).
Gaseous Galaxy Halos (Putman, Peek & Joung, 2012).
The kinematic connection between QSO-absorbing gas and galaxies at intermediate redshift (Steidel et al. 2002).
W. M. Keck Observatory.
Sloan Digital Sky Survey.
Virial mass.
Kinematics of Circumgalactic Gas (the lecturer is a co-author of this presentation).
Kinematics of Circumgalactic Gas: Quasars Probing the Inner CGM of z=0.2 Galaxies (-ll-). Here’s the paper: Quasars Probing Galaxies. I. Signatures of Gas Accretion at Redshift z ≈ 0.2 (Ho, Martin, Kacprzak & Churchill, 2017).
MAGIICAT III. Interpreting Self-Similarity of the Circumgalactic Medium with Virial Mass using MgII Absorption (Nielsen et al, 2013).
Fiducial marker.
Gas kinematics, morphology and angular momentum in the FIRE simulations (El-Badry et al, 2018).

December 22, 2018 Posted by | Astronomy, Lectures, Physics

Geophysics (II)

In this post I have added some observations from, and links related to, the last half of the book’s coverage.

“It is often […] useful to describe a force in terms of the acceleration it produces. Acceleration is the rate of change of velocity; however, when a force acts on a body with a given mass, acceleration is also the force experienced by each unit of mass. For example, a 100 kg man weighs ten times more than a 10 kg child but each experiences the same gravitational acceleration, which is a property of the Earth. The gravitational and centrifugal accelerations have different directions: gravitational acceleration acts inwards towards the Earth’s centre, whereas centrifugal acceleration acts outwards away from the rotation axis. Gravity is the acceleration that results from combining these two accelerations. The direction of gravity defines the local vertical direction […] and thereby the horizontal plane. Due to the different directions of its component accelerations, gravity rarely acts radially towards the centre of the Earth; it only does so at the poles and at the equator. For similar reasons the value of gravity varies with latitude. […] The end result is that gravity is about 0.5 per cent stronger at the poles than at the equator. […] Using the measured values of gravity and the Earth’s radius, in conjunction with the gravitational constant, the mass and volume of the Earth can be obtained. Combining these gives a mean density for the Earth of 5,515 kg/m3. The average density of surface rocks is only half of this value, which implies that density must increase with depth in the Earth. This was an important discovery for scientists concerned with the size and shape of the Earth in the 18th and early 19th centuries. The variation of density with depth in the layered Earth […] was later established from the interpretation of P- and S-wave seismic velocities and the analysis of free oscillations.”
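The mass/density calculation sketched in the quote is easy to reproduce. The figures below (surface gravity, mean radius, gravitational constant) are standard textbook values I have supplied myself, not numbers taken from the book:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # mean surface gravity, m/s^2
R = 6.371e6     # mean Earth radius, m

mass = g * R**2 / G               # from g = G * M / R^2
volume = 4 / 3 * math.pi * R**3
density = mass / volume
print(f"mass ~ {mass:.3e} kg, mean density ~ {density:.0f} kg/m^3")
# mass ~ 5.966e+24 kg, mean density ~ 5508 kg/m^3 (the book quotes 5,515 kg/m^3)
```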

“The Moon’s influence on the Earth’s rotation is stronger than that of the Sun or the other planets in the solar system. The centre of mass of the Earth–Moon pair, called the barycentre, lies at about 4,600 km from the Earth’s centre — well within the Earth’s radius of 6,371 km. The Earth and Moon rotate about this point […]. The elliptical orbit of the Earth about the Sun is in reality the track followed by the barycentre. The rotation of the Earth–Moon pair about their barycentre causes a centrifugal acceleration in the Earth that is directed away from the Moon. The lunar gravitational attraction opposes this and the combined effect is to deform the equipotential surface of the tide and draw it out into the shape of a prolate ellipsoid, resembling a rugby ball. Consequently there is a tidal bulge on the far side of the Earth from the Moon, complementary to the tidal bulge on the near side. The bulges are unequal in size. Each day the Earth rotates under both tidal bulges, so that two unequal tides are experienced; they are resolved into a daily (diurnal) tide and a twice-daily (semi-diurnal) tide. Although we think of the tides as a fluctuation of sea level, they also take place in the solid planet, where they are known as bodily earth tides. These are manifest as displacements of the solid surface by up to 38 cm vertically and 5 cm horizontally. The Sun also contributes to the tides, creating semi-annual and annual components. […] The displacements of fluid and solid mass have a braking effect on the Earth’s rotation, slowing it down and gradually increasing the length of the day, currently at about 1.8 milliseconds per century. […] The reciprocal effect of the Earth’s gravitation on the Moon has slowed lunar rotation about its own axis to the extent that the Moon’s spin now has the same period as its rotation about the Earth. That is why it always presents the same face to us. Conservation of angular momentum results in a transfer of angular momentum from the Earth to the Moon, which is accomplished by an increase in the Earth–Moon distance of about 3.7 cm/yr (roughly the rate at which fingernails grow), and by a slowing of the Moon’s rotation rates about its own axis and about the Earth. In time, all three rotations will be synchronous, with a period of 48 present Earth-days. The Moon will then be stationary over the Earth and both bodies will present the same face to each other.”
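The quoted barycentre figure can be checked with a one-line centre-of-mass calculation. The masses and the mean Earth–Moon distance below are standard values I have supplied myself, not figures from the book:

```python
m_earth = 5.972e24   # kg
m_moon = 7.346e22    # kg
a = 384_400e3        # mean Earth-Moon distance, m

# The barycentre lies m_moon / (m_earth + m_moon) of the way from the Earth's
# centre towards the Moon.
d_barycentre = a * m_moon / (m_earth + m_moon)
print(f"{d_barycentre / 1e3:.0f} km")  # ~4670 km, i.e. well inside the 6,371 km Earth radius
```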

“[I]sostatic compensation causes the crust to move vertically to seek a new hydrostatic equilibrium in response to changes in the load on the crust. Thus, when erosion removes surface material or when an ice-cap melts, the isostatic response is uplift of the mountain. Examples of this uplift are found in northern Canada and Fennoscandia, which were covered by a 1–2 kilometre-thick ice sheet during the last ice age; the surface load depressed the crust in these regions by up to 500 m. The ice age ended about 10,000 years ago, and subsequent postglacial isostatic adjustment has resulted in vertical crustal movements. The land uplift was initially faster than it is today, but it continues at rates of up to 9 mm/yr in Scandinavia and Finland […]. The phenomenon has been observed for decades by repeated high-precision levelling campaigns. […] the increase of temperature with depth results in anelastic behaviour of the deeper lithosphere. This is the same kind of behaviour that causes attenuation of seismic waves in the Earth […] A specific type of anelastic behaviour is viscoelasticity. In this mechanism a material responds to short-duration stresses in the same way that an elastic body does, but over very long time intervals it flows like a sticky viscous fluid. The flow of otherwise solid material in the mantle is understood to be a viscoelastic process. This type of behaviour has been invoked to explain the response of the upper mantle to the loading of northern Canada and Fennoscandia by the ice sheets. In each region the weight of an ice sheet depressed the central area, forcing it down into the mantle. The displaced mantle caused the surrounding land to bulge upward slightly, as a jelly does around a point where it is pressed down. As a result of postglacial relaxation the opposite motion is now happening: the peripheral bulge is sinking while the central region is being uplifted.”

“The molecules of an object are in constant motion and the energy of this motion is called kinetic energy. Temperature is a measure of the average kinetic energy of the molecules in a given volume. […] The total energy of motion of all the molecules in a volume is its internal energy. When two objects with different temperatures are in contact, they exchange internal energy until they have the same temperature. The energy transferred is the amount of heat exchanged. Thus, if heat is added to an object, its kinetic energy is increased, the motions of individual atoms and molecules speed up, and its temperature rises. Heat is a form of energy and is therefore measured in the standard energy unit, the joule. The expenditure of one joule per second defines a watt, the unit of power. […] The amount of geothermal heat flowing per second across a unit of surface area of the Earth is called the geothermal flux, or more simply the heat flow. It is measured in mW/m2. The Earth’s internal heat is its greatest source of energy. It powers global geological processes such as plate tectonics and the generation of the geomagnetic field. The annual amount of heat flowing out of the Earth is more than 100 times greater than the elastic energy released in earthquakes and ten times greater than the loss of kinetic energy as the planet’s rotation slows due to tidal friction. Although the solar radiation that falls on the Earth is a much larger source of energy, it is important mainly for its effect on natural processes at or above the Earth’s surface. The atmosphere and clouds reflect or absorb about 45 per cent of solar radiation, and the land and ocean surfaces reflect a further 5 per cent and absorb 50 per cent. Almost all of the energy absorbed at the surface and in the clouds and atmosphere is radiated back into space. The solar energy that reaches the surface penetrates only a short distance into the ground, because water and rocks are poor conductors of heat. […] The daily temperature fluctuation in rocks and sediments sinks to less than 1 per cent of its surface amplitude in a depth of only 1 metre. The annual seasonal change of temperature penetrates some nineteen times deeper, but its effects are barely felt below 20 m.”

“The [Earth’s] internal heat arises from two sources. Part is produced at the present time by radioactivity in crustal rocks and in the mantle, and part is primordial. […] The internal heat has to find its way out of the Earth. The three basic forms of heat transfer are radiation, conduction, and convection. Heat is also transferred in compositional and phase transitions. […] Heat is transported throughout the interior by conduction, and convection plays an important role in the mantle and fluid outer core. […] Heat transport by conduction is most important in solid regions of the Earth. Thermal conduction takes place by transferring energy in the vibrations of atoms, or in collisions between molecules, without bodily displacement of the material. The flow of heat through a material by conduction depends on two quantities: the rate at which temperature increases with depth (the temperature gradient), and the material’s ability to conduct heat, a physical property known as thermal conductivity. The product of the temperature gradient and the thermal conductivity defines the heat flow. […] Heat flow varies greatly over the Earth’s surface depending on the local geology and tectonic situation. The estimated average heat flow is 92 mW/m2. Multiplying this value by the Earth’s surface area, which is about 510 million km2, gives a global heat loss of about 47,000 GW […]. For comparison, the energy production of a large nuclear power plant is about 1 GW.”
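Both computations in the quote (heat flow as conductivity times temperature gradient, and global heat loss as mean flux times surface area) are simple enough to write out. The conductivity and gradient below are illustrative values of my own choosing; the 92 mW/m2 and 510 million km2 figures are from the quote:

```python
# Local heat flow = thermal conductivity * temperature gradient.
k = 2.5        # W m^-1 K^-1, a typical conductivity for crustal rock (illustrative)
dT_dz = 0.03   # K/m, i.e. about 30 K per km (illustrative)
q = k * dT_dz
print(f"local heat flow ~ {q * 1e3:.0f} mW/m^2")           # ~75 mW/m^2

# Global heat loss = mean heat flow * surface area (figures from the quote).
q_mean = 92e-3          # W/m^2
area = 510e6 * 1e6      # 510 million km^2, converted to m^2
print(f"global heat loss ~ {q_mean * area / 1e9:.0f} GW")  # ~46,920 GW, i.e. roughly the 47,000 GW in the quote
```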

An adiabatic thermal process is one in which heat is neither gained nor lost. This can be the case when a process occurs too quickly to allow heat to be exchanged, as in the rapid compressions and expansions during the passage of a seismic wave. The variation of temperature with depth under adiabatic conditions defines the adiabatic temperature gradient. […] Consider what would happen in a fluid if the temperature increases with depth more rapidly than the adiabatic gradient. If a small parcel of material at a particular depth is moved upward adiabatically to a shallower depth, it experiences a drop in pressure corresponding to the depth difference and a corresponding adiabatic decrease in temperature. However, the decrease is not as large as required by the real temperature gradient, so the adiabatically displaced parcel is now hotter and less dense than its environment. It experiences a buoyant uplift and continues to rise, losing heat and increasing in density until it is in equilibrium with its surroundings. Meanwhile, cooler material adjacent to its original depth fills the vacated place, closing the cycle. This process of heat transport, in which material and heat are transported together, is thermal convection. Eventually the loss of heat by convection brings the real temperature gradient close to the adiabatic gradient. Consequently, a well-mixed, convecting fluid has a temperature profile close to the adiabatic curve. Convection is the principal method of heat transport in the Earth’s fluid outer core. Convection is also an important process of heat transport in the mantle. […] Mantle convection plays a crucial role in the cooling history and evolution of the planet.”
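The book does not give the formula, but the adiabatic gradient discussed above is commonly written as dT/dz = αgT/cp (thermal expansion coefficient times gravity times temperature, divided by specific heat). The mantle-like values below are illustrative numbers of my own choosing, not figures from the book:

```python
alpha = 3e-5   # thermal expansion coefficient, 1/K
g = 9.8        # gravitational acceleration, m/s^2
T = 2000.0     # temperature, K
c_p = 1200.0   # specific heat capacity, J kg^-1 K^-1

dT_dz = alpha * g * T / c_p            # adiabatic temperature gradient, K/m
print(f"adiabatic gradient ~ {dT_dz * 1e3:.2f} K/km")   # ~0.49 K/km
```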

“It is important to appreciate the timescale on which flow occurs in the mantle. The rate is quite different from the familiar flow of a sticky liquid, such as blood or motor oil […]. The mantle is vastly stiffer. Estimates of viscosity for the lower mantle are around 1022 Pa·s (pascal seconds), which is 1025 times that of water. This is an enormous factor (similar to the ratio of the mass of the entire Earth to a kilogram mass). The viscosity varies within the mantle, with the upper mantle about 20 times less viscous than the lower mantle. Flow takes place in the mantle by the migration of defects through the otherwise solid material. This is a slow process that produces flow rates on the order of centimetres per year. However, geological processes occur on a very long timescale, spanning tens or hundreds of millions of years. This allows convection to be an important factor in the transport of heat through the mantle.”

“The Sun has a strong magnetic field, greatly exceeding that of any planet. It arises from convection in the solar core and is sufficiently irregular that it produces regions of lower than normal temperature on the Sun’s surface, called sunspots. These affect the release of charged particles (electrons, protons, and alpha particles) from the Sun’s atmosphere. The particles are not bound to each other, but form a plasma that spreads out at supersonic speed. The flow of electric charge is called the solar wind; it is accompanied by a magnetic field known as the interplanetary magnetic field. The solar emissions are variable, controlled by changes in the Sun’s magnetic field. […] The magnetic field of a planet deflects the solar wind around it. This blocks the influx of solar radiation and prevents the atmosphere from being blown away […] Around the Earth (as well as the giant planets and Mercury) the region in which the planet’s magnetic field is stronger than the interplanetary field is called the magnetosphere; its shape resembles the bow-wave and wake of a moving ship. […] It compresses the field on the daytime side of the Earth, forming a bow shock, about 17 km thick, which deflects most of the solar wind around the planet. However, some of the plasma penetrates the barrier and forms a region called the magnetosheath; the boundary between the plasma and the magnetic field is called the magnetopause. The solar wind causes the magnetic field on the night-time side of the Earth to stretch out to form a magnetotail […] that extends several million kilometres ‘downwind’ from the Earth. Similar features characterize the magnetic fields of other planets. […] Rotation and the related Coriolis force, together with convection, are necessary factors for a self-sustaining dynamo”.

Links:
Gravity. Inertial force. Centrifugal force. Centripetal force.
Gravimeter. Gal (unit).
Reference ellipsoid. Undulation of the geoid. Satellite geodesy. Interferometric synthetic-aperture radar. Global Positioning System. Galileo. GLONASS. Differential GPS.
Gravity Recovery and Climate Experiment (GRACE). Gravity Field and Steady-State Ocean Circulation Explorer (GOCE).
Gradiometer.
Gravity surveying. Bouguer anomaly. Free-air gravity anomaly. Eötvös effect.
Isostasy.
Craton.
Solidus.
Diamond anvil cell.
Mantle plume.
Hotspot (geology).
Magnetism.
Earth’s magnetic field.
International Geomagnetic Reference Field.
Telluric current. Magnetotellurics.
SWARM mission.
Ferromagnetism. Curie point.
Paleomagnetism. Plate tectonics. Vine–Matthews–Morley hypothesis.
Geomagnetic reversal (“During the past 10 Myr there have been on average 4-5 reversals per Myr; the most recent full reversal happened 780,000 yr ago.”).
Magnetostratigraphy.

November 24, 2018 Posted by | Astronomy, Books, Geology, Physics

Words

Many of the words below are words which I encountered while reading the books Reaper Man, Enter a Murderer, The Case of the Velvet Claws, The Case of the Sulky Girl, The Case of the Curious Bride, and The Thirteen Gun Salute.

Sodality. Triturate. Aboral. Cloture. Abbess. Cortege. Ideograph. Tarn. Tranche. Dexter and sinister. Prolegomenon. Animalier. Scumble. Alembic. Toxophily/toxophilite. Knurl. Sparge. Stook. Susurrous. Calcination.

Pizzicato. Valance. Ineffable. Bunnia. Hitch. Contrabandist. Recalcitrant. Admonish. Codling. Countenance. Fid. Kittiwake. Marline. Colcannon. Soffit. Spirket. Gradus. Bate. Supersession. Furlong.

Palmary. Banian. Boustrophedon. Gridiron. Sinew. Garstrake. Gumma. Hygrometer. Premonitory. Binturong. Proa. Turmeric. Gamelan. Feudatory. Clepsydra. Colophony/rosin. Shipwright. Benight. Gaur. Banteng.

Subjacent. Superjacent. Scull. Isopod. Tierer. Castrametation. Dictograph. Administratrix. Commingle. Negligee. Shyster. Cuspidor. Sanitarium. Repudiate. Res gestae. Corpus delicti. Pothook. Carouse. Withal. Probative.

November 17, 2018 Posted by | Books, Language

Principles of memory (III)

I have added a few observations from the last part of the book below. This is a short post, but I was getting tired of the lack of (front-page) updates – I do apologize to the (…fortunately very-) few people who might care about this for the infrequent updates lately, I hope to have more energy for blogging in the weeks to come.

The relative distinctiveness principle states that performance on memory tasks depends on how much an item stands out “relative to the background of alternatives from which it must be distinguished” at the time of retrieval […]. It is important to keep in mind that one of the determinants of whether an item is more or less distinct at the time of retrieval, relative to other items, is the relation between the conditions at encoding and those at retrieval (the encoding retrieval principle). […] There has been a long history of distinctiveness as an important concept and research topic in memory, with numerous early studies examining both “distinctiveness” and “vividness” […]. Perhaps the most well-known example of distinctiveness is the isolation or von Restorff effect […] In the typical isolation experiment, there are two types of lists. The control list features words presented (for example) in black against a white background. The experimental list is identical to the control list in all respects except that one of the items is made to stand out in some fashion: it could be shown in green instead of black, in a larger font size, with a large red frame around it, the word trout could appear in a list in which all the other items are names of different vegetables, or any number of similar manipulations. The much-replicated result is that the unique item (the isolate) is remembered better than the item in the same position in the control list. […] The von Restorff effect is reliably obtained with many kinds of tests, including delayed free recall, serial recall, serial learning, recognition, and many others. It is also readily observable with nonverbal stimuli”.

There have been many mathematical and computational models based on the idea of distinctiveness (see Neath & Brown, 2007, for a review). Here, we focus on one particular model that includes elements from many of the earlier models […]. Brown, Neath, and Chater (2007 […]) [Here’s a link to the paper, US] proposed a model called SIMPLE, which stands for Scale Independent Memory and Perceptual Learning. The model is scale independent in the sense that it applies equally to short-term/working memory as to long-term memory: the time scale is irrelevant to the operation of the model. The basic idea is that memory is best conceived as a discrimination task. Items are represented as occupying positions along one or more dimensions and, in general, those items with fewer close neighbors on the relevant dimensions at the time of retrieval will be more likely to be recalled than items with more close neighbors. […] According to SIMPLE, […] not only is the isolation effect due to relative distinctiveness, but the general shape of the serial position curve is due to relative distinctiveness. In general, free and serial recall produce a function in which the first few items are well-recalled (the primacy effect) and the last few items are well recalled (the recency effect), but items in the middle are recalled less well. The magnitude of primacy and recency effects can be affected by many different manipulations, and depending on the experimental design, one can observe almost all possibilities, from almost all primacy to almost all recency. The thing that all have in common, however, is that the experimental manipulation has caused some items to be more distinct at the time of retrieval than other items. […] It has been claimed that distinctiveness effects are observed only in explicit memory […] We suggest that the search for distinctiveness effects in implicit memory tasks is similar to the search for interference in implicit memory tasks […]: What is important is whether the information that is supposed to be distinct (in the former case) or interfere (in the latter) is relevant to the task. If it is not relevant, then the effects will not be seen.”
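Below is a stripped-down sketch of what I take to be the core mechanism of SIMPLE: items are placed at log-transformed temporal distances from the moment of retrieval, and an item’s retrievability is its self-similarity divided by its summed similarity to all items, so items with fewer close neighbours are more distinct and better recalled. This is my own simplification with made-up parameter values, not the full published model:

```python
import math

c = 10.0         # discriminability parameter (an illustrative value)
retention = 5.0  # seconds between the end of the list and the recall attempt (illustrative)

# Ten items presented one second apart; their temporal distances at retrieval,
# log-transformed, give each item's position on the relevant dimension.
distances = [retention + (9 - t) for t in range(10)]
positions = [math.log(d) for d in distances]

def retrievability(i):
    # Similarity falls off exponentially with distance on the log scale; an
    # item's retrievability is its self-similarity relative to its summed
    # similarity to every item (fewer close neighbours -> more distinct).
    sims = [math.exp(-c * abs(positions[i] - positions[j])) for j in range(len(positions))]
    return sims[i] / sum(sims)

for pos in range(10):
    print(pos + 1, round(retrievability(pos), 2))
# The last few items come out most distinct (a recency effect), with a smaller
# advantage for the very first item (a modest primacy effect).
```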

“[O]lder adults perform worse than younger adults on free recall but not on recognition (Craik & McDowd, 1987), even though both tasks are considered to tap “episodic” or “explicit” memory.2 Performance on another episodic memory task, cued recall, falls in between recall and recognition, although the size of the age-related difference varies […]. Older adults also experience more tip-of-the-tongue events than younger adults […] and have more word-finding difficulties in confrontation naming tasks in which, for example, a drawing is shown and the subject is asked to name the object […]. [I]tem recall is not as affected by aging as order recall […] in comparison to younger adults, older adults have more difficulty remembering the source of a memory […] In addition to recalling fewer correct items than younger adults (errors of omission), older adults are also more likely than younger adults to produce false positive responses in a variety of paradigms (errors of commission) […] Similarly, older adults are more likely than younger to report that an imagined event was real […] Due perhaps to reduced cognitive capacity older adults may appear to encode fewer specific details at encoding and thus are less able to take advantage of specific cues at retrieval. […] older adults and individuals with amnesia perform quite well on tasks that are supported by generic processing but less well (compared to younger adults) on those that require the recollection of a specific item or event. […] when older adults are asked to recall events from their own lives, they recall more from the period when they were approximately 15 to 25 years of age than they do from the period when they were approximately 30 to 40 years of age.” [Here’s incidentally an example paper exploring some of these topics in more detail: Modeling age-related differences in immediate memory using SIMPLE. As should be obvious from the title, the paper relates to the SIMPLE model discussed in the previous paragraph, US]

There is a large literature on the relation between attention and memory, and many times “memory” is used when a perhaps more accurate term is “attention” (see, for example, Baddeley, 1993; Cowan, 1995). […] As yet, we have no principles linking memory and attention.”

Forgetting is due to extrinsic factors; in particular, items that have more close neighbors in the region of psychological space at the time of retrieval are less likely to be remembered than items with fewer close neighbors […]. In addition, tasks that require specific information about the context in which memories were formed seem to be more vulnerable to interference or forgetting at the time of the retrieval attempt than those that can rely on more general information […]. Taken together, these principles suggest that the search for the forgetting function is not likely to be successful. […] a failure to remember can be due to multiple different causes, much like the failure of a car can be due to multiple different causes.” (Do here also keep in mind the comments included on this topic in the last paragraph of my first post about the book, US)

November 13, 2018 Posted by | Books, Psychology

Geophysics (I)

“Geophysics is a field of earth sciences that uses the methods of physics to investigate the physical properties of the Earth and the processes that have determined and continue to govern its evolution. Geophysical investigations cover a wide range of research fields, extending from surface changes that can be observed from Earth-orbiting satellites to unseen behaviour in the Earth’s deep interior. […] This book presents a general overview of the principal methods of geophysics that have contributed to our understanding of Planet Earth and how it works.”

I gave this book five stars on goodreads, where I deemed it: ‘An excellent introduction to the topic, with high-level yet satisfactorily detailed coverage of many areas of interest.’ It doesn’t cover these topics in the amount of detail they’re covered in books like Press & Siever (…a book which I incidentally covered, though not in much detail, here and here), but it’s a very decent introductory book on these topics. I have added some observations and links related to the first half of the book’s coverage below.

“The gravitational attractions of the other planets — especially Jupiter, whose mass is 2.5 times the combined mass of all the other planets — influence the Earth’s long-term orbital rotations in a complex fashion. The planets move with different periods around their differently shaped and sized orbits. Their gravitational attractions impose fluctuations on the Earth’s orbit at many frequencies, a few of which are more significant than the rest. One important effect is on the obliquity: the amplitude of the axial tilt is forced to change rhythmically between a maximum of 24.5 degrees and a minimum of 22.1 degrees with a period of 41,000 yr. Another gravitational interaction with the other planets causes the orientation of the elliptical orbit to change with respect to the stars […]. The line of apsides — the major axis of the ellipse — precesses around the pole to the ecliptic in a prograde sense (i.e. in the same sense as the Earth’s rotation) with a period of 100,000 yr. This is known as planetary precession. Additionally, the shape of the orbit changes with time […], so that the eccentricity varies cyclically between 0.005 (almost circular) and a maximum of 0.058; currently it is 0.0167 […]. The dominant period of the eccentricity fluctuation is 405,000 yr, on which a further fluctuation of around 100,000 yr is superposed, which is close to the period of the planetary precession.”

“The amount of solar energy received by a unit area of the Earth’s surface is called the insolation. […] The long-term fluctuations in the Earth’s rotation and orbital parameters influence the insolation […] and this causes changes in climate. When the obliquity is smallest, the axis is more upright with respect to the ecliptic than at present. The seasonal differences are then smaller and vary less between polar and equatorial regions. Conversely, a large axial tilt causes an extreme difference between summer and winter at all latitudes. The insolation at any point on the Earth thus changes with the obliquity cycle. Precession of the axis also changes the insolation. At present the north pole points away from the Sun at perihelion; one half of a precessional cycle later it will point away from the Sun at aphelion. This results in a change of insolation and an effect on climate with a period equal to that of the precession. The orbital eccentricity cycle changes the Earth–Sun distances at perihelion and aphelion, with corresponding changes in insolation. When the orbit is closest to being circular, the perihelion–aphelion difference in insolation is smallest, but when the orbit is more elongate this difference increases. In this way the changes in eccentricity cause long-term variations in climate. The periodic climatic changes due to orbital variations are called Milankovitch cycles, after the Serbian astronomer Milutin Milankovitch, who studied them systematically in the 1920s and 1930s. […] The evidence for cyclical climatic variations is found in geological sedimentary records and in long cores drilled into the ice on glaciers and in polar regions. […] Sedimentation takes place slowly over thousands of years, during which the Milankovitch cycles are recorded in the physical and chemical properties of the sediments. Analyses of marine sedimentary sequences deposited in the deep oceans over millions of years have revealed cyclical variations in a number of physical properties. Examples are bedding thickness, sediment colour, isotopic ratios, and magnetic susceptibility. […] The records of oxygen isotope ratios in long ice cores display Milankovitch cycles and are important evidence for the climatic changes, generally referred to as orbital forcing, which are brought about by the long-term variations in the Earth’s orbit and axial tilt.”
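
The orbital periods and ranges quoted above are easy to play around with. Below is a small Python sketch I’ve put together (it is not from the book; the purely sinusoidal shapes, and the ~26,000-year axial precession period which the quoted passage doesn’t give, are simplifying assumptions made for illustration only – this is not a climate model):

```python
import numpy as np

# Idealized Milankovitch components, using the periods and ranges quoted above.
t = np.linspace(0.0, 1.0e6, 100_001)                        # time in years

obliquity = 23.3 + 1.2 * np.sin(2 * np.pi * t / 41_000)     # 22.1-24.5 deg, 41 kyr cycle
eccentricity = (0.0315
                + 0.0235 * np.sin(2 * np.pi * t / 405_000)  # dominant 405 kyr cycle
                + 0.0030 * np.sin(2 * np.pi * t / 100_000)) # superposed ~100 kyr cycle
precession = np.sin(2 * np.pi * t / 26_000)                 # axial precession (assumed period)

# Climatic precession is usually taken as eccentricity-modulated precession:
# when the orbit is nearly circular, precession matters little for insolation.
climatic_precession = eccentricity * precession

for name, series in [("obliquity (deg)", obliquity),
                     ("eccentricity", eccentricity),
                     ("climatic precession index", climatic_precession)]:
    print(f"{name:28s} min {series.min():+.4f}  max {series.max():+.4f}")
```

The point of the exercise is just to see how the 41,000-, ~100,000-, and 405,000-year cycles superpose; an actual insolation calculation would of course require the full orbital solution.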

Stress is defined as the force acting on a unit area. The fractional deformation it causes is called strain. The stress–strain relationship describes the mechanical behaviour of a material. When subjected to a low stress, materials deform in an elastic manner so that stress and strain are proportional to each other and the material returns to its original unstrained condition when the stress is removed. Seismic waves usually propagate under conditions of low stress. If the stress is increased progressively, a material eventually reaches its elastic limit, beyond which it cannot return to its unstrained state. Further stress causes disproportionately large strain and permanent deformation. Eventually the stress causes the material to reach its breaking point, at which it ruptures. The relationship between stress and strain is an important aspect of seismology. Two types of elastic deformation—compressional and shear—are important in determining how seismic waves propagate in the Earth. Imagine a small block that is subject to a deforming stress perpendicular to one face of the block; this is called a normal stress. The block shortens in the direction it is squeezed, but it expands slightly in the perpendicular direction; when stretched, the opposite changes of shape occur. These reversible elastic changes depend on how the material responds to compression or tension. This property is described by a physical parameter called the bulk modulus. In a shear deformation, the stress acts parallel to the surface of the block, so that one edge moves parallel to the opposite edge, changing the shape but not the volume of the block. This elastic property is described by a parameter called the shear modulus. An earthquake causes normal and shear strains that result in four types of seismic wave. Each type of wave is described by two quantities: its wavelength and frequency. The wavelength is the distance between successive peaks of a vibration, and the frequency is the number of vibrations per second. Their product is the speed of the wave.”
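
To make the link between the two moduli and the wave speeds concrete: the standard expressions are v_p = √((K + 4μ/3)/ρ) for P-waves and v_s = √(μ/ρ) for S-waves (these are standard results, not quoted from the book). Here is a minimal Python sketch; the numerical values for a crustal rock are rough, illustrative assumptions of mine:

```python
import math

def p_wave_speed(K, mu, rho):
    """P-wave speed from bulk modulus K, shear modulus mu, density rho (SI units)."""
    return math.sqrt((K + 4 * mu / 3) / rho)

def s_wave_speed(mu, rho):
    """S-wave speed from shear modulus mu and density rho; a fluid (mu = 0) gives zero."""
    return math.sqrt(mu / rho)

# Rough, illustrative values for a crustal rock (assumptions, not from the book).
K = 50e9      # bulk modulus, Pa
mu = 30e9     # shear modulus, Pa
rho = 2700.0  # density, kg/m^3

vp = p_wave_speed(K, mu, rho)
vs = s_wave_speed(mu, rho)
print(f"vp = {vp:.0f} m/s, vs = {vs:.0f} m/s, vs/vp = {vs / vp:.2f}")

# Wavelength times frequency equals the wave speed, as stated in the quote.
frequency = 1.0  # Hz
print(f"wavelength of a {frequency} Hz P-wave: {vp / frequency:.0f} m")
```

With these made-up but plausible values v_s/v_p comes out at about 0.58, which matches the ‘about 58 per cent’ figure quoted in the next excerpt.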

“A seismic P-wave (also called a primary, compressional, or longitudinal wave) consists of a series of compressions and expansions caused by particles in the ground moving back and forward parallel to the direction in which the wave travels […] It is the fastest seismic wave and can pass through fluids, although with reduced speed. When it reaches the Earth’s surface, a P-wave usually causes nearly vertical motion, which is recorded by instruments and may be felt by people but usually does not result in severe damage. […] A seismic S-wave (i.e. secondary or shear wave) arises from shear deformation […] It travels by means of particle vibrations perpendicular to the direction of travel; for that reason it is also known as a transverse wave. The shear wave vibrations are further divided into components in the horizontal and vertical planes, labelled the SH- and SV-waves, respectively. […] an S-wave is slower than a P-wave, propagating about 58 per cent as fast […] Moreover, shear waves can only travel in a material that supports shear strain. This is the case for a solid object, in which the molecules have regular locations and intermolecular forces hold the object together. By contrast, a liquid (or gas) is made up of independent molecules that are not bonded to each other, and thus a fluid has no shear strength. For this reason S-waves cannot travel through a fluid. […] S-waves have components in both the horizontal and vertical planes, so when they reach the Earth’s surface they shake structures from side to side as well as up and down. They can have larger amplitudes than P-waves. Buildings are better able to resist up-and-down motion than side-to-side shaking, and as a result SH-waves can cause serious damage to structures. […] Surface waves spread out along the Earth’s surface around a point – called the epicentre – located vertically above the earthquake’s source […] Very deep earthquakes usually do not produce surface waves, but the surface waves caused by shallow earthquakes are very destructive. In contrast to seismic body waves, which can spread out in three dimensions through the Earth’s interior, the energy in a seismic surface wave is guided by the free surface. It is only able to spread out in two dimensions and is more concentrated. Consequently, surface waves have the largest amplitudes on the seismogram of a shallow earthquake […] and are responsible for the strongest ground motions and greatest damage. There are two types of surface wave. [Rayleigh waves & Love waves, US].”

“The number of earthquakes that occur globally each year falls off with increasing magnitude. Approximately 1.4 million earthquakes annually have magnitude 2 or larger; of these about 1,500 have magnitude 5 or larger. The number of very damaging earthquakes with magnitude above 7 varies from year to year but has averaged about 15-20 annually since 1900. On average, one earthquake per year has magnitude 8 or greater, although such large events occur at irregular intervals. A magnitude 9 earthquake may release more energy than the cumulative energy of all other earthquakes in the same year. […] Large earthquakes may be preceded by foreshocks, which are lesser events that occur shortly before and in the same region as the main shock. They indicate the build-up of stress that leads to the main rupture. Large earthquakes are also followed by smaller aftershocks on the same fault or near to it; their frequency decreases as time passes, following the main shock. Aftershocks may individually be large enough to have serious consequences for a damaged region, because they can cause already weakened structures to collapse. […] About 90 per cent of the world’s earthquakes and 75 per cent of its volcanoes occur in the circum-Pacific belt known as the ‘Ring of Fire‘. […] The relative motions of the tectonic plates at their margins, together with changes in the state of stress within the plates, are responsible for most of the world’s seismicity. Earthquakes occur much more rarely in the geographic interiors of the plates. However, large intraplate earthquakes do occur […] In 2001 an intraplate earthquake with magnitude 7.7 occurred on a previously unknown fault under Gujarat, India […], killing 20,000 people and destroying 400,000 homes. […] Earthquakes are a serious hazard for populations, their property, and the natural environment. Great effort has been invested in the effort to predict their occurrence, but as yet without general success. […] Scientists have made more progress in assessing the possible location of an earthquake than in predicting the time of its occurrence. Although a damaging event can occur whenever local stress in the crust exceeds the breaking point of underlying rocks, the active seismic belts where this is most likely to happen are narrow and well defined […]. Unfortunately many densely populated regions and great cities are located in some of the seismically most active regions.[…] it is not yet possible to forecast reliably where or when an earthquake will occur, or how large it is likely to be.”
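
The fall-off of earthquake counts with magnitude described above is usually summarized by the Gutenberg–Richter relation, log10 N(≥M) = a − bM. As a sanity check (my own back-of-the-envelope sketch, not something from the book), one can fit a and b to the two counts quoted above (≈1.4 million events of magnitude ≥ 2 and ≈1,500 of magnitude ≥ 5 per year) and see what the relation predicts at larger magnitudes; the energy comparison uses the standard scaling log10 E ≈ 1.5M + const:

```python
import math

# Annual cumulative counts quoted in the text (events of at least that magnitude).
n_m2, n_m5 = 1.4e6, 1.5e3

# Gutenberg-Richter: log10 N(>=M) = a - b*M. Solve for b and a from the two counts.
b = (math.log10(n_m2) - math.log10(n_m5)) / (5 - 2)
a = math.log10(n_m2) + b * 2
print(f"b ≈ {b:.2f}, a ≈ {a:.2f}")

for M in (7, 8, 9):
    print(f"predicted annual count of M >= {M}: {10 ** (a - b * M):.1f}")

# Radiated energy scales roughly as log10 E = 1.5*M + const, so each unit of
# magnitude corresponds to roughly a 32-fold increase in energy.
energy_ratio = 10 ** (1.5 * (9 - 8))
print(f"an M9 releases ~{energy_ratio:.0f}x the energy of an M8")
```

The fitted b is close to 1, and the extrapolated counts (~16 events of magnitude ≥ 7 and ~1–2 of magnitude ≥ 8 per year) are consistent with the figures quoted above.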

Links:

Plate tectonics.
Geodesy.
Seismology. Seismometer.
Law of conservation of energy. Second law of thermodynamics (This book incidentally covers these topics in much more detail, and does it quite well – US).
Angular momentum.
Big Bang model. Formation and evolution of the Solar System (…I should probably mention here that I do believe Wikipedia covers these sorts of topics quite well).
Invariable plane. Ecliptic.
Newton’s law of universal gravitation.
Kepler’s laws of planetary motion.
Potential energy. Kinetic energy. Orbital eccentricity. Line of apsides. Axial tilt. Figure of the Earth. Nutation. Chandler wobble.
Torque. Precession.
Very-long-baseline interferometry.
Reflection seismology.
Geophone.
Seismic shadow zone. Ray tracing (physics).
Structure of the Earth. Core–mantle boundary. D” region. Mohorovičić discontinuity. Lithosphere. Asthenosphere. Mantle transition zone.
Peridotite. Olivine. Perovskite.
Seismic tomography.
Lithoprobe project.
Orogenic belt.
European Geotraverse Project.
Microseism. Seismic noise.
Elastic-rebound theory. Fault (geology).
Richter magnitude scale (…of note: “the Richter scale underestimates the size of very large earthquakes with magnitudes greater than about 8.5”). Seismic moment. Moment magnitude scale. Modified Mercalli intensity scale. European macroseismic scale.
Focal mechanism.
Transform fault. Euler pole. Triple junction.
Megathrust earthquake.
Alpine fault. East African Rift.

November 1, 2018 Posted by | Astronomy, Books, Geology, Physics | Leave a comment

Black Hole Magnetospheres

The lecturer says ‘ah’ and ‘ehm’ a lot, especially in the beginning (it gets much better later in the talk), but that is no reason to pass on the lecture. The last five minutes after the wrap-up can safely be skipped without missing out on anything.

I’ve added some links related to the coverage below.

Astrophysical jet.
Magnetosphere.
The Optical Variability of the Quasar 3C 279: The Signature of a Decelerating Jet? (Böttcher & Principe, 2009).
The slope of the black-hole mass versus velocity dispersion correlation (Tremaine et al., 2002).
Radio-Loudness of Active Galactic Nuclei: Observational Facts and Theoretical Implications (Sikora, Stawarz & Lasota, 2007).
Jet Launching Structure Resolved Near the Supermassive Black Hole in M87 (Doeleman et al., 2012).
Event Horizon Telescope.
The effective acceleration of plasma outflow in the paraboloidal magnetic field (Beskin & Nokhrina, 2006).
Toroidal magnetic field.
Current sheet.
No-hair theorem.
Frame-dragging.
Alfvén velocity.
Lorentz factor.
Magnetic acceleration of ultrarelativistic jets in gamma-ray burst sources (Komissarov et al., 2009).
Asymptotic domination of cold relativistic MHD winds by kinetic energy flux (Begelman & Li, 1994).
Magnetic nozzle.
Mach cone.
Collimated beam.
Magnetohydrodynamic simulations of gamma-ray burst jets: Beyond the progenitor star (Tchekhovskoy, Narayan & McKinney, 2010).

October 31, 2018 Posted by | Astronomy, Lectures, Physics, Studies | Leave a comment

Oncology (II)

Here’s my first post in this series. Below are some more quotes and links related to the book’s coverage.

Types of Pain
1. Nociceptive pain
a. Somatic pain: Cutaneous or musculoskeletal tissue (ie, bone, soft tissue metastases). Usually well-localized, increased w/use/movement.
b. Visceral pain: Compression, obstruction, infiltration, ischemia, stretching, or inflammation of solid & hollow viscera. Diffuse, nonfocal.
2. Neuropathic pain: Direct injury/dysfunction of peripheral or CNS tissues. Typically burning, radiating, may increase at rest or w/nerve stretching.
Pain emergencies: Pain crisis, spinal cord compression, fracture, bowel obstruction, severe mucositis, acute severe side effects of opioids (addiction crisis, delirium, respiratory depression), severe pain in imminently dying pt [patient, US]
Pain mgmt at the end of life is a moral obligation to alleviate pain & unnecessary suffering & is not euthanasia. (Vacco vs. Quill, U.S. Supreme Court, 1997)”

Nausea and Vomiting
Chemotherapy-induced N/V — 3 distinct types: Acute, delayed, & anticipatory. Acute begins 1–2 h after chemotherapy & peaks at 4–6 h, delayed begins at 24 h & peaks at 48–72 h, anticipatory is conditioned response to nausea a/w previous cycles of chemotherapy”

Constipation […] affects 50% of pts w/advanced CA; majority of pts being treated w/opioid analgesics, other contributors: malignant obstruction, ↓ PO/fluid intake, inactivity, anticholinergics, electrolyte derangement”

Fatigue
Prevalence/screening — occurs in up to 75% of all solid tumor pts & up to 99% of CA pts receiving multimodality Rx. Providers should screen for fatigue at initial visit, at dx [diagnosis, US] of advanced dz [disease] & w/each chemo visit; should assess for depression & insomnia w/new dx of fatigue (JCO 2008;23:3886) […] Several common 2° causes to eval & target include anemia (most common), thyroid or adrenal insufficiency, hypogonadism”

Delirium
*Definition — disturbances in level of consciousness, attention, cognition, and/or perception developing abruptly w/fluctuations over course of d *Clinical subtypes — hyperactive, hypoactive, & mixed […] *Maximize nonpharm intervention prior to pharmacology […] *Use of antipsychotics should be geared toward short-term use for acute sx [symptoms, US] *Benzodiazepines should only be initiated for delirium as an adjunct to antipsychotics in setting of agitation despite adequate antipsychotic dosing (JCO 2011;30:1206)”

Cancer Survivorship
Overview *W/improvement in dx & tx of CA, there are millions of CA survivors, & this number is increasing
*Pts experience the normal issues of aging, w/c are compounded by the long-term effects of CA & CA tx
*CA survivors are at ↑ risk of developing morbidity & illnesses at younger age than general population due to their CA tx […] ~312,570 male & ~396,080 female CA survivors <40 y of age (Cancer Treatment and Survivorship Facts and Figures 2016–2017, ACS) *Fertility is an important issue for survivors & there is considerable concern about the possibility of impairment (Human Reproduction Update 2009;15:587)”

“Pts undergoing cancer tx are at ↑ risk for infxn [infection, US] due to disease itself or its therapies. […] *Epidemiology: 10–50% of pts w/ solid tumors & >80% of pts with hematologic tumors *Source of infxn evident in only 20–30% of febrile episodes *If identified, common sites of infxn include blood, lungs, skin, & GI tract *Regardless of microbiologic diagnosis, Rx should be started within 2 h of fever onset which improves outcomes […] [Infections in the transplant host is the] Primary cause of death in 8% of auto-HCT & up to 20% of allo-HCT recipients” [here’s a relevant link, US].

Localized prostate cancer
*Epidemiology Incidence: ~180000, most common non-skin CA (2016: U.S. est.) (CA Cancer J Clin 2016:66:7) *Annual Mortality: ~26000, 2nd highest cause of cancer death in men (2016: U.S. est) […] Mortality benefit from screening asx [asymptomatic, US] men has not been definitively established, & individualized discussion of potential benefits & harms should occur before PSA testing is offered. […] Gleason grade reflects growth/differentiation pattern & ranges from 1–5, from most to least differentiated. […] Historical (pre-PSA) 15-y prostate CA mortality risk for conservatively managed (no surgery or RT) localized Gleason 6: 18–30%, Gleason 7: 42–70%, Gleason 8–10: 60–87% (JAMA 1998:280:975)”

Bladder cancer […] Most common malignancy of the urinary system, ~77000 Pts will be diagnosed in the US in 2016, ~16000 will die of their dz. […] Presenting sx: Painless hematuria (typically intermittent & gross), irritative voiding sx (frequency, urgency, dysuria), flank or suprapubic pain (symptomatic of locally advanced dz), constitutional sx (fatigue, wt loss, failure to thrive) usually symptomatic of met [metastatic, US] dz”

Links:

WHO analgesia ladder. (But see also this – US).
Renal cell carcinoma (“~63000 new cases & ~1400 deaths in the USA in 2016 […] Median age dx 64, more prevalent in men”)
Germ cell tumour (“~8720 new cases of testicular CA in the US in 2016 […] GCT is the most common CA in men ages of 15 to 35 y/o”)
Non-small-cell lung carcinoma (“225K annual cases w/ 160K US deaths, #1 cause of cancer mortality; 70% stage III/IV *Cigarette smoking: 85% of all cases, ↑ w/ intensity & duration of smoking”)
Small-cell lung cancer. (“SCLC accounts for 13–15% of all lung CAs, seen almost exclusively in smokers, majority w/ extensive stage dz at dx (60–70%)”). Lambert–Eaton myasthenic syndrome (“Affects 3% of SCLC pts”).
Thymoma. Myasthenia gravis. Morvan’s syndrome. Masaoka-Koga Staging system.
Pleural mesothelioma (“Rare; ≅3000 new cases dx annually in US. Commonly develops in the 5th to 7th decade […] About 80% are a/w asbestos exposure. […] Develops decades after asbestos exposure, averaging 30–40 years […] Median survival: 10 mo. […] Screening has not been shown to ↓ mortality even in subjects w/ asbestos exposure”)
Hepatocellular Carcinoma (HCC). (“*6th most common CA worldwide (626,000/y) & 2nd leading cause of worldwide CA mortality (598,000/y) *>80% cases of HCC occur in sub-Saharan Africa, eastern & southeastern Asia, & parts of Oceania including Papua New Guinea *9th leading cause of CA mortality in US […] Viral hepatitis: HBV & HCV are the leading RFs for HCC & accounts for 75% cases worldwide […] While HCV is now the leading cause of HCC in the US, NASH is expected to become a risk factor of increasing importance in the next decade”). Milan criteria.
Cholangiocarcinoma. Klatskin tumor. Gallbladder cancer. Courvoisier’s sign.
Pancreatic cancer (“Incidence: estimated ~53,070 new cases/y & ~42,780 D/y in US (NCI SEER); 4th most common cause of CA death in US men & women; estimated to be 2nd leading cause of CA-related mortality by 2020”). Trousseau sign of malignancy. Whipple procedure.

October 21, 2018 Posted by | Books, Cancer/oncology, Gastroenterology, Medicine, Nephrology, Neurology, Psychiatry | Leave a comment

Perception (I)

Here’s my short goodreads review of the book. In this post I’ll include some observations and links related to the first half of the book’s coverage.

“Since the 1960s, there have been many attempts to model the perceptual processes using computer algorithms, and the most influential figure of the last forty years has been David Marr, working at MIT. […] Marr and his colleagues were responsible for developing detailed algorithms for extracting (i) low-level information about the location of contours in the visual image, (ii) the motion of those contours, and (iii) the 3-D structure of objects in the world from binocular disparities and optic flow. In addition, one of his lasting achievements was to encourage researchers to be more rigorous in the way that perceptual tasks are described, analysed, and formulated and to use computer models to test the predictions of those models against human performance. […] Over the past fifteen years, many researchers in the field of perception have characterized perception as a Bayesian process […] According to Bayesian theory, what we perceive is a consequence of probabilistic processes that depend on the likelihood of certain events occurring in the particular world we live in. Moreover, most Bayesian models of perceptual processes assume that there is noise in the sensory signals and the amount of noise affects the reliability of those signals – the more noise, the less reliable the signal. Over the past fifteen years, Bayes theory has been used extensively to model the interaction between different discrepant cues, such as binocular disparity and texture gradients to specify the slant of an inclined surface.”
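
The cue-combination idea mentioned at the end of that quote can be made concrete with the standard reliability-weighted (maximum-likelihood) rule used in many of these Bayesian models: each cue is weighted in proportion to its inverse variance. Here is a minimal Python sketch; the slant estimates and noise levels are made-up numbers, purely for illustration:

```python
import numpy as np

def fuse_cues(estimates, sigmas):
    """Reliability-weighted (maximum-likelihood) combination of independent
    Gaussian cues: each cue's weight is proportional to its inverse variance."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = precisions / precisions.sum()
    fused_estimate = float(weights @ estimates)
    fused_sigma = float(1.0 / np.sqrt(precisions.sum()))
    return fused_estimate, fused_sigma, weights

# Hypothetical slant estimates (degrees) from two discrepant cues; numbers are made up.
slant_estimates = [30.0, 40.0]   # binocular disparity says 30 deg, texture says 40 deg
cue_noise_sd = [2.0, 6.0]        # disparity is taken to be the less noisy cue here

estimate, sd, weights = fuse_cues(slant_estimates, cue_noise_sd)
print(f"cue weights: {np.round(weights, 2)}")
print(f"combined slant estimate: {estimate:.1f} deg (sd {sd:.1f})")
```

The combined estimate ends up much closer to the disparity cue (31 degrees), simply because that cue was assumed to be the less noisy of the two.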

“All surfaces have the property of reflectance — that is, the extent to which they reflect (rather than absorb) the incident illumination — and those reflectances can vary between 0 per cent and 100 per cent. Surfaces can also be selective in the particular wavelengths they reflect or absorb. Our colour vision depends on these selective reflectance properties […]. Reflectance characteristics describe the physical properties of surfaces. The lightness of a surface refers to a perceptual judgement of a surface’s reflectance characteristic — whether it appears as black or white or some grey level in between. Note that we are talking about the perception of lightness — rather than brightness — which refers to our estimate of how much light is coming from a particular surface or is emitted by a source of illumination. The perception of surface lightness is one of the most fundamental perceptual abilities because it allows us not only to differentiate one surface from another but also to identify the real-world properties of a particular surface. Many textbooks start with the observation that lightness perception is a difficult task because the amount of light reflected from a particular surface depends on both the reflectance characteristic of the surface and the intensity of the incident illumination. For example, a piece of black paper under high illumination will reflect back more light to the eye than a piece of white paper under dim illumination. As a consequence, lightness constancy — the ability to correctly judge the lightness of a surface under different illumination conditions — is often considered to be an ‘achievement’ of the perceptual system. […] The alternative starting point for understanding lightness perception is to ask whether there is something that remains constant or invariant in the patterns of light reaching the eye with changes of illumination. In this case, it is the relative amount of light reflected off different surfaces. Consider two surfaces that have different reflectances—two shades of grey. The actual amount of light reflected off each of the surfaces will vary with changes in the illumination but the relative amount of light reflected off the two surfaces remains the same. This shows that lightness perception is necessarily a spatial task and hence a task that cannot be solved by considering one particular surface alone. Note that the relative amount of light reflected off different surfaces does not tell us about the absolute lightnesses of different surfaces—only their relative lightnesses […] Can our perception of lightness be fooled? Yes, of course it can and the ways in which we make mistakes in our perception of the lightnesses of surfaces can tell us much about the characteristics of the underlying processes.”
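
The core point – that ratios of reflected light are invariant under changes of illumination while absolute amounts are not – is easy to verify with trivial arithmetic. A minimal sketch, using the simple model luminance = reflectance × illumination and arbitrary numbers of my own choosing:

```python
# Two grey surfaces with different reflectances (fraction of incident light reflected).
reflectance_a, reflectance_b = 0.2, 0.8   # arbitrary illustrative values

for illumination in (10.0, 100.0, 1000.0):           # arbitrary illumination intensities
    luminance_a = reflectance_a * illumination        # light reaching the eye from surface A
    luminance_b = reflectance_b * illumination        # ... and from surface B
    print(f"illumination {illumination:6.0f}: "
          f"A -> {luminance_a:6.1f}, B -> {luminance_b:6.1f}, "
          f"ratio A/B = {luminance_a / luminance_b:.2f}")
# The absolute values change tenfold at each step, but the ratio (0.25) never does.
```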

“From a survival point of view, the ability to differentiate objects and surfaces in the world by their ‘colours’ (spectral reflectance characteristics) can be extremely useful […] Most species of mammals, birds, fish, and insects possess several different types of receptor, each of which has a different spectral sensitivity function […] having two types of receptor with different spectral sensitivities is the minimum necessary for colour vision. This is referred to as dichromacy and the majority of mammals are dichromats with the exception of the old world monkeys and humans. […] The only difference between lightness and colour perception is that in the latter case we have to consider the way a surface selectively reflects (and absorbs) different wavelengths, rather than just a surface’s average reflectance over all wavelengths. […] The similarities between the tasks of extracting lightness and colour information mean that we can ask a similar question about colour perception [as we did about lightness perception] – what is the invariant information that could specify the reflectance characteristic of a surface? […] The information that is invariant under changes of spectral illumination is the relative amounts of long, medium, and short wavelength light reaching our eyes from different surfaces in the scene. […] the successful identification and discrimination of coloured surfaces is dependent on making spatial comparisons between the amounts of short, medium, and long wavelength light reaching our eyes from different surfaces. As with lightness perception, colour perception is necessarily a spatial task. It follows that if a scene is illuminated by the light of just a single wavelength, the appropriate spatial comparisons cannot be made. This can be demonstrated by illuminating a real-world scene containing many different coloured objects with yellow, sodium light that contains only a single wavelength. All objects, whatever their ‘colours’, will only reflect back to the eye different intensities of that sodium light and hence there will only be absolute but no relative differences between the short, medium, and long wavelength lightness records. There is a similar, but less dramatic, effect on our perception of colour when the spectral characteristics of the illumination are restricted to just a few wavelengths, as is the case with fluorescent lighting.”

“Consider a single receptor mechanism, such as a rod receptor in the human visual system, that responds to a limited range of wavelengths—referred to as the receptor’s spectral sensitivity function […]. This hypothetical receptor is more sensitive to some wavelengths (around 550 nm) than others and we might be tempted to think that a single type of receptor could provide information about the wavelength of the light reaching the receptor. This is not the case, however, because an increase or decrease in the response of that receptor could be due to either a change in the wavelength or an increase or decrease in the amount of light reaching the receptor. In other words, the output of a given receptor or receptor type perfectly confounds changes in wavelength with changes in intensity because it has only one way of responding — that is, more or less. This is Rushton’s Principle of Univariance — there is only one way of varying or one degree of freedom. […] On the other hand, if we consider a visual system with two different receptor types, one more sensitive to longer wavelengths (L) and the other more sensitive to shorter wavelengths (S), there are two degrees of freedom in the system and thus the possibility of signalling our two independent variables — wavelength and intensity […] it is quite possible to have a colour visual system that is based on just two receptor types. Such a colour visual system is referred to as dichromatic.”
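
Rushton’s principle of univariance is easy to demonstrate numerically: if a receptor’s output is modelled as intensity × spectral sensitivity, one receptor type cannot distinguish a change of wavelength from a change of intensity, whereas two receptor types with different sensitivity curves produce a pattern of activity that can. The Gaussian sensitivity curves below are toy assumptions of mine, not measured data:

```python
import math

def sensitivity(wavelength, peak, width=60.0):
    """Toy Gaussian spectral sensitivity curve (illustrative, not measured data)."""
    return math.exp(-((wavelength - peak) / width) ** 2)

def response(intensity, wavelength, peak):
    """A receptor's output in this model: intensity times spectral sensitivity."""
    return intensity * sensitivity(wavelength, peak)

# One receptor type (peak at 550 nm): two very different stimuli, near-identical output.
r_dim_550 = response(1.0, 550, peak=550)     # unit-intensity light at the peak wavelength
r_bright_600 = response(2.0, 600, peak=550)  # twice the intensity at a longer wavelength
print(f"single receptor: {r_dim_550:.3f} vs {r_bright_600:.3f} (confounded)")

# Two receptor types (peaks at 440 nm and 560 nm): the same two stimuli now give
# different patterns of activity, so wavelength and intensity can be teased apart.
for intensity, wavelength in [(1.0, 550), (2.0, 600)]:
    s = response(intensity, wavelength, peak=440)
    l = response(intensity, wavelength, peak=560)
    print(f"I={intensity:.1f}, wavelength={wavelength} nm -> "
          f"S={s:.3f}, L={l:.3f}, L/S={l / s:.1f}")
```

The single receptor gives virtually the same output for a dim 550 nm light and a brighter 600 nm light, whereas the pattern (ratio) of the two receptor outputs separates the two stimuli cleanly.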

“So why is the human visual system trichromatic? The answer can be found in a phenomenon known as metamerism. So far, we have restricted our discussion to the effect of a single wavelength on our dichromatic visual system: for example, a single wavelength of around 550 nm that stimulated both the long and short receptor types about equally […]. But what would happen if we stimulated our dichromatic system with light of two different wavelengths at the same time — one long wavelength and one short wavelength? With a suitable choice of wavelengths, this combination of wavelengths would also have the effect of stimulating the two receptor types about equally […] As a consequence, the output of the system […] with this particular mixture of wavelengths would be indistinguishable from that created by the single wavelength of 550 nm. These two indistinguishable stimulus situations are referred to as metamers and a little thought shows that there would be many thousands of combinations of wavelength that produce the same activity […] in a dichromatic visual system. As a consequence, all these different combinations of wavelengths would be indistinguishable to a dichromatic observer, even though they were produced by very different combinations of wavelengths. […] Is there any way of avoiding the problem of metamerism? The answer is no but we can make things better. If a visual system had three receptor types rather than two, then many of the combinations of wavelengths that produce an identical pattern of activity in two of the mechanisms (L and S) would create a different amount of activity in our third receptor type (M) that is maximally sensitive to medium wavelengths. Hence the number of indistinguishable metameric matches would be significantly reduced but they would never be eliminated. Using the same logic, it follows that a further increase in the number of receptor types (beyond three) would reduce the problem of metamerism even more […]. There would, however, also be a cost. Having more distinct receptor types in a finite-sized retina would increase the average spacing between the receptors of the same type and thus make our acuity for fine detail significantly poorer. There are many species, such as dragonflies, with more than three receptor types in their eyes but the larger number of receptor types typically serves to increase the range of wavelengths to which the animal is sensitive into the infra-red or ultra-violet parts of the spectrum, rather than to reduce the number of metamers. […] the sensitivity of the short wavelength receptors in the human eye only extends to ~540 nm — the S receptors are insensitive to longer wavelengths. This means that human colour vision is effectively dichromatic for combinations of wavelengths above 540 nm. In addition, there are no short wavelength cones in the central fovea of the human retina, which means that we are also dichromatic in the central part of our visual field. The fact that we are unaware of this lack of colour vision is probably due to the fact that our eyes are constantly moving. […] It is […] important to appreciate that the description of the human colour visual system as trichromatic is not a description of the number of different receptor types in the retina – it is a property of the whole visual system.”
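
Metamerism falls out of the same sort of toy model: one can solve for a mixture of a short and a long wavelength that produces exactly the same pair of responses in two receptor types as a single intermediate wavelength does, and then check that a third, medium-wavelength receptor tells the two stimuli apart. The sketch below is my own illustration (toy Gaussian sensitivities again, not real cone data):

```python
import numpy as np

def sens(wl, peak, width=60.0):
    """Toy Gaussian spectral sensitivity (illustrative model, not real cone data)."""
    return np.exp(-((wl - peak) / width) ** 2)

peaks = {"S": 440.0, "M": 530.0, "L": 560.0}

# Target: a single 550 nm light of unit intensity, seen by the S and L receptors only.
target = np.array([sens(550, peaks["S"]), sens(550, peaks["L"])])

# Find intensities of a 480 nm + 630 nm mixture giving the same S and L responses.
A = np.array([[sens(480, peaks["S"]), sens(630, peaks["S"])],
              [sens(480, peaks["L"]), sens(630, peaks["L"])]])
mix = np.linalg.solve(A, target)
print("mixture intensities (480 nm, 630 nm):", mix.round(3))

# The dichromatic (S, L) responses match ...
print("S,L for 550 nm      :", target.round(3))
print("S,L for the mixture :", (A @ mix).round(3))

# ... but a third, medium-wavelength receptor distinguishes the two stimuli.
m_single = sens(550, peaks["M"])
m_mixture = mix[0] * sens(480, peaks["M"]) + mix[1] * sens(630, peaks["M"])
print(f"M response: single wavelength = {m_single:.3f}, mixture = {m_mixture:.3f}")
```

The dichromatic (S, L) responses match to machine precision, but the M receptor responds very differently to the single wavelength and to the mixture – which is exactly why adding a third receptor type reduces, without eliminating, the number of metamers.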

“Recent research has shown that although the majority of humans are trichromatic there can be significant differences in the precise matches that individuals make when matching colour patches […] the absence of one receptor type will result in a greater number of colour confusions than normal and this does have a significant effect on an observer’s colour vision. Protanopia is the absence of long wavelength receptors, deuteranopia the absence of medium wavelength receptors, and tritanopia the absence of short wavelength receptors. These three conditions are often described as ‘colour blindness’ but this is a misnomer. We are all colour blind to some extent because we all suffer from colour metamerism and fail to make discriminations that would be very apparent to any biological or machine vision system with a greater number of receptor types. For example, most stomatopod crustaceans (mantis shrimps) have twelve different visual pigments and they also have the ability to detect both linear and circularly polarized light. What I find interesting is that we believe, as trichromats, that we have the ability to discriminate all the possible shades of colour (reflectance characteristics) that exist in our world. […] we are typically unaware of the limitations of our visual systems because we have no way of comparing what we see normally with what would be seen by a ‘better’ visual system.”

“We take it for granted that we are able to segregate the visual input into separate objects and distinguish objects from their backgrounds and we rarely make mistakes except under impoverished conditions. How is this possible? In many cases, the boundaries of objects are defined by changes of luminance and colour and these changes allow us to separate or segregate an object from its background. But luminance and colour changes are also present in the textured surfaces of many objects and therefore we need to ask how it is that our visual system does not mistake these luminance and colour changes for the boundaries of objects. One answer is that object boundaries have special characteristics. In our world, most objects and surfaces are opaque and hence they occlude (cover) the surface of the background. As a consequence, the contours of the background surface typically end—they are ‘terminated’—at the boundary of the occluding object or surface. Quite often, the occluded contours of the background are also revealed at the opposite side of the occluding surface because they are physically continuous. […] The impression of occlusion is enhanced if the occluded contours contain a range of different lengths, widths, and orientations. In the natural world, many animals use colour and texture to camouflage their boundaries as well as to fool potential predators about their identity. […] There is an additional source of information — relative motion — that can be used to segregate a visual scene into objects and their backgrounds and to break any camouflage that might exist in a static view. A moving, opaque object will progressively occlude and dis-occlude (reveal) the background surface so that even a well-camouflaged, moving animal will give away its location. Hence it is not surprising that a very common and successful strategy of many animals is to freeze in order not to be seen. Unless the predator has a sophisticated visual system to break the pattern or colour camouflage, the prey will remain invisible.”

Some links:

Perception.
Ames room. Inverse problem in optics.
Hermann von Helmholtz. Richard Gregory. Irvin Rock. James Gibson. David Marr. Ewald Hering.
Optical flow.
La dioptrique.
Necker cube. Rubin’s vase.
Perceptual constancy. Texture gradient.
Ambient optic array.
Affordance.
Luminance.
Checker shadow illusion.
Shape from shading/Photometric stereo.
Colour vision. Colour constancy. Retinex model.
Cognitive neuroscience of visual object recognition.
Motion perception.
Horace Barlow. Bernhard Hassenstein. Werner E. Reichardt. Sigmund Exner. Jan Evangelista Purkyně.
Phi phenomenon.
Motion aftereffect.
Induced motion.

October 14, 2018 Posted by | Biology, Books, Ophthalmology, Physics, Psychology | Leave a comment

Oncology (I)

I really disliked the ‘Pocket…’ part of this book, so I’ll sort of pretend to overlook that aspect in my coverage of the book here as well. This’ll be a hard thing to do, given the way the book is written – I refer to my goodreads review for details; here I’ll include only one illustrative quote from that review:

“In terms of content, the book probably compares favourably with many significantly longer oncology texts (mainly, but certainly not only, because of the publication date). In terms of readability it compares unfavourably to an Egyptian translation of Alan Sokal’s 1996 article in Social Text, if it were translated by a 12-year old dyslexic girl.”

I don’t yet know in how much detail I’ll blog the book; this may end up being the only post about the book, or I may decide to post a longer sequence of posts. The book is hard to blog, which is an argument against covering it in detail – and also the reason why I haven’t already blogged it – but some of the content included in the book is really, really nice stuff to know, which is a strong argument in favour of covering at least some of the material here. The book contains a great deal of material, so regardless of the level of detail of my future coverage, a lot of interesting content will of necessity be left out.

My coverage below includes some observations and links related to the first 100 pages of the book.

“Understanding Radiation Response: The 4 Rs of Radiobiology
Repair of sublethal damage
Reassortment of cells w/in the cell cycle
Repopulation of cells during the course of radiotherapy
Reoxygenation of hypoxic cells […]

*Oxygen enhances DNA damage induced by free radicals, thereby facilitating the indirect action of IR [ionizing radiation, US] *Biologically equivalent dose can vary by a factor of 2–3 depending upon the presence or absence of oxygen (referred to as the oxygen enhancement ratio) *Poorly oxygenated postoperative beds frequently require higher doses of RT than preoperative RT [radiation therapy] […] Chemotherapy is frequently used sequentially or concurrently w/radiotherapy to maximize therapeutic benefit. This has improved pt outcomes although also a/w ↑ overall tox. […] [Many chemotherapeutic agents] show significant synergy with RT […] Mechanisms for synergy vary widely: Include cell cycle effects, hypoxic cell sensitization, & modulation of the DNA damage response”.

“Specific dose–volume relationships have been linked to the risk of late organ tox. […] *Dose, volume, underlying genetics, and age of the pt at the time of RT are critical determinants of the risk for 2° malignancy *The likelihood of 2° CA is correlated w/dose, but there is no threshold dose below which there is no additional risk of 2° malignancy *Latent period for radiation-induced solid tumors is generally between 10 and 60 y […]. Latent period for leukemias […] is shorter — peak between 5 & 7 y.”

“The immune system plays an important role in CA surveillance; Rx’s that modulate & amplify the immune system are referred to as immunotherapies […] tumors escape the immune system via loss of molecules on tumor cells important for immune activation […]; tumors can secrete immunosuppressing cytokines (IL-10 & TGF-β) & downregulate IFN-γ; in addition, tumors often express nonmutated self-Ag, w/c the immune system will, by definition, not react against; tumors can express molecules that inhibit T-cell function […] Ubiquitous CD47 (Don’t eat me signal) with ↑ expression on tumor cells mediates escape from phagocytosis. *Tumor microenvironment — immune cells are found in tumors, the exact composition of these cells has been a/w [associated with, US] pt outcomes; eg, high concentration of tumor-infiltrating lymphocytes (CD8+ cells) are a/w better outcomes & ↑ response to chemotherapy, Tregs & myeloid-derived suppressor cells are a/w worse outcomes, the exact role of Th17 in tumors is still being elucidated; the milieu of cytokines & chemokines also plays a role in outcome; some cytokines (VEGF, IL-1, IL-8) lead to endothelial cell proliferation, migration, & activation […] Expression of PD-L1 in tumor microenvironment can be indicator of improved likelihood of response to immune checkpoint blockade. […] Tumor mutational load correlates w/increased response to immunotherapy (NEJM; 2014;371:2189.).”

“Over 200 hereditary CA susceptibility syndromes, most are rare […]. Inherited CAs arise from highly penetrant germline mts [mutations, US]; “familial” CAs may be caused by interaction of low-penetrance genes, gene–environment interactions, or both. […] Genetic testing should be done based on individual’s probability of being a mt carrier & after careful discussion & informed consent”.

Pharmacogenetics: Effect of heritable genes on response to drugs. Study of single genes & interindividual differences in drug metabolizing enzymes. Pharmacogenomics: Effect of inherited & acquired genetic variation on drug response. Study of the functions & interactions of all genes in the genome & how the overall variability of drug response may be used to predict the right tx in individual pts & to design new drugs. Polymorphisms: Common variations in a DNA sequence that may lead to ↓ or ↑ activity of the encoded gene (SNP, micro- & minisatellites). SNPs: Single nucleotide polymorphisms that may cause an amino acid exchange in the encoded protein, account for >90% of genetic variation in the human genome.”

Tumor lysis syndrome [TLS] is an oncologic emergency caused by electrolyte abnormalities a/w spontaneous and/or tx-induced cell death that can be potentially fatal. […] 4 key electrolyte abnormalities 2° to excessive tumor/cell lysis: *Hyperkalemia *Hyperphosphatemia *Hypocalcemia *Hyperuricemia (2° to catabolism of nucleic acids) […] Common Malignancies Associated with a High Risk of Developing TLS in Adult Patients [include] *Acute leukemias [and] *High-grade lymphomas such as Burkitt lymphoma & DLBCL […] [Disease] characteristics a/w TLS risk: Rapidly progressive, chemosensitive, myelo- or lymphoproliferative [disease] […] [Patient] characteristics a/w TLS risk: *Baseline impaired renal function, oliguria, exposure to nephrotoxins, hyperuricemia *Volume depletion/inadequate hydration, acidic urine”.

Hypercalcemia [affects] ~10–30% of all pts w/malignancy […] Symptoms: Polyuria/polydipsia, intravascular volume depletion, AKI, lethargy, AMS [Altered Mental Status, US], rarely coma/seizures; N/V [nausea/vomiting, US] […] Osteolytic Bone Lesions [are seen in] ~20% of all hyperCa of malignancy […] [Treat] underlying malignancy, only way to effectively treat, all other tx are temporizing”.

“National Consensus Project definition: Palliative care means patient and family-centered care that optimizes quality of life by anticipating, preventing, and treating suffering. Palliative care throughout the continuum of illness involves addressing physical, intellectual, emotional, social, and spiritual needs to facilitate patient autonomy, access to information, and choice.” […] *Several RCTs have supported the integration of palliative care w/oncologic care, but specific interventions & models of care have varied. Expert panels at NCCN & ASCO recently reviewed the data to release evidence-based guidelines. *NCCN guidelines (2016): “Palliative care should be initiated by the primary oncology team and then augmented by collaboration with an interdisciplinary team of palliative care experts… All cancer patients should be screened for palliative care needs at their initial visit, at appropriate intervals, and as clinically indicated.” *ASCO guideline update (2016): “Inpatients and outpatients with advanced cancer should receive dedicated palliative care services, early in the disease course, concurrent with active tx. Referral of patients to interdisciplinary palliative care teams is optimal […] Essential Components of Palliative Care (ASCO) *Rapport & relationship building w/pts & family caregivers *Symptom, distress, & functional status mgmt (eg, pain, dyspnea, fatigue, sleep disturbance, mood, nausea, or constipation) *Exploration of understanding & education about illness & prognosis *Clarification of tx goals *Assessment & support of coping needs (eg, provision of dignity therapy) *Assistance w/medical decision making *Coordination w/other care providers *Provision of referrals to other care providers as indicated […] Useful Communication Tips *Use open-ended questions to elicit pt concerns *Clarify how much information the pt would like to know […] Focus on what can be done (not just what can’t be done) […] Remove the phrase “do everything” from your medical vocabulary […] Redefine hope by supporting realistic & achievable goals […] make empathy explicit”.

Some links:

Radiation therapy.
Brachytherapy.
External beam radiotherapy.
Image-guided radiation therapy.
Stereotactic Radiosurgery.
Total body irradiation.
Cancer stem cell.
Cell cycle.
Carcinogenesis. Oncogene. Tumor suppressor gene. Principles of Cancer Therapy: Oncogene and Non-oncogene Addiction.
Cowden syndrome. Peutz–Jeghers syndrome. Familial Atypical Multiple Mole Melanoma Syndrome. Li–Fraumeni syndrome. Lynch syndrome. Turcot syndrome. Muir–Torre syndrome. Von Hippel–Lindau disease. Gorlin syndrome. Werner syndrome. Birt–Hogg–Dubé syndrome. Neurofibromatosis type I. Neurofibromatosis type 2.
Knudson hypothesis.
DNA sequencing.
Cytogenetics.
Fluorescence in situ hybridization.
CAR T Cell therapy.
Antimetabolite. Alkylating antineoplastic agent. Antimicrotubule agents/mitotic inhibitors. Chemotherapeutic agents. Topoisomerase inhibitor. Monoclonal antibodies. Bisphosphonates. Proteasome inhibitors. [The book covers all of these agents, and others I for one reason or another decided not to include, in great detail, listing many different types of agents and including notes on dosing, pharmacokinetics & pharmacodynamics, associated adverse events and drug interactions etc. These parts of the book were very interesting, but they are impossible to blog – US].
Syndrome of inappropriate antidiuretic hormone secretion.
Acute lactic acidosis (“Often seen w/liver mets or rapidly dividing heme malignancies […] High mortality despite aggressive tx [treatment]”).
Superior vena cava syndrome.

October 12, 2018 Posted by | Biology, Books, Cancer/oncology, Genetics, Immunology, Medicine, Pharmacology | Leave a comment

Principles of memory (II)

I have added a few more quotes from the book below:

Watkins and Watkins (1975, p. 443) noted that cue overload is “emerging as a general principle of memory” and defined it as follows: “The efficiency of a functional retrieval cue in effecting recall of an item declines as the number of items it subsumes increases.” As an analogy, think of a person’s name as a cue. If you know only one person named Katherine, the name by itself is an excellent cue when asked how Katherine is doing. However, if you also know Cathryn, Catherine, and Kathryn, then it is less useful in specifying which person is the focus of the question. More formally, a number of studies have shown experimentally that memory performance systematically decreases as the number of items associated with a particular retrieval cue increases […] In many situations, a decrease in memory performance can be attributed to cue overload. This may not be the ultimate explanation, as cue overload itself needs an explanation, but it does serve to link a variety of otherwise disparate findings together.”
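
The cue-overload principle lends itself to a toy simulation (my construction, not a model proposed in the book): if retrieval with a cue is treated as sampling at random among the items the cue subsumes, the probability of hitting the target within a fixed number of attempts falls steadily as the cue’s ‘fan’ grows:

```python
import random

def p_retrieve_target(fan, attempts=3, trials=100_000, rng=random.Random(1)):
    """Toy model: a cue subsumes `fan` items; each retrieval attempt samples one of
    them at random; success = the target (item 0) is sampled within `attempts` tries."""
    successes = 0
    for _ in range(trials):
        if any(rng.randrange(fan) == 0 for _ in range(attempts)):
            successes += 1
    return successes / trials

for fan in (1, 2, 4, 8, 16):
    print(f"items subsumed by the cue: {fan:2d} -> "
          f"P(recall target) ≈ {p_retrieve_target(fan):.2f}")
```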

Memory, like all other cognitive processes, is inherently constructive. Information from encoding and cues from retrieval, as well as generic information, are all exploited to construct a response to a cue. Work in several areas has long established that people will use whatever information is available to help reconstruct or build up a coherent memory of a story or an event […]. However, although these strategies can lead to successful and accurate remembering in some circumstances, the same processes can lead to distortion or even confabulation in others […]. There are a great many studies demonstrating the constructive and reconstructive nature of memory, and the literature is quite well known. […] it is clear that recall of events is deeply influenced by a tendency to reconstruct them using whatever information is relevant and to repair holes or fill in the gaps that are present in memory with likely substitutes. […] Given that memory is a reconstructive process, it should not be surprising to find that there is a large literature showing that people have difficulty distinguishing between memories of events that happened and memories of events that did not happen […]. In a typical reality monitoring experiment […], subjects are shown pictures of common objects. Every so often, instead of a picture, the subjects are shown the name of an object and are asked to create a mental image of the object. The test involves presenting a list of object names, and the subject is asked to judge whether they saw the item (i.e., judge the memory as “real”) or whether they saw the name of the object and only imagined seeing it (i.e., judge the memory as “imagined”). People are more likely to judge imagined events as real than real events as imagined. The likelihood that a memory will be judged as real rather than imagined depends upon the vividness of the memory in terms of its sensory quality, detail, plausibility, and coherence […]. What this means is that there is not a firm line between memories for real and imagined events: if an imagined event has enough qualitative features of a real event it is likely to be judged as real.”

“One hallmark of reconstructive processes is that in many circumstances they aid in memory retrieval because they rely on regularities in the world. If we know what usually happens in a given circumstance, we can use that information to fill in gaps that may be present in our memory for that episode. This will lead to a facilitation effect in some cases but will lead to errors in cases in which the most probable response is not the correct one. However, if we take this standpoint, we must predict that the errors that are made when using reconstructive processes will not be random; in fact, they will display a bias toward the most likely event. This sort of mechanism has been demonstrated many times in studies of schema-based representations […], and language production errors […] but less so in immediate recall. […] Each time an event is recalled, the memory is slightly different. Because of the interaction between encoding and retrieval, and because of the variations that occur between two different retrieval attempts, the resulting memories will always differ, even if only slightly.”

In this chapter we discuss the idea that a task or a process can be a “pure” measure of memory, without contamination from other hypothetical memory stores or structures, and without contributions from other processes. Our impurity principle states that tasks and processes are not pure, and therefore one cannot separate out the contributions of different memory stores by using tasks thought to tap only one system; one cannot count on subjects using only one process for a particular task […]. Our principle follows from previous arguments articulated by Kolers and Roediger (1984) and Crowder (1993), among others, that because every event recruits slightly different encoding and retrieval processes, there is no such thing as “pure” memory. […] The fundamental issue is the extent to which one can determine the contribution of a particular memory system or structure or process to performance on a particular memory task. There are numerous ways of assessing memory, and many different ways of classifying tasks. […] For example, if you are given a word fragment and asked to complete it with the first word that pops in your head, you are free to try a variety of strategies. […] Very different types of processing can be used by subjects even when given the same type of test or cue. People will use any and all processes to help them answer a question.”

“A free recall test typically provides little environmental support. A list of items is presented, and the subject is asked to recall which items were on the list. […] The experimenter simply says, “Recall the words that were on the list,” […] A typical recognition test provides more environmental support. Although a comparable list of items might have been presented, and although the subject is asked again about memory for an item in context, the subject is provided with a more specific cue, and knows exactly how many items to respond to. Some tests, such as word fragment completion and general knowledge questions, offer more environmental support. These tests provide more targeted cues, and often the cues are unique […] One common processing distinction involves the aspects of the stimulus that are focused on or are salient at encoding and retrieval: Subjects can focus more on an item’s physical appearance (data driven processing) or on an item’s meaning (conceptually driven processing […]). In general, performance on tasks such as free recall that offer little environmental support is better if the rememberer uses conceptual rather than perceptual processing at encoding. Although there is perceptual information available at encoding, there is no perceptual information provided at test so data-driven processes tend not to be appropriate. Typical recognition and cued-recall tests provide more specific cues, and as such, data-driven processing becomes more appropriate, but these tasks still require the subject to discriminate which items were presented in a particular specific context; this is often better accomplished using conceptually driven processing. […] In addition to distinctions between data driven and conceptually driven processing, another common distinction is between an automatic retrieval process, which is usually referred to as familiarity, and a nonautomatic process, usually called recollection […]. Additional distinctions abound. Our point is that very different types of processing can be used by subjects on a particular task, and that tasks can differ from one another on a variety of different dimensions. In short, people can potentially use almost any combination of processes on any particular task.”

Immediate serial recall is basically synonymous with memory span. In one of the first reviews of this topic, Blankenship (1938, p. 2) noted that “memory span refers to the ability of an individual to reproduce immediately, after one presentation, a series of discrete stimuli in their original order.” The primary use of memory span was not so much to measure the capacity of a short-term memory system, but rather as a measure of intellectual abilities […]. Early on, however, it was recognized that memory span, whatever it was, varied as a function of a large number of variables […], and could even be increased substantially by practice […]. Nonetheless, memory span became increasingly seen as a measure of the capacity of a short-term memory system that was distinct from long-term memory. Generally, most individuals can recall about 7 ± 2 items (Miller, 1956) or the number of items that can be pronounced in about 2 s (Baddeley, 1986) without making any mistakes. Does immediate serial recall (or memory span) measure the capacity of short-term (or working) memory? The currently available evidence suggests that it does not. […] The main difficulty in attempting to construct a “pure” measure of immediate memory capacity is that […] the influence of previously acquired knowledge is impossible to avoid. There are numerous contributions of long-term knowledge not only to memory span and immediate serial recall […] but to other short-term tasks as well […] Our impurity principle predicts that when distinctions are made between types of processing (e.g., conceptually driven versus data driven; familiarity versus recollection; automatic versus conceptual; item specific versus relational), each of those individual processes will not be pure measures of memory.”

“Over the past 20 years great strides have been made in noninvasive techniques for measuring brain activity. In particular, PET and fMRI studies have allowed us to obtain an on-line glimpse into the hemodynamic changes that occur in the brain as stimuli are being processed, memorized, manipulated, and recalled. However, many of these studies rely on subtractive logic that explicitly assumes that (a) there are different brain areas (structures) subserving different cognitive processes and (b) we can subtract out background or baseline activity and determine which areas are responsible for performing a particular task (or process) by itself. There have been some serious challenges to these underlying assumptions […]. A basic assumption is that there is some baseline activation that is present all of the time and that the baseline is built upon by adding more activation. Thus, when the baseline is subtracted out, what is left is a relatively pure measure of the brain areas that are active in completing the higher-level task. One assumption of this method is that adding a second component to the task does not affect the simple task. However, this assumption does not always hold true. […] Even if the additive factors logic were correct, these studies often assume that a task is a pure measure of one process or another. […] Again, the point is that humans will utilize whatever resources they can recruit in order to perform a task. Individuals using different retrieval strategies (e.g., visualization, verbalization, lax or strict decision criteria, etc.) show very different patterns of brain activation even when performing the same memory task (Miller & Van Horn, 2007). This makes it extremely dangerous to assume that any task is made up of purely one process. Even though many researchers involved in neuroimaging do not make task purity assumptions, these examples “illustrate the widespread practice in functional neuroimaging of interpreting activations only in terms of the particular cognitive function being investigated (Cabeza et al., 2003, p. 390).” […] We do not mean to suggest that these studies have no value — they clearly do add to our knowledge of how cognitive functioning works — but, instead, would like to urge more caution in the interpretation of localization studies, which are sometimes taken as showing that an activated area is where some unique process takes place.”
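
The worry about subtractive logic can be illustrated with a toy two-condition example (the numbers are mine and purely illustrative): subtraction recovers the activity of the ‘added’ component only if adding it leaves the simpler task’s contribution unchanged – the pure-insertion assumption – and an interaction breaks that bookkeeping:

```python
# Toy activation values (arbitrary units) for one brain region; illustrative numbers only.
baseline = 10.0             # rest / fixation
viewing = 14.0              # passive viewing: baseline + 4 units of perceptual work
viewing_plus_memory = 20.0  # viewing while also memorizing

# Subtractive logic attributes the whole difference to the added component:
print(f"subtraction estimate of 'memory' activity: {viewing_plus_memory - viewing:.1f}")

# Pure insertion assumes the perceptual component is unchanged by adding the memory
# demand. If it is not -- say viewing engages the region by 7 units once the subject
# is also memorizing -- the same measurements imply a different decomposition:
perception_when_memorizing = 7.0  # assumed interaction, not an observed value
memory_component = viewing_plus_memory - baseline - perception_when_memorizing
print(f"'memory' activity if the perceptual component changes: {memory_component:.1f}")
```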

October 6, 2018 Posted by | Biology, Books, Psychology

Circadian Rhythms (II)

Below I have added some more observations from the book, as well as some links of interest.

“Most circadian clocks make use of a sun-based mechanism as the primary synchronizing (entraining) signal to lock the internal day to the astronomical day. For the better part of four billion years, dawn and dusk have been the main zeitgebers that allow entrainment. Circadian clocks are not exactly 24 hours. So to prevent daily patterns of activity and rest from drifting (freerunning) over time, light acts rather like the winder on a mechanical watch. If the clock is a few minutes fast or slow, turning the winder sets the clock back to the correct time. Although light is the critical zeitgeber for much behaviour, and provides the overarching time signal for the circadian system of most organisms, it is important to stress that many, if not all, cells within an organism possess the capacity to generate a circadian rhythm, and that these independent oscillators are regulated by a variety of different signals which, in turn, drive countless outputs […]. Colin Pittendrigh was one of the first to study entrainment, and what he found in Drosophila has been shown to be true across all organisms, including us. For example, if you keep Drosophila, or a mouse or bird, in constant darkness it will freerun. If you then expose the animal to a short pulse of light at different times the shifting (phase shifting) effects on the freerunning rhythm vary. Light pulses given when the clock ‘thinks’ it is daytime (subjective day) will have little effect on the clock. However, light falling during the first half of the subjective night causes the animal to delay the start of its activity the following day, while light exposure during the second half of the subjective night advances activity onset. Pittendrigh called this the ‘phase response curve’ […] Remarkably, the PRC of all organisms looks very similar, with light exposure around dusk and during the first half of the night causing a delay in activity the next day, while light during the second half of the night and around dawn generates an advance. The precise shape of the PRC varies between species. Some have large delays and small advances (typical of nocturnal species) while others have small delays and big advances (typical of diurnal species). Light at dawn and dusk pushes and pulls the freerunning rhythm towards an exactly 24-hour cycle. […] Light can act directly to modify behaviour. In nocturnal rodents such as mice, light encourages these animals to seek shelter, reduce activity, and even sleep, while in diurnal species light promotes alertness and vigilance. So circadian patterns of activity are not only entrained by dawn and dusk but also driven directly by light itself. This direct effect of light on activity has been called ‘masking’, and combines with the predictive action of the circadian system to restrict activity to that period of the light/dark cycle to which the organism has evolved and is optimally adapted.”
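
To make the entrainment logic above a bit more concrete, here is a minimal phase-only sketch (my own illustration, not anything from the book): a clock with an intrinsic period of 24.5 hours drifts a little later each day, and a single light-induced correction at dawn, drawn from a stylized phase response curve, pulls it back towards a stable entrained phase. All parameter values, the shape of the curve, and the function names are illustrative assumptions.

```python
import math

def prc(phase_h):
    """Stylized phase response curve: light-induced shift (hours) as a function
    of the clock's subjective time, with 0 = subjective dawn. Shape only, not
    measured values: little effect during the subjective day, delays in the
    first half of the subjective night, advances in the second half."""
    if phase_h < 12.0:                                           # subjective day
        return 0.0
    if phase_h < 18.0:                                           # early night: delay
        return -1.5 * math.sin(math.pi * (phase_h - 12.0) / 6.0)
    return 1.5 * math.sin(math.pi * (phase_h - 18.0) / 6.0)      # late night: advance

def entrain(intrinsic_period=24.5, days=30):
    """Track the clock's subjective time at each real dawn. A clock slower than
    24 hours falls a little further behind every day; the dawn light pulse,
    acting through the PRC, pulls it back towards a stable entrained phase."""
    drift = 24.0 * (24.0 / intrinsic_period - 1.0)  # subjective hours lost per real day
    phase = 0.0
    phases = []
    for _ in range(days):
        phase = (phase + drift) % 24.0        # freerunning drift over one real day
        phase = (phase + prc(phase)) % 24.0   # single corrective shift at dawn
        phases.append(phase)
    return phases

if __name__ == "__main__":
    for day, p in enumerate(entrain(), 1):
        print(f"day {day:2d}: subjective time at real dawn = {p:5.2f} h")
```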

“[B]irds, reptiles, amphibians, and fish (but not mammals) have ‘extra-ocular’ photoreceptors located within the pineal complex, hypothalamus, and other areas of the brain, and like the invertebrates, eye loss in many cases has little impact upon the ability of these animals to entrain. […] Mammals are strikingly different from all other vertebrates as they possess photoreceptor cells only within their eyes. Eye loss in all groups of mammals […] abolishes the capacity of these animals to entrain their circadian rhythms to the light/dark cycle. But astonishingly, the visual cells of the retina – the rods and cones – are not required for the detection of the dawn/dusk signal. There exists a third class of photoreceptors within the eye […] Studies in the late 1990s by Russell Foster and his colleagues showed that mice lacking all their rod and cone photoreceptors could still regulate their circadian rhythms to light perfectly normally. But when their eyes were covered the ability to entrain was lost […] work on the rodless/coneless mouse, along with [other] studies […], clearly demonstrated that the mammalian retina contains a small population of photosensitive retinal ganglion cells or pRGCs, which comprise approximately 1-2 per cent of all retinal ganglion cells […] Ophthalmologists now appreciate that eye loss deprives us of both vision and a proper sense of time. Furthermore, genetic diseases that result in the loss of the rods and cones and cause visual blindness, often spare the pRGCs. Under these circumstances, individuals who have their eyes but are visually blind, yet possess functional pRGCs, need to be advised to seek out sufficient light to entrain their circadian system. The realization that the eye provides us with both our sense of space and our sense of time has redefined the diagnosis, treatment, and appreciation of human blindness.”

“But where is ‘the’ circadian clock of mammals? […] [Robert] Moore and [Irving] Zucker’s work pinpointed the SCN as the likely neural locus of the light-entrainable circadian pacemaker in mammals […] and a decade later this was confirmed by definitive experiments from Michael Menaker’s laboratory undertaken at the University of Virginia. […] These experiments established the SCN as the ‘master circadian pacemaker’ of mammals. […] There are around 20,000 or so neurons in the mouse SCN, but they are not identical. Some receive light information from the pRGCs and pass this information on to other SCN neurons, while others project to the thalamus and other regions of the brain, and collectively these neurons secrete more than one hundred different neurotransmitters, neuropeptides, cytokines, and growth factors. The SCN itself is composed of several regions or clusters of neurons, which have different jobs. Furthermore, there is considerable variability in the oscillations of the individual cells, ranging from 21.25 to 26.25 hours. Although the individual cells in the SCN have their own clockwork mechanisms with varying periods, the cell autonomous oscillations in neural activity are synchronized at the system level within the SCN, providing a coherent near 24-hour signal to the rest of the mammal. […] SCN neurons exhibit a circadian rhythm of spontaneous action potentials (SAPs), with higher frequency during the daytime than the night, which in turn drives many rhythmic changes by alternating stimulatory and inhibitory inputs to the appropriate target neurons in the brain and neuroendocrine systems. […] The SCN projects directly to thirty-five brain regions, mostly located in the hypothalamus, and particularly those regions of the hypothalamus that regulate hormone release. Indeed, many pituitary hormones, such as cortisol, are under tight circadian control. Furthermore, the SCN regulates the activity of the autonomic nervous system, which in turn places multiple aspects of physiology, including the sensitivity of target tissues to hormonal signals, under circadian control. In addition to these direct neuronal connections, the SCN communicates to the rest of the body using diffusible chemical signals.”

“The SCN is the master clock in mammals but it is not the only clock. There are liver clocks, muscle clocks, pancreas clocks, adipose tissue clocks, and clocks of some sort in every organ and tissue examined to date. While lesioning of the SCN disrupts global behavioural rhythms such as locomotor activity, the disruption of clock function within just the liver or lung leads to circadian disorder that is confined to the target organ. In tissue culture, liver, heart, lung, skeletal muscle, and other organ tissues such as mammary glands express circadian rhythms, but these rhythms dampen and disappear after only a few cycles. This occurs because some individual clock cells lose rhythmicity, but more commonly because the individual cellular clocks become uncoupled from each other. The cells continue to tick, but all at different phases so that an overall 24-hour rhythm within the tissue or organ is lost. The discovery that virtually all cells of the body have clocks was one of the big surprises in circadian rhythms research. […] the SCN, entrained by pRGCs, acts as a pacemaker to coordinate, but not drive, the circadian activity of billions of individual peripheral circadian oscillators throughout the tissues and organs of the body. The signalling pathways used by the SCN to phase-entrain peripheral clocks are still uncertain, but we know that the SCN does not send out trillions of separate signals around the body that target specific cellular clocks. Rather there seems to be a limited number of neuronal and humoral signals which entrain peripheral clocks that in turn time their local physiology and gene expression.”
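
The uncoupling explanation for why cultured tissues lose their rhythm is easy to illustrate with a toy simulation (again my own sketch, not the book's): every simulated cell keeps oscillating at full amplitude with its own period near 24 hours, yet the averaged signal across the population damps away within a few cycles simply because the phases drift apart. The number of cells and the spread of periods are arbitrary assumptions.

```python
import math
import random

def population_rhythm(n_cells=500, period_sd=1.5, days=10, dt=0.5, seed=1):
    """Each 'cell' ticks with full amplitude but with its own period drawn
    around 24 h. With no coupling the phases spread out over time, so the mean
    signal across cells (the tissue-level rhythm) damps towards zero even
    though no individual oscillator has stopped."""
    rng = random.Random(seed)
    periods = [rng.gauss(24.0, period_sd) for _ in range(n_cells)]
    t, out = 0.0, []
    while t <= days * 24.0:
        mean_signal = sum(math.cos(2 * math.pi * t / p) for p in periods) / n_cells
        out.append((t, mean_signal))
        t += dt
    return out

if __name__ == "__main__":
    for t, m in population_rhythm()[::48]:   # one sample per simulated day
        print(f"t = {t:6.1f} h   population-level signal = {m:+.3f}")
```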

“As in Drosophila […], the mouse clockwork also comprises three transcriptional-translational feedback loops with multiple interacting components. […] [T]he generation of a robust circadian rhythm that can be entrained by the environment is achieved via multiple elements, including the rate of transcription, translation, protein complex assembly, phosphorylation, other post-translation modification events, movement into the nucleus, transcriptional inhibition, and protein degradation. […] [A] complex arrangement is needed because from the moment a gene is switched on, transcription and translation usually take two hours at most. As a result, substantial delays must be imposed at different stages to produce a near 24-hour oscillation. […] Although the molecular players may differ from Drosophila and mice, and indeed even between different insects, the underlying principles apply across the spectrum of animal life. […] In fungi, plants, and cyanobacteria the clock genes are all different from each other and different again from the animal clock genes, suggesting that clocks evolved independently in the great evolutionary lineages of life on earth. Despite these differences, all these clocks are based upon a fundamental TTFL.”
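
The point about delays is worth a toy example (my sketch; the real clock involves the multiple interacting loops described above, not a single variable): a generic delayed negative-feedback loop in which a 'protein' represses its own production only after a lag that stands in for transcription, translation, modification, and nuclear entry. Even though the lag itself is only a few hours, the overshoot-and-undershoot dynamics give an oscillation with a period on the order of a day. All parameter values are illustrative assumptions.

```python
from collections import deque

def delayed_feedback(delay_h=6.0, days=6, dt=0.01,
                     production=1.0, degradation=0.15, K=1.0, hill_n=8):
    """Generic delayed negative feedback (not the actual clock genes):
    dX/dt = production / (1 + (X(t - delay)/K)**hill_n) - degradation * X,
    integrated with a simple Euler scheme and a history buffer for the delay."""
    n_hist = int(delay_h / dt)
    history = deque([0.1] * n_hist, maxlen=n_hist)  # X over the past delay_h hours
    x, t, samples = 0.1, 0.0, []
    for _ in range(int(days * 24.0 / dt)):
        x_delayed = history[0]                      # value from delay_h hours ago
        dxdt = production / (1.0 + (x_delayed / K) ** hill_n) - degradation * x
        history.append(x)
        x += dxdt * dt
        t += dt
        samples.append((t, x))
    return samples

if __name__ == "__main__":
    # Hourly values for the last two simulated days show the sustained rhythm.
    for t, x in delayed_feedback()[-2 * 24 * 100::100]:
        print(f"t = {t:6.1f} h   X = {x:.3f}")
```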

“Circadian entrainment is surprisingly slow, taking several days to adjust to an advanced or delayed light/dark cycle. In most mammals, including jet-lagged humans, behavioural shifts are limited to approximately one hour (one time zone) per day. […] Changed levels of PER1 and PER2 act to shift the molecular clockwork, advancing the clock at dawn and delaying the clock at dusk. However, per mRNA and PER protein levels fall rapidly even if the animal remains exposed to light. As a result, the effects of light on the molecular clock are limited and entrainment is a gradual process requiring repeated shifting stimuli over multiple days. This phenomenon explains why we get jet lag: the clock cannot move immediately to a new dawn/dusk cycle because there is a ‘brake’ on the effects of light on the clock. […] The mechanism that provides this molecular brake is the production of SIK1 protein. […] Experiments on mice in which SIK1 has been suppressed show very rapid entrainment to simulated jet-lag.”

“We spend approximately 36 per cent of our entire lives asleep, and while asleep we do not eat, drink, or knowingly pass on our genes. This suggests that this aspect of our 24-hour behaviour provides us with something of huge value. If we are deprived of sleep, the sleep drive becomes so powerful that it can only be satisfied by sleep. […] Almost all life shows a 24-hour pattern of activity and rest, as we live on a planet that revolves once every 24 hours causing profound changes in light, temperature, and food availability. […] Life seems to have made an evolutionary ‘decision’ to be active at a specific part of the day/night cycle, and a species specialized to be active during the day will be far less effective at night. Conversely, nocturnal animals that are beautifully adapted to move around and hunt under dim or no light fail miserably during the day. […] no species can operate with the same effectiveness across the 24-hour light/dark environment. Species are adapted to a particular temporal niche just as they are to a physical niche. Activity at the wrong time often means death. […] Sleep may be the suspension of most physical activity, but a huge amount of essential physiology occurs during this time. Many diverse processes associated with the restoration and rebuilding of metabolic pathways are known to be up-regulated during sleep […] During sleep the body performs a broad range of essential ‘housekeeping’ functions without which performance and health during the active phase deteriorates rapidly. But these housekeeping functions would not be why sleep evolved in the first place. […] Evolution has allocated these key activities to the most appropriate time of day. […] In short, sleep has probably evolved as a species-specific response to a 24-hour world in which light, temperature, and food availability change dramatically. Sleep is a period of physical inactivity when individuals avoid movement within an environment to which they are poorly adapted, while using this time to undertake essential housekeeping functions demanded by their biology.”

“Sleep propensity in humans is closely correlated with the melatonin profile but this may be correlation and not causation. Indeed, individuals who do not produce melatonin (e.g. tetraplegic individuals, people on beta-blockers, or pinealectomized patients) still exhibit circadian sleep/wake rhythms with only very minor detectable changes. Another correlation between melatonin and sleep relates to levels of alertness. When melatonin is suppressed by light at night alertness levels increase, suggesting that melatonin and sleep propensity are directly connected. However, increases in alertness occur before a significant drop in blood melatonin. Furthermore, increased light during the day will also improve alertness when melatonin levels are already low. These findings suggest that melatonin is not a direct mediator of alertness and hence sleepiness. Taking synthetic melatonin or synthetic analogues of melatonin produces a mild sleepiness in about 70 per cent of people, especially when no natural melatonin is being released. The mechanism whereby melatonin produces mild sedation remains unclear.”

Links:

Teleost multiple tissue (tmt) opsin.
Melanopsin.
Suprachiasmatic nucleus.
Neuromedin S.
Food-entrainable circadian oscillators in the brain.
John Harrison. Seymour Benzer. Ronald Konopka. Jeffrey C. Hall. Michael Rosbash. Michael W. Young.
Circadian Oscillators: Around the Transcription-Translation Feedback Loop and on to Output.
Period (gene). Timeless (gene). CLOCK. Cycle (gene). Doubletime (gene). Cryptochrome. Vrille Gene.
Basic helix-loop-helix.
The clockwork orange Drosophila protein functions as both an activator and a repressor of clock gene expression.
RAR-related orphan receptor. RAR-related orphan receptor alpha.
BHLHE41.
The two-process model of sleep regulation: a reappraisal.

September 30, 2018 Posted by | Books, Genetics, Medicine, Molecular biology, Neurology, Ophthalmology

Words

The words included below were mostly words I encountered while reading the books Personal Relationships, Circadian Rhythms, Quick Service, Principles of memory, Feet of Clay, The Reverse of the Medal, and The Letter of Marque.

Camouflet. Dissimulation. Nomological. Circumlocutory. Eclosion. Puissant. Esurient. Hisperic. Ambigram. Scotophilic. Millenarianism. Sonder. Pomology. Oogonium. Vole. Tippler. Autonoetic. Engraphy/engram. Armigerous. Gazunder/guzunder.

Frizzle. Matorral. Sclerophyll. Xerophyte. Teratoma. Shallop. Quartan. Ablative. Prolative. Dispart. Ptarmigan. Starbolins. Idolatrous. Spoom. Cablet. Hostler. Chelonian. Omnium. Toper. Rectitude.

Marthambles. Combe. Holt. Stile. Plover. Andiron. Delf. Boreen. Thief-taker. Patten. Subvention. Hummum. Bustard. Lugger. Vainglory. Penetralia. Limicoline. Astragal. Fillebeg/filibeg. Voluptuous.

Civet. Moil. Impostume. Frowsty. Bob. Snuggery. Legation. Brindle. Epergne. Chough. Shoneen. Pilaff. Phaeton. Gentian. Poldavy. Grebe. Orotund. Panoply. Chiliad. Quiddity.

September 27, 2018 Posted by | Books, Language

Principles of memory (I)

This book was interesting, but more because it tells you a lot about what sort of memory research has taken place over the years than because the authors present a good model of how this stuff works. It’s the sort of book that makes you think.

I found the book challenging to blog, for a variety of reasons, but I’ve tried to add some observations of interest from the first four chapters in the coverage below.

“[I]n over 100 years of scientific research on memory, and nearly 50 years after the so-called cognitive revolution, we have nothing that really constitutes a widely accepted and frequently cited law of memory, and perhaps only one generally accepted principle.5 However, there are a plethora of effects, many of which have extensive literatures and hundreds of published empirical demonstrations. One reason for the lack of general laws and principles of memory might be that none exists. Tulving (1985a, p. 385), for example, has argued that “no profound generalizations can be made about memory as a whole,” because memory comprises many different systems and each system operates according to different principles. One can make “general statements about particular kinds of memory,” but one cannot make statements that would apply to all types of memory. […] Roediger (2008) also argues that no general principles of memory exist, but his reasoning and arguments are quite different. He reintroduces Jenkins’ (1979) tetrahedral model of memory, which views all memory experiments as comprising four factors: encoding conditions, retrieval conditions, subject variables, and events (materials and tasks). Using the tetrahedral model as a starting point, Roediger convincingly demonstrates that all of these variables can affect memory performance in different ways and that such complexity does not easily lend itself to a description using general principles. Because of the complexity of the interactions among these variables, Roediger suggests that “the most fundamental principle of learning and memory, perhaps its only sort of general law, is that in making any generalization about memory one must add that ‘it depends’” (p. 247). […] Where we differ is that we think it possible to produce general principles of memory that take into account these factors. […] The purpose of this monograph is to propose seven principles of human memory that apply to all memory regardless of the type of information, type of processing, hypothetical system supporting memory, or time scale. Although these principles focus on the invariants and empirical regularities of memory, the reader should be forewarned that they are qualitative rather than quantitative, more like regularities in biology than principles of geometry. […] Few, if any, of our principles are novel, and the list is by no means complete. We certainly do not think that there are only seven principles of memory nor, when more principles are proposed, do we think that all seven of our principles will be among the most important.7”

“[T]he two most popular contemporary ways of looking at memory are the multiple systems view and the process (or proceduralist) view.1 Although these views are not always necessarily diametrically opposed […], their respective research programs are focused on different questions and search for different answers. The fundamental difference between structural and processing accounts of memory is whether different rules apply as a function of the way information is acquired, the type of material learned, and the time scale, or whether these can be explained using a single set of principles. […] Proponents of the systems view of memory suggest that memory is divided into multiple systems. Thus, their endeavor is focused on discovering and defining different systems and describing how they work. A “system” within this sort of framework is a structure that is anatomically and evolutionarily distinct from other memory systems and differs in its “methods of acquisition, representation and expression of knowledge” […] Using a variety of techniques, including neuropsychological and statistical methods, advocates of the multiple systems approach […] have identified five major memory systems: procedural memory, the perceptual representation system (PRS), semantic memory, primary or working memory, and episodic memory. […] In general, three criticisms are raised most often: The systems approach (a) has no criteria that produce exactly five different memory systems, (b) relies to a large extent on dissociations, and (c) has great difficulty accounting for the pattern of results observed at both ends of the life span. […] The multiple systems view […] lacks a principled and consistent set of criteria for delineating memory systems. Given the current state of affairs, it is not unthinkable to postulate 5 or 10 or 20 or even more different memory systems […]. Moreover, the specific memory systems that have been identified can be fractionated further, resulting in a situation in which the system is distributed in multiple brain locations, depending on the demands of the task at hand. […] The major strength of the systems view is usually taken to be its ability to account for data from amnesic patients […]. Those individuals seem to have specific deficits in episodic memory (recall and recognition) with very few, if any, deficits in semantic memory, procedural memory, or the PRS. […] [But on the other hand] age-related differences in memory do not follow the pattern predicted by the systems view.”

“From our point of view, asking where memory is “located” in the brain is like asking where running is located in the body. There are certainly parts of the body that are more important (the legs) or less important (the little fingers) in performing the task of running but, in the end, it is an activity that requires complex coordination among a great many body parts and muscle groups. To extend the analogy, looking for differences between memory systems is like looking for differences between running and walking. There certainly are many differences, but the main difference is that running requires more coordination among the different body parts and can be disrupted by small things (such as a corn on the toe) that may not interfere with walking at all. Are we to conclude, then, that running is located in the corn on your toe? […] although there is little doubt that more primitive functions such as low-level sensations can be organized in localized brain regions, it is likely that more complex cognitive functions, such as memory, are more accurately described by a dynamic coordination of distributed interconnected areas […]. This sort of approach implies that memory, per se, does not exist but, instead, “information … resides at the level of the large-scale network” (Bressler & Kelso, 2001, p. 33).”

“The processing view […] emphasizes encoding and retrieval processes instead of the system or location in which the memory might be stored. […] Processes, not structures, are what is fundamental. […] The major criticisms of the processing approaches parallel those that have been leveled at the systems view: (a) number of processes or components (instead of number of systems), (b) testability […], and (c) issues with a special population (amnesia rather than life span development). […] The major weakness of the processing view is the major strength of the systems view: patients diagnosed with amnesic syndrome. […] it is difficult to account for data showing a complete abolishment of episodic memory with no apparent effect on semantic memory, procedural memory, or the PRS without appealing to a separate memory store. […] We suggest that in the absence of a compelling reason to prefer the systems view over the processing view (or vice versa), it would be fruitful to consider memory from a functional perspective. We do not know how many memory systems there are or how to define what a memory system is. We do not know how many processes (or components of processing) there are or how to distinguish them. We do acknowledge that short-term memory and long-term memory seem to differ in some ways, as do episodic memory and semantic memory, but are they really fundamentally different? Both the systems approach and, to a lesser extent, the proceduralist approach emphasize differences. Our approach emphasizes similarities. We suggest that a search for general principles of memory, based on fundamental empirical regularities, can act as a spur to theory development and a reexamination of systems versus process theories of memory.”

“Our first principle states that all memory is cue driven; without a cue, there can be no memory […]. By cue we mean a specific prompt or query, such as “Did you see this word on the previous list?” […] cues can also be nonverbal, such as odors […], emotions […], nonverbal sounds […], and images […], to name only a few. Although in many situations the person is fully aware that the cue is part of a memory test, this need not be the case. […] Computer simulation models of memory acknowledge the importance of cues by building them into the system; indeed, computer simulation models will not work unless there is a cue. In general, some input is provided to these models, and then a response is provided. The so-called global models of memory, SAM, TODAM, and MINERVA2, are all cue driven. […] it is hard to conceive of a computer model of memory that is not cue dependent, simply because the computer requires something to start the retrieval process. […] There is near unanimity in the view that memory is cue driven. The one area in which this view is contested concerns a particular form of memory that is characterized by highly restrictive capacity limitations.”
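
The cue-driven character of these global models is easy to see in code. Below is a toy sketch loosely patterned on Hintzman's MINERVA 2 (the similarity-cubed activation rule follows the published model as I understand it, but the vectors, sizes, and helper names are my own): nothing comes back from the store until a probe, i.e. a cue, is supplied, and the 'echo' that is returned is entirely a function of how that cue resonates with the stored traces. Probing with a studied item yields a much larger echo intensity than probing with a novel item.

```python
import random

def similarity(probe, trace):
    """Normalized dot product over the features that are nonzero in either
    the probe or the trace (roughly MINERVA 2's similarity measure)."""
    n_relevant = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    if n_relevant == 0:
        return 0.0
    return sum(p * t for p, t in zip(probe, trace)) / n_relevant

def echo(probe, memory):
    """All traces are activated in parallel by the cue; activation is
    similarity cubed, so near matches dominate. Echo intensity signals
    familiarity; echo content is the activation-weighted sum of the traces."""
    activations = [similarity(probe, trace) ** 3 for trace in memory]
    intensity = sum(activations)
    content = [sum(a * trace[i] for a, trace in zip(activations, memory))
               for i in range(len(probe))]
    return intensity, content

if __name__ == "__main__":
    rng = random.Random(0)
    # Each trace is a vector of +1/-1/0 features (0 = feature not encoded).
    memory = [[rng.choice([-1, 0, 1]) for _ in range(20)] for _ in range(50)]
    studied_cue = memory[3][:]                       # cue with a studied item
    novel_cue = [rng.choice([-1, 1]) for _ in range(20)]
    print("echo intensity, studied cue:", round(echo(studied_cue, memory)[0], 3))
    print("echo intensity, novel cue:  ", round(echo(novel_cue, memory)[0], 3))
```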

“The most commonly cited principle of memory, according to [our] literature search […], is the encoding specificity principle […] Our version of this is called the encoding-retrieval principle [and] states that memory depends on the relation between the conditions at encoding and the conditions at retrieval. […] An appreciation for the importance of the encoding-retrieval interaction came about as the result of studies that examined the potency of various cues to elicit items from memory. A strong cue is a word that elicits a particular target word most of the time. For example, when most people hear the word bloom, the first word that pops into their head is often flower. A weak cue is a word that only rarely elicits a particular target. […] A reasonable prediction seems to be that strong cues should be better than weak cues for eliciting the correct item. However, this inference is not entirely correct because it fails to take into account the relationship between the encoding and retrieval conditions. […] the effectiveness of even a long-standing strong cue depends crucially on the processes that occurred at study and the cues available at test. This basic idea became the foundation of the transfer-appropriate processing framework. […] Taken literally, all that transfer-appropriate processing requires is that the processing done at encoding be appropriate given the processing that will be required at test; it permits processing that is identical and permits processing that is similar. It also, however, permits processing that is completely different as long as it is appropriate. […] many proponents of this view act as if the name were “transfer similar processing” and express the idea as requiring a “match” or “overlap” between study and test. […] However, just because increasing the match sometimes leads to enhanced retention does not mean that it is the match that is the critical variable. […] one can easily set up situations in which the degree of match is improved and memory retention is worse, or the degree of match is decreased and memory is better, or the degree of match is changed (either increased or decreased) and it has no effect on retention. Match, then, is simply not the critical variable in determining memory performance. […] The retrieval conditions include other possible responses, and these other items can affect performance. The most accurate description, then, is that it is the relation between encoding and retrieval that matters, not the degree of match or similarity. […] As Tulving (1983, p. 239) noted, the dynamic relation between encoding and retrieval conditions prohibits any statements that take the following forms:
1. “Items (events) of class X are easier to remember than items (events) of class Y.”
2. “Encoding operations of class X are more effective than encoding operations of class Y.”
3. “Retrieval cues of class X are more effective than retrieval cues of class Y.”
Absolute statements that do not specify both the encoding and the retrieval conditions are meaningless because an experimenter can easily change some aspect of the encoding or retrieval conditions and greatly change the memory performance.”

“In most areas of memory research, forgetting is seen as due to retrieval failure, often ascribed to some form of interference. There are, however, two areas of memory research that propose that forgetting is due to an intrinsic property of the memory trace, namely, decay. […] The two most common accounts view decay as either a mathematical convenience in a model, in which a parameter t is associated with time and leads to worse performance, or as some loss of information, in which it is unclear exactly what aspect of memory is decaying and what parts remain. In principle, a decay theory of memory could be proposed that is specific and testable, such as a process analogous to radioactive decay, in which it is understood precisely what is lost and what remains. Thus far, no such decay theory exists. Decay is posited as the forgetting mechanism in only two areas of memory research, sensory memory and short-term/working memory. […] One reason that time-based forgetting, such as decay, is often invoked is the common belief that short-term/working memory is immune to interference, especially proactive interference […]. This is simply not so. […] interference effects are readily observed in the short term. […] Decay predicts the same decrease for the same duration of distractor activity. Interference predicts differential effects depending on the presence or absence of interfering items. Numerous studies support the interference predictions and disconfirm predictions made on the basis of a decay view […] You might be tempted to say, yes, well, there are occasions in which the passage of time is either uncorrelated with or even negatively correlated with memory performance, but on average, you do worse with longer retention intervals. However, this confirms that the putative principle — the memorability of an event declines as the length of the storage interval increases — is not correct. […] One can make statements about the effects of absolute time, but only to the extent that one specifies both the conditions at encoding and those at retrieval. […] It is trivially easy to construct an experiment in which memory for an item does not change or even gets better the longer the retention interval. Here, we provide only eight examples, although there are numerous other examples; a more complete review and discussion are offered by Capaldi and Neath (1995) and Bjork (2001).”
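
For contrast, the sort of 'specific and testable' decay account the authors allude to would presumably take the form of the radioactive-decay law (my notation, not theirs):

$$N(t) = N_0\, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda},$$

where N(t) would be something like the number of intact trace features remaining after a retention interval t, and λ would specify exactly how much is lost per unit time. As the authors note, no decay theory of memory has been spelled out at that level of precision.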

September 22, 2018 Posted by | Books, Psychology

Supermassive BH Mergers

This is the first post I’ve published in a while; as mentioned earlier, the blogging hiatus was due to internet connectivity issues related to my move. Those issues should now be solved, and I hope to get back to blogging regularly soon.

Some links related to the lecture’s coverage:

Supermassive black hole.
Binary black hole. Final parsec problem.
LIGO (Laser Interferometer Gravitational-Wave Observatory). Laser Interferometer Space Antenna (LISA).
Dynamical friction.
Science with the space-based interferometer eLISA: Supermassive black hole binaries (Klein et al., 2016).
Off the Beaten Path: A New Approach to Realistically Model The Orbital Decay of Supermassive Black Holes in Galaxy Formation Simulations (Tremmel et al., 2015).
Dancing to ChaNGa: A Self-Consistent Prediction For Close SMBH Pair Formation Timescales Following Galaxy Mergers (Tremmel et al., 2017).
Growth and activity of black holes in galaxy mergers with varying mass ratios (Capelo et al., 2015).
Tidal heating. Tidal stripping.
Nuclear coups: dynamics of black holes in galaxy mergers (Wassenhove et al., 2013).
The birth of a supermassive black hole binary (Pfister et al., 2017).
Massive black holes and gravitational waves (I assume these are the lecturer’s own notes for a similar talk held at another point in time – there’s a lot of overlap between the notes and the stuff covered in the lecture, so if you’re curious you could go have a look. As far as I could see, all figures in the second half of the link, as well as a few of the earlier ones, were also included in this lecture).

September 18, 2018 Posted by | Astronomy, Lectures, Physics, Studies

Brief update

I recently moved, and it’s taking a lot longer than I’d have liked to get a new internet connection set up. I probably won’t blog much, if at all, in the next couple of weeks.

September 5, 2018 Posted by | Personal

A few diabetes papers of interest

i. Islet Long Noncoding RNAs: A Playbook for Discovery and Characterization.

“This review will 1) highlight what is known about lncRNAs in the context of diabetes, 2) summarize the strategies used in lncRNA discovery pipelines, and 3) discuss future directions and the potential impact of studying the role of lncRNAs in diabetes.”

“Decades of mouse research and advances in genome-wide association studies have identified several genetic drivers of monogenic syndromes of β-cell dysfunction, as well as 113 distinct type 2 diabetes (T2D) susceptibility loci (1) and ∼60 loci associated with an increased risk of developing type 1 diabetes (T1D) (2). Interestingly, these studies discovered that most T1D and T2D susceptibility loci fall outside of coding regions, which suggests a role for noncoding elements in the development of disease (3,4). Several studies have demonstrated that many causal variants of diabetes are significantly enriched in regions containing islet enhancers, promoters, and transcription factor binding sites (5,6); however, not all diabetes susceptibility loci can be explained by associations with these regulatory regions. […] Advances in RNA sequencing (RNA-seq) technologies have revealed that mammalian genomes encode tens of thousands of RNA transcripts that have similar features to mRNAs, yet are not translated into proteins (7). […] detailed characterization of many of these transcripts has challenged the idea that the central role for RNA in a cell is to give rise to proteins. Instead, these RNA transcripts make up a class of molecules called noncoding RNAs (ncRNAs) that function either as “housekeeping” ncRNAs, such as transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs), that are expressed ubiquitously and are required for protein synthesis or as “regulatory” ncRNAs that control gene expression. While the functional mechanisms of short regulatory ncRNAs, such as microRNAs (miRNAs), small interfering RNAs (siRNAs), and Piwi-interacting RNAs (piRNAs), have been described in detail (8–10), the most abundant and functionally enigmatic regulatory ncRNAs are called long noncoding RNAs (lncRNAs) that are loosely defined as RNAs larger than 200 nucleotides (nt) that do not encode for protein (11–13). Although using a definition based strictly on size is somewhat arbitrary, this definition is useful both bioinformatically […] and technically […]. While the 200-nt size cutoff has simplified identification of lncRNAs, this rather broad classification means several features of lncRNAs, including abundance, cellular localization, stability, conservation, and function, are inherently heterogeneous (15–17). Although this represents one of the major challenges of lncRNA biology, it also highlights the untapped potential of lncRNAs to provide a novel layer of gene regulation that influences islet physiology and pathophysiology.”

“Although the role of miRNAs in diabetes has been well established (9), analyses of lncRNAs in islets have lagged behind their short ncRNA counterparts. However, several recent studies provide evidence that lncRNAs are crucial components of the islet regulome and may have a role in diabetes (27). […] misexpression of several lncRNAs has been correlated with diabetes complications, such as diabetic nephropathy and retinopathy (29–31). There are also preliminary studies suggesting that circulating lncRNAs, such as Gas5, MIAT1, and SENCR, may represent effective molecular biomarkers of diabetes and diabetes-related complications (32,33). Finally, several recent studies have explored the role of lncRNAs in the peripheral metabolic tissues that contribute to energy homeostasis […]. In addition to their potential as genetic drivers and/or biomarkers of diabetes and diabetes complications, lncRNAs can be exploited for the treatment of diabetes. For example, although tremendous efforts have been dedicated to generating replacement β-cells for individuals with diabetes (35,36), human pluripotent stem cell–based β-cell differentiation protocols remain inefficient, and the end product is still functionally and transcriptionally immature compared with primary human β-cells […]. This is largely due to our incomplete knowledge of in vivo differentiation regulatory pathways, which likely include a role for lncRNAs. […] Inherent characteristics of lncRNAs have also made them attractive candidates for drug targeting, which could be exploited for developing new diabetes therapies.”

“With the advancement of high-throughput sequencing techniques, the list of islet-specific lncRNAs is growing exponentially; however, functional characterization is missing for the majority of these lncRNAs. […] Tens of thousands of lncRNAs have been identified in different cell types and model organisms; however, their functions largely remain unknown. Although the tools for determining lncRNA function are technically restrictive, uncovering novel regulatory mechanisms will have the greatest impact on understanding islet function and identifying novel therapeutics for diabetes. To date, no biochemical assay has been used to directly determine the molecular mechanisms by which islet lncRNAs function, which highlights both the infancy of the field and the difficulty in implementing these techniques. […] Due to the infancy of the lncRNA field, most of the biochemical and genetic tools used to interrogate lncRNA function have only recently been developed or are adapted from techniques used to study protein-coding genes and we are only beginning to appreciate the limits and challenges of borrowing strategies from the protein-coding world.”

“The discovery of lncRNAs as a novel class of tissue-specific regulatory molecules has spawned an exciting new field of biology that will significantly impact our understanding of pancreas physiology and pathophysiology. As the field continues to grow, there is growing appreciation that lncRNAs will provide many of the missing components to existing molecular pathways that regulate islet biology and contribute to diabetes when they become dysfunctional. However, to date, most of the experimental emphasis on lncRNAs has focused on large-scale discovery using genome-wide approaches, and there remains a paucity of functional analysis.”

ii. Diabetes and Trajectories of Estimated Glomerular Filtration Rate: A Prospective Cohort Analysis of the Atherosclerosis Risk in Communities Study.

“Diabetes is among the strongest common risk factors for end-stage renal disease, and in industrialized countries, diabetes contributes to ∼50% of cases (3). Less is known about the pattern of kidney function decline associated with diabetes that precedes end-stage renal disease. Identifying patterns of estimated glomerular filtration rate (eGFR) decline could inform monitoring practices for people at high risk of chronic kidney disease (CKD) progression. A better understanding of when and in whom eGFR decline occurs would be useful for the design of clinical trials because eGFR decline >30% is now often used as a surrogate end point for CKD progression (4). Trajectories among persons with diabetes are of particular interest because of the possibility for early intervention and the prevention of CKD development. However, eGFR trajectories among persons with new diabetes may be complex due to the hypothesized period of hyperfiltration by which GFR increases, followed by progressive, rapid decline (5). Using data from the Atherosclerosis Risk in Communities (ARIC) study, an ongoing prospective community-based cohort of >15,000 participants initiated in 1987 with serial measurements of creatinine over 26 years, our aim was to characterize patterns of eGFR decline associated with diabetes, identify demographic, genetic, and modifiable risk factors within the population with diabetes that were associated with steeper eGFR decline, and assess for evidence of early hyperfiltration.”

“We categorized people into groups of no diabetes, undiagnosed diabetes, and diagnosed diabetes at baseline (visit 1) and compared baseline clinical characteristics using ANOVA for continuous variables and Pearson χ2 tests for categorical variables. […] To estimate individual eGFR slopes over time, we used linear mixed-effects models with random intercepts and random slopes. These models were fit on diabetes status at baseline as a nominal variable to adjust the baseline level of eGFR and included an interaction term between diabetes status at baseline and time to estimate annual decline in eGFR by diabetes categories. Linear mixed models were run unadjusted and adjusted, with the latter model including the following diabetes and kidney disease–related risk factors: age, sex, race–center, BMI, systolic blood pressure, hypertension medication use, HDL, prevalent coronary heart disease, annual family income, education status, and smoking status, as well as each variable interacted with time. Continuous covariates were centered at the analytic population mean. We tested model assumptions and considered different covariance structures, comparing nested models using Akaike information criteria. We identified the unstructured covariance model as the most optimal and conservative approach. From the mixed models, we described the overall mean annual decline by diabetes status at baseline and used the random effects to estimate best linear unbiased predictions to describe the distributions of yearly slopes in eGFR by diabetes status at baseline and displayed them using kernel density plots.”
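
For readers unfamiliar with this modelling setup, the sketch below shows roughly how a random-intercept, random-slope model with a diabetes-by-time interaction can be specified in Python with statsmodels. This is not the authors' code: the synthetic data, the column names, and the trimmed covariate list are invented purely to illustrate the structure of the analysis described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for the cohort: one row per
# participant per visit. All names and numbers here are made up.
rng = np.random.default_rng(0)
rows = []
for pid in range(300):
    diabetes = rng.choice(["none", "undiagnosed", "diagnosed"], p=[0.88, 0.04, 0.08])
    mean_slope = {"none": -1.5, "undiagnosed": -2.0, "diagnosed": -2.9}[diabetes]
    intercept = rng.normal(100, 12)
    slope = mean_slope + rng.normal(0, 0.5)
    for years in (0, 3, 6, 9, 26):
        rows.append({"id": pid, "years": years, "diabetes": diabetes,
                     "egfr": intercept + slope * years + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random intercept and random slope for time within each participant; the
# diabetes-by-time interaction gives each baseline diabetes category its own
# mean annual eGFR slope, roughly as in the unadjusted model described above.
model = smf.mixedlm(
    "egfr ~ C(diabetes, Treatment(reference='none')) * years",
    data=df,
    groups=df["id"],
    re_formula="~years",
)
result = model.fit(reml=True)
print(result.summary())

# Per-participant slope estimates (BLUPs): fixed slope plus the participant's
# estimated random slope, as used for the kernel density plots in the paper.
blups = result.random_effects  # dict: participant id -> random-effect estimates
```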

“Because of substantial variation in annual eGFR slope among people with diagnosed diabetes, we sought to identify risk factors that were associated with faster decline. Among those with diagnosed diabetes, we compared unadjusted and adjusted mean annual decline in eGFR by race–APOL1 risk status (white, black–APOL1 low risk, and black–APOL1 high risk) [here’s a relevant link, US], systolic blood pressure […], smoking status […], prevalent coronary heart disease […], diabetes medication use […], HbA1c […], and 1,5-anhydroglucitol (≥10 and <10 μg/mL) [relevant link, US]. Because some of these variables were only available at visit 2, we required that participants included in this subgroup analysis attend both visits 1 and 2 and not be missing information on APOL1 or the variables assessed at visit 2 to ensure a consistent sample size. In addition to diabetes and kidney disease–related risk factors in the adjusted model, we also included diabetes medication use and HbA1c to account for diabetes severity in these analyses. […] to explore potential hyperfiltration, we used a linear spline model to allow the slope to change for each diabetes category between the first 3 years of follow-up (visit 1 to visit 2) and the subsequent time period (visit 2 to visit 5).”
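
In case the spline setup is unfamiliar, the slope-change model amounts to something like the following (my notation, written for a single diabetes group; the actual analysis interacts these terms with the diabetes categories):

$$\mathrm{eGFR}_{ij} = \beta_0 + b_{0i} + (\beta_1 + b_{1i})\,t_{ij} + \beta_2\,(t_{ij} - 3)_{+} + \varepsilon_{ij}, \qquad (x)_{+} = \max(x, 0),$$

so that β1 is the mean slope during the first three years of follow-up (visit 1 to visit 2) and β1 + β2 the mean slope thereafter; a shallower (or even positive) early slope followed by a steeper later decline would be consistent with the hyperfiltration hypothesis the authors set out to examine.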

“There were 15,517 participants included in the analysis: 13,698 (88%) without diabetes, 634 (4%) with undiagnosed diabetes, and 1,185 (8%) with diagnosed diabetes at baseline. […] At baseline, participants with undiagnosed and diagnosed diabetes were older, more likely to be black or have hypertension and coronary heart disease, and had higher mean BMI and lower mean HDL compared with those without diabetes […]. Income and education levels were also lower among those with undiagnosed and diagnosed diabetes compared with those without diabetes. […] Overall, there was a nearly linear association between eGFR and age over time, regardless of diabetes status […]. The crude mean annual decline in eGFR was slowest among those without diabetes at baseline (decline of −1.6 mL/min/1.73 m2/year [95% CI −1.6 to −1.5]), faster among those with undiagnosed diabetes compared with those without diabetes (decline of −2.1 mL/min/1.73 m2/year [95% CI −2.2 to −2.0][…]), and nearly twice as rapid among those with diagnosed diabetes compared with those without diabetes (decline of −2.9 mL/min/1.73 m2/year [95% CI −3.0 to −2.8][…]). Adjustment for diabetes and kidney disease–related risk factors attenuated the results slightly, but those with undiagnosed and diagnosed diabetes still had statistically significantly steeper declines than those without diabetes (decline among no diabetes −1.4 mL/min/1.73 m2/year [95% CI −1.5 to −1.4] and decline among undiagnosed diabetes −1.8 mL/min/1.73 m2/year [95% CI −2.0 to −1.7], difference vs. no diabetes of −0.4 mL/min/1.73 m2/year [95% CI −0.5 to −0.3; P < 0.001]; decline among diagnosed diabetes −2.5 mL/min/1.73 m2/year [95% CI −2.6 to −2.4], difference vs. no diabetes of −1.1 mL/min/1.73 m2/year [95% CI −1.2 to −1.0; P < 0.001]). […] The decline in eGFR per year varied greatly across individuals, particularly among those with diabetes at baseline […] Among participants with diagnosed diabetes at baseline, those who were black, had systolic blood pressure ≥140 mmHg, used diabetes medications, had an HbA1c ≥7% [≥53 mmol/mol], or had 1,5-anhydroglucitol <10 μg/mL were at risk for steeper annual declines than their counterparts […]. Smoking status and prevalent coronary heart disease were not associated with significantly steeper eGFR decline in unadjusted analyses. Adjustment for risk factors, diabetes medication use, and HbA1c attenuated the differences in decline for all subgroups with the exception of smoking status, leaving black race along with APOL1-susceptible genotype, systolic blood pressure ≥140 mmHg, current smoking, insulin use, and HbA1c ≥9% [≥75 mmol/mol] as the risk factors indicative of steeper decline.”

CONCLUSIONS Diabetes is an important risk factor for kidney function decline. Those with diagnosed diabetes declined almost twice as rapidly as those without diabetes. Among people with diagnosed diabetes, steeper declines were seen in those with modifiable risk factors, including hypertension and glycemic control, suggesting areas for continued targeting in kidney disease prevention. […] Few other community-based studies have evaluated differences in kidney function decline by diabetes status over a long period through mid- and late life. One study of 10,184 Canadians aged ≥66 years with creatinine measured during outpatient visits showed results largely consistent with our findings but with much shorter follow-up (median of 2 years) (19). Other studies of eGFR change in a general population have found smaller declines than our results (20,21). A study conducted in Japanese participants aged 40–79 years found a decline of only −0.4 mL/min/1.73 m2/year over the course of two assessments 10 years apart (compared with our estimate among those without diabetes: −1.6 mL/min/1.73 m2/year). This is particularly interesting, as Japan is known to have a higher prevalence of CKD and end-stage renal disease than the U.S. (20). However, this study evaluated participants over a shorter time frame and required attendance at both assessments, which may have decreased the likelihood of capturing severe cases and resulted in underestimation of decline.”

“The Baltimore Longitudinal Study of Aging also assessed kidney function over time in a general population of 446 men, ranging in age from 22 to 97 years at baseline, each with up to 14 measurements of creatinine clearance assessed between 1958 and 1981 (21). They also found a smaller decline than we did (−0.8 mL/min/year), although this study also had notable differences. Their main analysis excluded participants with hypertension and history of renal disease or urinary tract infection and those treated with diuretics and/or antihypertensive medications. Without those exclusions, their overall estimate was −1.1 mL/min/year, which better reflects a community-based population and our results. […] In our evaluation of risk factors that might explain the variation in decline seen among those with diagnosed diabetes, we observed that black race, systolic blood pressure ≥140 mmHg, insulin use, and HbA1c ≥9% (≥75 mmol/mol) were particularly important. Although the APOL1 high-risk genotype is a known risk factor for eGFR decline, African Americans with low-risk APOL1 status continued to be at higher risk than whites even after adjustment for traditional risk factors, diabetes medication use, and HbA1c.”

“Our results are relevant to the design and conduct of clinical trials. Hard clinical outcomes like end-stage renal disease are relatively rare, and a 30–40% decline in eGFR is now accepted as a surrogate end point for CKD progression (4). We provide data on patient subgroups that may experience accelerated trajectories of kidney function decline, which has implications for estimating sample size and ensuring adequate power in future clinical trials. Our results also suggest that end points of eGFR decline might not be appropriate for patients with new-onset diabetes, in whom declines may actually be slower than among persons without diabetes. Slower eGFR decline among those with undiagnosed diabetes, who are likely early in the course of diabetes, is consistent with the hypothesis of hyperfiltration. Similar to other studies, we found that persons with undiagnosed diabetes had higher GFR at the outset, but this was a transient phenomenon, as they ultimately experienced larger declines in kidney function than those without diabetes over the course of follow-up (23–25). Whether hyperfiltration is a universal aspect of early disease and, if not, whether it portends worse long-term outcomes is uncertain. Existing studies investigating hyperfiltration as a precursor to adverse kidney outcomes are inconsistent (24,26,27) and often confounded by diabetes severity factors like duration (27). We extended this literature by separating undiagnosed and diagnosed diabetes to help address that confounding.”

iii. Saturated Fat Is More Metabolically Harmful for the Human Liver Than Unsaturated Fat or Simple Sugars.

OBJECTIVE Nonalcoholic fatty liver disease (i.e., increased intrahepatic triglyceride [IHTG] content), predisposes to type 2 diabetes and cardiovascular disease. Adipose tissue lipolysis and hepatic de novo lipogenesis (DNL) are the main pathways contributing to IHTG. We hypothesized that dietary macronutrient composition influences the pathways, mediators, and magnitude of weight gain-induced changes in IHTG.

RESEARCH DESIGN AND METHODS We overfed 38 overweight subjects (age 48 ± 2 years, BMI 31 ± 1 kg/m2, liver fat 4.7 ± 0.9%) 1,000 extra kcal/day of saturated (SAT) or unsaturated (UNSAT) fat or simple sugars (CARB) for 3 weeks. We measured IHTG (1H-MRS), pathways contributing to IHTG (lipolysis ([2H5]glycerol) and DNL (2H2O) basally and during euglycemic hyperinsulinemia), insulin resistance, endotoxemia, plasma ceramides, and adipose tissue gene expression at 0 and 3 weeks.

RESULTS Overfeeding SAT increased IHTG more (+55%) than UNSAT (+15%, P < 0.05). CARB increased IHTG (+33%) by stimulating DNL (+98%). SAT significantly increased while UNSAT decreased lipolysis. SAT induced insulin resistance and endotoxemia and significantly increased multiple plasma ceramides. The diets had distinct effects on adipose tissue gene expression.”

CONCLUSIONS NAFLD has been shown to predict type 2 diabetes and cardiovascular disease in multiple studies, even independent of obesity (1), and also to increase the risk of progressive liver disease (17). It is therefore interesting to compare effects of different diets on liver fat content and understand the underlying mechanisms. We examined whether provision of excess calories as saturated (SAT) or unsaturated (UNSAT) fats or simple sugars (CARB) influences the metabolic response to overfeeding in overweight subjects. All overfeeding diets increased IHTGs. The SAT diet induced a greater increase in IHTGs than the UNSAT diet. The composition of the diet altered sources of excess IHTGs. The SAT diet increased lipolysis, whereas the CARB diet stimulated DNL. The SAT but not the other diets increased multiple plasma ceramides, which increase the risk of cardiovascular disease independent of LDL cholesterol (18). […] Consistent with current dietary recommendations (3638), the current study shows that saturated fat is the most harmful dietary constituent regarding IHTG accumulation.”

iv. Primum Non Nocere: Refocusing Our Attention on Severe Hypoglycemia Prevention.

“Severe hypoglycemia, defined as low blood glucose requiring assistance for recovery, is arguably the most dangerous complication of type 1 diabetes as it can result in permanent cognitive impairment, seizure, coma, accidents, and death (1,2). Since the Diabetes Control and Complications Trial (DCCT) demonstrated that intensive intervention to normalize glucose prevents long-term complications but at the price of a threefold increase in the rate of severe hypoglycemia (3), hypoglycemia has been recognized as the major limitation to achieving tight glycemic control. Severe hypoglycemia remains prevalent among adults with type 1 diabetes, ranging from ∼1.4% per year in the DCCT/EDIC (Epidemiology of Diabetes Interventions and Complications) follow-up cohort (4) to ∼8% in the T1D Exchange clinic registry (5).

One of the greatest risk factors for severe hypoglycemia is impaired awareness of hypoglycemia (6), which increases risk up to sixfold (7,8). Hypoglycemia unawareness results from deficient counterregulation (9), where falling glucose fails to activate the autonomic nervous system to produce neuroglycopenic symptoms that normally help patients identify and respond to episodes (i.e., sweating, palpitations, hunger) (2). An estimated 20–25% of adults with type 1 diabetes have impaired hypoglycemia awareness (8), which increases to more than 50% after 25 years of disease duration (10).

Screening for hypoglycemia unawareness to identify patients at increased risk of severe hypoglycemic events should be part of routine diabetes care. Self-identified impairment in awareness tends to agree with clinical evaluation (11). Therefore, hypoglycemia unawareness can be easily and effectively screened […] Interventions for hypoglycemia unawareness include a range of behavioral and medical options. Avoiding hypoglycemia for at least several weeks may partially reverse hypoglycemia unawareness and reduce risk of future episodes (1). Therefore, patients with hypoglycemia and unawareness may be advised to raise their glycemic and HbA1c targets (1,2). Diabetes technology can play a role, including continuous subcutaneous insulin infusion (CSII) to optimize insulin delivery, continuous glucose monitoring (CGM) to give technological awareness in the absence of symptoms (14), or the combination of the two […] Aside from medical management, structured or hypoglycemia-specific education programs that aim to prevent hypoglycemia are recommended for all patients with severe hypoglycemia or hypoglycemia unawareness (14). In randomized trials, psychoeducational programs that incorporate increased education, identification of personal risk factors, and behavior change support have improved hypoglycemia unawareness and reduced the incidence of both nonsevere and severe hypoglycemia over short periods of follow-up (17,18) and extending up to 1 year (19).”

“Given that the presence of hypoglycemia unawareness increases the risk of severe hypoglycemia, which is the strongest predictor of a future episode (2,4), the implication that intervention can break the life-threatening and traumatizing cycle of hypoglycemia unawareness and severe hypoglycemia cannot be overstated. […] new evidence of durability of effect across treatment regimen without increasing the risk for long-term complications creates an imperative for action. In combination with existing screening tools and a body of literature investigating novel interventions for hypoglycemia unawareness, these results make the approach of screening, recognition, and intervention very compelling as not only a best practice but something that should be incorporated in universal guidelines on diabetes care, particularly for individuals with type 1 diabetes […] Hyperglycemia is […] only part of the puzzle in diabetes management. Long-term complications are decreasing across the population with improved interventions and their implementation (24). […] it is essential to shift our historical obsession with hyperglycemia and its long-term complications to equally emphasize the disabling, distressing, and potentially fatal near-term complication of our treatments, namely severe hypoglycemia. […] The health care providers’ first dictum is primum non nocere — above all, do no harm. ADA must refocus our attention on severe hypoglycemia as an iatrogenic and preventable complication of our interventions.”

v. Anti‐vascular endothelial growth factor combined with intravitreal steroids for diabetic macular oedema.

“Background

The combination of steroid and anti‐vascular endothelial growth factor (VEGF) intravitreal therapeutic agents could potentially have synergistic effects for treating diabetic macular oedema (DMO). On the one hand, if combined treatment is more effective than monotherapy, there would be significant implications for improving patient outcomes. Conversely, if there is no added benefit of combination therapy, then people could be potentially exposed to unnecessary local or systemic side effects.

Objectives

To assess the effects of intravitreal agents that block vascular endothelial growth factor activity (anti‐VEGF agents) plus intravitreal steroids versus monotherapy with macular laser, intravitreal steroids or intravitreal anti‐VEGF agents for managing DMO.”

“There were eight RCTs (703 participants, 817 eyes) that met our inclusion criteria with only three studies reporting outcomes at one year. The studies took place in Iran (3), USA (2), Brazil (1), Czech Republic (1) and South Korea (1). […] When comparing anti‐VEGF/steroid with anti‐VEGF monotherapy as primary therapy for DMO, we found no meaningful clinical difference in change in BCVA [best corrected visual acuity] […] or change in CMT [central macular thickness] […] at one year. […] There was very low‐certainty evidence on intraocular inflammation from 8 studies, with one event in the anti‐VEGF/steroid group (313 eyes) and two events in the anti‐VEGF group (322 eyes). There was a greater risk of raised IOP (Peto odds ratio (OR) 8.13, 95% CI 4.67 to 14.16; 635 eyes; 8 RCTs; moderate‐certainty evidence) and development of cataract (Peto OR 7.49, 95% CI 2.87 to 19.60; 635 eyes; 8 RCTs; moderate‐certainty evidence) in eyes receiving anti‐VEGF/steroid compared with anti‐VEGF monotherapy. There was low‐certainty evidence from one study of an increased risk of systemic adverse events in the anti‐VEGF/steroid group compared with the anti‐VEGF alone group (Peto OR 1.32, 95% CI 0.61 to 2.86; 103 eyes).”

“One study compared anti‐VEGF/steroid versus macular laser therapy. At one year investigators did not report a meaningful difference between the groups in change in BCVA […] or change in CMT […]. There was very low‐certainty evidence suggesting an increased risk of cataract in the anti‐VEGF/steroid group compared with the macular laser group (Peto OR 4.58, 95% CI 0.99 to 21.10; 100 eyes) and an increased risk of elevated IOP in the anti‐VEGF/steroid group compared with the macular laser group (Peto OR 9.49, 95% CI 2.86 to 31.51; 100 eyes).”
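
The harms estimates above are Peto odds ratios, the fixed-effect method Cochrane reviews commonly use for pooling rare events across trials: each trial contributes an observed-minus-expected event count and a hypergeometric variance from its 2×2 table. Below is a minimal sketch of that calculation in Python; the event counts are invented for illustration and are not the review's data.

```python
from math import exp, sqrt

def peto_or(studies):
    """Pooled Peto odds ratio with a 95% CI.

    `studies` is a list of per-trial 2x2 tables:
    (events_treated, n_treated, events_control, n_control).
    """
    sum_o_minus_e = 0.0
    sum_v = 0.0
    for a, n1, c, n2 in studies:
        n = n1 + n2                     # total eyes in the trial
        m = a + c                       # total events in the trial
        e = n1 * m / n                  # expected events in the treated arm under the null
        v = n1 * n2 * m * (n - m) / (n ** 2 * (n - 1))  # hypergeometric variance
        sum_o_minus_e += a - e
        sum_v += v
    log_or = sum_o_minus_e / sum_v
    se = 1 / sqrt(sum_v)
    return exp(log_or), (exp(log_or - 1.96 * se), exp(log_or + 1.96 * se))

# Invented counts, NOT the review's data: (events, eyes) for raised IOP in the
# combination arm versus the anti-VEGF-only arm of three hypothetical trials.
trials = [(10, 110, 1, 112), (7, 95, 1, 98), (12, 108, 2, 112)]
print(peto_or(trials))
```

The method has a well-known caveat (it can be biased when arms are very unbalanced or effects are large); the point here is only to show where numbers like "Peto OR 8.13" come from.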

“Authors’ conclusions

Combination of intravitreal anti‐VEGF plus intravitreal steroids does not appear to offer additional visual benefit compared with monotherapy for DMO; at present the evidence for this is of low‐certainty. There was an increased rate of cataract development and raised intraocular pressure in eyes treated with anti‐VEGF plus steroid versus anti‐VEGF alone. Patients were exposed to potential side effects of both these agents without reported additional benefit.”

vi. Association between diabetic foot ulcer and diabetic retinopathy.

“More than 25 million people in the United States are estimated to have diabetes mellitus (DM), and 15–25% will develop a diabetic foot ulcer (DFU) during their lifetime [1]. DFU is one of the most serious and disabling complications of DM, resulting in significantly elevated morbidity and mortality. Vascular insufficiency and associated neuropathy are important predisposing factors for DFU, and DFU is the most common cause of non-traumatic foot amputation worldwide. Up to 70% of all lower leg amputations are performed on patients with DM, and up to 85% of all amputations are preceded by a DFU [2, 3]. Every year, approximately 2–3% of all diabetic patients develop a foot ulcer, and many require prolonged hospitalization for the treatment of ensuing complications such as infection and gangrene [4, 5].

Meanwhile, a number of studies have noted that diabetic retinopathy (DR) is associated with diabetic neuropathy and microvascular complications [6–10]. Despite the magnitude of the impact of DFUs and their consequences, little research has been performed to investigate the characteristics of patients with a DFU and DR. […] the aim of this study was to investigate the prevalence of DR in patients with a DFU and to elucidate the potential association between DR and DFUs.”

“A retrospective review was conducted on DFU patients who underwent ophthalmic and vascular examinations within 6 months; 100 type 2 diabetic patients with DFU were included. The medical records of 2496 type 2 diabetic patients without DFU served as control data. DR prevalence and severity were assessed in DFU patients. DFU patients were compared with the control group regarding each clinical variable. Additionally, DFU patients were divided into two groups according to DR severity and compared. […] Out of 100 DFU patients, 90 patients (90%) had DR and 55 (55%) had proliferative DR (PDR). There was no significant association between DR and DFU severities (R = 0.034, p = 0.734). A multivariable analysis comparing type 2 diabetic patients with and without DFUs showed that the presence of DR [OR, 226.12; 95% confidence interval (CI), 58.07–880.49; p < 0.001] and proliferative DR [OR, 306.27; 95% CI, 64.35–1457.80; p < 0.001], higher HbA1c (%, OR, 1.97, 95% CI, 1.46–2.67; p < 0.001), higher serum creatinine (mg/dL, OR, 1.62, 95% CI, 1.06–2.50; p = 0.027), older age (years, OR, 1.12; 95% CI, 1.06–1.17; p < 0.001), higher pulse pressure (mmHg, OR, 1.03; 95% CI, 1.00–1.06; p = 0.025), lower cholesterol (mg/dL, OR, 0.94; 95% CI, 0.92–0.97; p < 0.001), lower BMI (kg/m2, OR, 0.87, 95% CI, 0.75–1.00; p = 0.044) and lower hematocrit (%, OR, 0.80, 95% CI, 0.74–0.87; p < 0.001) were associated with DFUs. In a subgroup analysis of DFU patients, the PDR group had a longer duration of diabetes mellitus, higher serum BUN, and higher serum creatinine than the non-PDR group. In the multivariable analysis, only higher serum creatinine was associated with PDR in DFU patients (OR, 1.37; 95% CI, 1.05–1.78; p = 0.021).

Conclusions

Diabetic retinopathy is prevalent in patients with DFU and about half of DFU patients had PDR. No significant association was found in terms of the severity of these two diabetic complications. To prevent blindness, patients with DFU, and especially those with high serum creatinine, should undergo retinal examinations for timely PDR diagnosis and management.”
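
The adjusted odds ratios quoted above presumably come from a multivariable logistic regression (the paper reports an OR per unit increase of each covariate), where each OR is exp(coefficient) and its 95% CI is exp(coefficient ± 1.96 × SE). Here is a minimal sketch of the mechanics using statsmodels on synthetic data; the column names are mine and only mirror a few of the covariates mentioned, so the numbers it prints mean nothing.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Synthetic data for illustration only (the actual study compared 100 DFU patients
# with 2496 controls); column names are hypothetical.
df = pd.DataFrame({
    "dr_present": rng.integers(0, 2, n),   # diabetic retinopathy yes/no
    "hba1c": rng.normal(8, 1.5, n),        # %
    "age": rng.normal(60, 10, n),          # years
})
# Outcome loosely tied to the predictors, again purely synthetic.
logit_p = -8 + 2.5 * df["dr_present"] + 0.5 * df["hba1c"] + 0.02 * df["age"]
df["dfu"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["dr_present", "hba1c", "age"]])
fit = sm.Logit(df["dfu"], X).fit(disp=0)

ors = np.exp(fit.params)        # adjusted OR per unit increase of each covariate
ci = np.exp(fit.conf_int())     # 95% confidence intervals
print(pd.concat([ors, ci], axis=1).round(2))
```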

August 29, 2018 Posted by | Diabetes, Epidemiology, Genetics, Medicine, Molecular biology, Nephrology, Ophthalmology, Statistics, Studies | Leave a comment

Nephrology Board Review

Some links related to the lecture’s coverage:

Diabetic nephropathy.
Henoch–Schönlein purpura.
Leukocytoclastic Vasculitis.
Glomerulonephritis. Rapidly progressive glomerulonephritis.
Nephrosis.
Analgesic nephropathy.
Azotemia.
Allergic Interstitial Nephritis: Clinical Features and Pathogenesis.
Nonsteroidal anti-inflammatory drugs: effects on kidney function (Whelton & Hamilton, J Clin Pharmacol. 1991 Jul;31(7):588-98).
Goodpasture syndrome.
Creatinine. Limitations of serum creatinine as a marker of renal function.
Hyperkalemia.
U wave.
Nephrolithiasis. Calcium oxalate.
Calcium gluconate.
Bicarbonate.
Effect of various therapeutic approaches on plasma potassium and major regulating factors in terminal renal failure (Blumberg et al., 1988).
Effect of prolonged bicarbonate administration on plasma potassium in terminal renal failure (Blumberg et al., 1992).
Renal tubular acidosis.
Urine anion gap.
Metabolic acidosis.
Contrast-induced nephropathy.
Rhabdomyolysis.
Lipiduria. Urinary cast.
Membranous glomerulonephritis.
Postinfectious glomerulonephritis.

August 28, 2018 Posted by | Cardiology, Chemistry, Diabetes, Lectures, Medicine, Nephrology, Pharmacology, Studies | Leave a comment

Circadian Rhythms (I)

“Circadian rhythms are found in nearly every living thing on earth. They help organisms time their daily and seasonal activities so that they are synchronized to the external world and the predictable changes in the environment. These biological clocks provide a cross-cutting theme in biology and they are incredibly important. They influence everything, from the way growing sunflowers track the sun from east to west, to the migration timing of monarch butterflies, to the morning peaks in cardiac arrest in humans. […] Years of work underlie most scientific discoveries. Explaining these discoveries in a way that can be understood is not always easy. We have tried to keep the general reader in mind but in places perseverance on the part of the reader may be required. In the end we were guided by one of our reviewers, who said: ‘If you want to understand calculus you have to show the equations.’”

The above quote is from the book’s foreword. I really liked this book and I was close to giving it five stars on goodreads. Below I have added some observations and links related to the first few chapters of the book’s coverage (as noted in my review on goodreads the second half of the book is somewhat technical, and I’ve not yet decided if I’ll be blogging that part of the book in much detail, if at all).

“There have been over a trillion dawns and dusks since life began some 3.8 billion years ago. […] This predictable daily solar cycle results in regular and profound changes in environmental light, temperature, and food availability as day follows night. Almost all life on earth, including humans, employs an internal biological timer to anticipate these daily changes. The possession of some form of clock permits organisms to optimize physiology and behaviour in advance of the varied demands of the day/night cycle. Organisms effectively ‘know’ the time of day. Such internally generated daily rhythms are called ‘circadian rhythms’ […] Circadian rhythms are embedded within the genomes of just about every plant, animal, fungus, algae, and even cyanobacteria […] Organisms that use circadian rhythms to anticipate the rotation of the earth are thought to have a major advantage over both their competitors and predators. For example, it takes about 20–30 minutes for the eyes of fish living among coral reefs to switch vision from the night to daytime state. A fish whose eyes are prepared in advance for the coming dawn can exploit the new environment immediately. The alternative would be to wait for the visual system to adapt and miss out on valuable activity time, or emerge into a world where it would be more difficult to avoid predators or catch prey until the eyes have adapted. Efficient use of time to maximize survival almost certainly provides a large selective advantage, and consequently all organisms seem to be led by such anticipation. A circadian clock also stops everything happening within an organism at the same time, ensuring that biological processes occur in the appropriate sequence or ‘temporal framework’. For cells to function properly they need the right materials in the right place at the right time. Thousands of genes have to be switched on and off in order and in harmony. […] All of these processes, and many others, take energy and all have to be timed to best effect by the millisecond, second, minute, day, and time of year. Without this internal temporal compartmentalization and its synchronization to the external environment our biology would be in chaos. […] However, to be biologically useful, these rhythms must be synchronized or entrained to the external environment, predominantly by the patterns of light produced by the earth’s rotation, but also by other rhythmic changes within the environment such as temperature, food availability, rainfall, and even predation. These entraining signals, or time-givers, are known as zeitgebers. The key point is that circadian rhythms are not driven by an external cycle but are generated internally, and then entrained so that they are synchronized to the external cycle.”
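
That last point, that the rhythm is generated internally and merely synchronized by zeitgebers, is easy to illustrate with a toy model (mine, not the book's): a clock with an intrinsic period of 24.7 hours drifts steadily later each day when it gets no timing cue, whereas a small daily light-driven phase correction entrains it to the 24-hour day at a stable offset.

```python
# Toy illustration (not from the book) of freerunning versus entrained rhythms.
# The intrinsic period and the coupling strength below are hypothetical.
TAU = 24.7     # intrinsic circadian period (hours)
K = 0.4        # strength of the daily light-induced phase correction

def simulate(days, zeitgeber=True):
    onset = 0.0                     # activity onset relative to dawn (hours)
    onsets = []
    for _ in range(days):
        onset += TAU - 24.0         # drift from the mismatch with the solar day
        if zeitgeber:
            onset -= K * onset      # light pulls the onset back towards dawn
        onsets.append(round(onset, 2))
    return onsets

print("constant darkness:", simulate(10, zeitgeber=False))   # steady drift of 0.7 h/day
print("light/dark cycle: ", simulate(10, zeitgeber=True))    # settles near +1.05 h
```

The stable offset the entrained series settles at corresponds to the phase angle of entrainment mentioned in the next quote.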

“It is worth emphasizing that the concept of an internal clock, as developed by Richter and Bünning, has been enormously powerful in furthering our understanding of biological processes in general, providing a link between our physiological understanding of homeostatic mechanisms, which try to maintain a constant internal environment despite unpredictable fluctuations in the external environment […], versus the circadian system which enables organisms to anticipate periodic changes in the external environment. The circadian system provides a predictive 24-hour baseline in physiological parameters, which is then either defended or temporarily overridden by homeostatic mechanisms that accommodate an acute environmental challenge. […] Zeitgebers and the entrainment pathway synchronize the internal day to the astronomical day, usually via the light/dark cycle, and multiple output rhythms in physiology and behaviour allow appropriately timed activity. The multitude of clocks within a multicellular organism can all potentially tick with a different phase angle […], but usually they are synchronized to each other and by a central pacemaker which is in turn entrained to the external world via appropriate zeitgebers. […] Most biological reactions vary greatly with temperature and show a Q10 temperature coefficient of about 2 […]. This means that the biological process or reaction rate doubles as a consequence of increasing the temperature by 10°C up to a maximum temperature at which the biological reaction stops. […] a 10°C temperature increase doubles muscle performance. By contrast, circadian rhythms exhibit a Q10 close to 1 […] Clocks without temperature compensation are useless. […] Although we know that circadian clocks show temperature compensation, and that this phenomenon is a conserved feature across all circadian rhythms, we have little idea how this is achieved.”
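
For reference, the Q10 of a process is computed from its rates R1 and R2 at temperatures T1 and T2 as Q10 = (R2/R1)^(10/(T2−T1)). A quick worked example (the rates and periods are invented) contrasting an ordinary biochemical reaction with a temperature-compensated clock:

```python
# Q10 temperature coefficient: Q10 = (R2/R1) ** (10 / (T2 - T1)).
# The rates and periods below are invented for illustration.
def q10(rate1, temp1, rate2, temp2):
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

# A typical biochemical reaction: rate doubles between 20°C and 30°C -> Q10 = 2.
print(q10(1.0, 20, 2.0, 30))

# A temperature-compensated circadian clock: the freerunning period barely moves,
# say 24.4 h at 20°C versus 24.0 h at 30°C, so using 1/period as the "rate" gives Q10 ≈ 1.02.
print(q10(1 / 24.4, 20, 1 / 24.0, 30))
```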

“The systematic study of circadian rhythms only really started in the 1950s, and the pioneering studies of Colin Pittendrigh brought coherence to this emerging new discipline. […] From [a] mass of emerging data, Pittendrigh had key insights and defined the essential properties of circadian rhythms across all life. Namely that: all circadian rhythms are endogenous and show near 24-hour rhythms in a biological process (biochemistry, physiology, or behaviour); they persist under constant conditions for several cycles; they are entrained to the astronomical day via synchronizing zeitgebers; and they show temperature compensation such that the period of the oscillation does not alter appreciably with changes in environmental temperature. Much of the research since the 1950s has been the translation of these formalisms into biological structures and processes, addressing such questions as: What is the clock and where is it located within the intracellular processes of the cell? How can a set of biochemical reactions produce a regular self-sustaining rhythm that persists under constant conditions and has a period of about 24 hours? How is this internal oscillation synchronized by zeitgebers such as light to the astronomical day? Why is the clock not altered by temperature, speeding up when the environment gets hotter and slowing down in the cold? How is the information of the near 24-hour rhythm communicated to the rest of the organism?”

“There have been hundreds of studies showing that a broad range of activities, both physical and cognitive, vary across the 24-hour day: tooth pain is lowest in the morning; proofreading is best performed in the evening; labour pains usually begin at night and most natural births occur in the early morning hours. The accuracy of short and long badminton serves is higher in the afternoon than in the morning and evening. Accuracy of first serves in tennis is better in the morning and afternoon than in the evening, although speed is higher in the evening than in the morning. Swimming velocity over 50 metres is higher in the evening than in the morning and afternoon. […] The majority of studies report that performance increases from morning to afternoon or evening. […] Typical ‘optimal’ times of day for physical or cognitive activity are gathered routinely from population studies […]. However, there is considerable individual variation. Peak performance will depend upon age, chronotype, time zone, and for behavioural tasks how many hours the participant has been awake when conducting the task, and even the nature of the task itself. As a general rule, the circadian modulation of cognitive functioning results in an improved performance over the day for younger adults, while in older subjects it deteriorates. […] On average the circadian rhythms of an individual in their late teens will be delayed by around two hours compared with an individual in their fifties. As a result the average teenager experiences considerable social jet lag, and asking a teenager to get up at 07.00 in the morning is the equivalent of asking a 50-year-old to get up at 05.00 in the morning.”

“Day versus night variations in blood pressure and heart rate are among the best-known circadian rhythms of physiology. In humans, there is a 24-hour variation in blood pressure with a sharp rise before awakening […]. Many cardiovascular events, such as sudden cardiac death, myocardial infarction, and stroke, display diurnal variations with an increased incidence between 06.00 and 12.00 in the morning. Both atrial and ventricular arrhythmias appear to exhibit circadian patterning as well, with a higher frequency during the day than at night. […] Myocardial infarction (MI) is two to three times more frequent in the morning than at night. In the early morning, the increased systolic blood pressure and heart rate results in an increased energy and oxygen demand by the heart, while the vascular tone of the coronary artery rises in the morning, resulting in a decreased coronary blood flow and oxygen supply. This mismatch between supply and demand underpins the high frequency of onset of MI. Plaque blockages are more likely to occur in the morning as platelet surface activation markers have a circadian pattern producing a peak of thrombus formation and platelet aggregation. The resulting hypercoagulability partially underlies the morning onset of MI.”

“A critical area where time of day matters to the individual is the optimum time to take medication, a branch of medicine that has been termed ‘chronotherapy’. Statins are a family of cholesterol-lowering drugs which inhibit HMG-CoA reductase (HMGCR) […] HMGCR is under circadian control and is highest at night. Hence those statins with a short half-life, such as simvastatin and lovastatin, are most effective when taken before bedtime. In another clinical domain entirely, recent studies have shown that anti-flu vaccinations given in the morning provoke a stronger immune response than those given in the afternoon. The idea of using chronotherapy to improve the efficacy of anti-cancer drugs has been around for the best part of 30 years. […] In experimental models more than thirty anti-cancer drugs have been found to vary in toxicity and efficacy by as much as 50 per cent as a function of time of administration. Although Lévi and others have shown the advantages to treating individual patients by different timing regimes, few hospitals have taken it up. One reason is that the best time to apply many of these treatments is late in the day or during the night, precisely when most hospitals lack the infrastructure and personnel to deliver such treatments.”
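
The half-life point is just first-order elimination arithmetic: the fraction of a dose remaining after t hours is 0.5^(t/half-life), so a short half-life statin taken in the morning is essentially gone by the nocturnal HMGCR peak, whereas a bedtime dose covers it. A small sketch with rough, illustrative half-lives of about 2 hours for a short-acting statin and about 14 hours for a long-acting one (these figures are mine, not the book's):

```python
# First-order elimination: fraction of the dose remaining after `hours` hours.
# The half-lives used below are rough, illustrative values, not from the text.
def fraction_remaining(hours, half_life):
    return 0.5 ** (hours / half_life)

for name, t_half in [("short-acting statin (~2 h)", 2), ("long-acting statin (~14 h)", 14)]:
    morning_dose = fraction_remaining(16, t_half)   # taken at 08.00, 16 h until midnight
    bedtime_dose = fraction_remaining(2, t_half)    # taken at 22.00, 2 h until midnight
    print(f"{name}: {morning_dose:.0%} (morning dose) vs {bedtime_dose:.0%} (bedtime dose) left at midnight")
```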

“Flying across multiple time zones and shift work has significant economic benefits, but the costs in terms of ill health are only now becoming clear. Sleep and circadian rhythm disruption (SCRD) is almost always associated with poor health. […] The impact of jet lag has long been known by elite athletes […] even when superbly fit individuals fly across time zones there is a very prolonged disturbance of circadian-driven rhythmic physiology. […] Horses also suffer from jet lag. […] Even bees can get jet lag. […] The misalignments that occur as a result of the occasional transmeridian flight are transient. Shift working represents a chronic misalignment. […] Nurses are one of the best-studied groups of night shift workers. Years of shift work in these individuals has been associated with a broad range of health problems including type II diabetes, gastrointestinal disorders, and even breast and colorectal cancers. Cancer risk increases with the number of years of shift work, the frequency of rotating work schedules, and the number of hours per week working at night [For people who are interested to know more about this, I previously covered a text devoted exclusively to these topics here and here.]. The correlations are so strong that shift work is now officially classified as ‘probably carcinogenic [Group 2A]’ by the World Health Organization. […] the partners and families of night shift workers need to be aware that mood swings, loss of empathy, and irritability are common features of working at night.”

“There are some seventy sleep disorders recognized by the medical community, of which four have been labelled as ‘circadian rhythm sleep disorders’ […] (1) Advanced sleep phase disorder (ASPD) […] is characterized by difficulty staying awake in the evening and difficulty staying asleep in the morning. Typically individuals go to bed and rise about three or more hours earlier than the societal norm. […] (2) Delayed sleep phase disorder (DSPD) is a far more frequent condition and is characterized by a 3-hour delay or more in sleep onset and offset and is a sleep pattern often found in some adolescents and young adults. […] ASPD and DSPD can be considered as pathological extremes of morning or evening preferences […] (3) Freerunning or non-24-hour sleep/wake rhythms occur in blind individuals who have either had their eyes completely removed or who have no neural connection from the retina to the brain. These people are not only visually blind but are also circadian blind. Because they have no means of detecting the synchronizing light signals they cannot reset their circadian rhythms, which freerun with a period of about 24 hours and 10 minutes. So, after six days, internal time is on average 1 hour behind environmental time. (4) Irregular sleep timing has been observed in individuals who lack a circadian clock as a result of a tumour in their anterior hypothalamus […]. Irregular sleep timing is [also] commonly found in older people suffering from dementia. It is an extremely important condition because one of the major factors in caring for those with dementia is the exhaustion of the carers which is often a consequence of the poor sleep patterns of those for whom they are caring. Various protocols have been attempted in nursing homes using increased light in the day areas and darkness in the bedrooms to try and consolidate sleep. Such approaches have been very successful in some individuals […] Although insomnia is the commonly used term to describe sleep disruption, technically insomnia is not a ‘circadian rhythm sleep disorder’ but rather a general term used to describe irregular or disrupted sleep. […] Insomnia is described as a ‘psychophysiological’ condition, in which mental and behavioural factors play predisposing, precipitating, and perpetuating roles. The factors include anxiety about sleep, maladaptive sleep habits, and the possibility of an underlying vulnerability in the sleep-regulating mechanism. […] Even normal ‘healthy ageing’ is associated with both circadian rhythm sleep disorders and insomnia. Both the generation and regulation of circadian rhythms have been shown to become less robust with age, with blunted amplitudes and abnormal phasing of key physiological processes such as core body temperature, metabolic processes, and hormone release. Part of the explanation may relate to a reduced light signal to the clock […]. In the elderly, the photoreceptors of the eye are often exposed to less light because of the development of cataracts and other age-related eye disease. Both these factors have been correlated with increased SCRD.”

“Circadian rhythm research has mushroomed in the past twenty years, and has provided a much greater understanding of the impact of both imposed and illness-related SCRD. We now appreciate that our increasingly 24/7 society and social disregard for biological time is having a major impact upon our health. Understanding has also been gained about the relationship between SCRD and a spectrum of different illnesses. SCRD in illness is not simply the inconvenience of being unable to sleep at an appropriate time but is an agent that exacerbates or causes serious health problems.”

Links:

Circadian rhythm.
Acrophase.
Phase (waves). Phase angle.
Jean-Jacques d’Ortous de Mairan.
Heliotropism.
Kymograph.
John Harrison.
Munich Chronotype Questionnaire.
Chronotype.
Seasonal affective disorder. Light therapy.
Parkinson’s disease. Multiple sclerosis.
Melatonin.

August 25, 2018 Posted by | Biology, Books, Cancer/oncology, Cardiology, Medicine | Leave a comment