Econstudentlog

Some observations on a cryptographic problem

It’s been a long time since I last posted one of these sorts of ‘rootless’ posts which are not based on a specific book or a specific lecture or something along those lines, but a question on r/science made me think about these topics and start writing a bit about them, and I decided I might as well add my thoughts and ideas here.

The reddit question which motivated me to write this post was this one: “Is it difficult to determine the password for an encryption if you are given both the encrypted and unencrypted message?

By “difficult” I mean requiring an inordinate amount of computation. If given both an encrypted and unencrypted file/message, is it reasonable to be able to recover the password that was used to encrypt the file/message?”

Judging from the way the question is worded, the inquirer obviously knows very little about these topics, but that was part of what motivated me when I started out writing; s/he quite obviously has a faulty model of how this kind of stuff actually works, and the very way the question is phrased illustrates some of the ways in which s/he gets things wrong.

When I decided to transfer my efforts towards discussing these topics to the blog I also implicitly decided against using language that would be expected to be easily comprehensible for the original inquirer, as s/he was no longer in the target group and there’s a cost to using that kind of language when discussing technical matters. I’ve sort of tried to make this post both useful and readable to people not all that familiar with the related fields, but I tend to find it difficult to evaluate the extent to which I’ve succeeded when I try to do things like that.

I decided against adding stuff already commented on when I started out writing this, so I’ll not e.g. repeat noiwontfixyourpc’s reply below. However I have added some other observations that seem to me to be relevant and worth mentioning to people who might consider asking a similar question to the one the original inquirer asked in that thread:

i. Finding a way to make plaintext turn into cipher text (…or cipher text into plaintext; and no, these two things are not actually always equivalent, see below…) is a very different (and in many contexts much easier) problem than finding out the actual encryption scheme that is at work producing the text strings you observe. There can be many, many different ways to go from a specific sample of plaintext to a specific sample of ciphertext, and most of the solutions won’t work if you’re faced with a new piece of ciphertext; especially not if the original samples are small, so that only a small amount of (potential) information would be expected to be included in the text strings.

If you only get a small amount of plaintext and corresponding cipher text you may decide that algorithm A is the one that was applied to the message, even if the algorithm actually applied was a more complex algorithm, B. To illustrate in a very simple way how this might happen, A might be a particular case of B – B might in effect be a superset comprising A and a large number of other potential encryption algorithms (…or A might likewise be a special case of the encryption scheme C, of which B in turn happens to be a subset, or… etc.). In such a context A might be an encryption scheme/approach that perhaps only applies in very specific contexts; for example (part of) the coding algorithm might have been to decide that ‘on next Tuesday, we’ll use this specific algorithm to translate plaintext into cipher text, and we’ll never use that specific translation-/mapping algorithm (which may be but one component of the encryption algorithm) again’. If such a situation applies then you’re faced with the problem that even if your rule ‘worked’ in that particular instance, in terms of translating your plaintext into cipher text and vice versa, it only ‘worked’ because you blindly fitted the two data-sets in a way that looked right, even though you actually had no idea how the coding scheme really worked (you only guessed A, not B, and in this particular instance A is never actually going to happen again).
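
To make that point concrete, here is a tiny Python sketch of my own (nothing like this appears in the original thread; the sample pair and both candidate rules are invented purely for illustration): two candidate ‘decryption rules’ which both reproduce a single observed plaintext/ciphertext pair perfectly, yet disagree as soon as a new message turns up.

```python
# Toy illustration: two candidate rules that both fit one observed
# plaintext/ciphertext pair, yet diverge on new material.
import string

ALPHABET = string.ascii_uppercase

def caesar_shift(text, k):
    """Candidate A: shift every letter k places (a Caesar cipher)."""
    return "".join(ALPHABET[(ALPHABET.index(c) + k) % 26] for c in text)

def fitted_substitution(text, observed_pairs):
    """Candidate B: substitute only the letters we have actually seen,
    and leave everything else unchanged (a 'blindly fitted' rule)."""
    table = dict(observed_pairs)
    return "".join(table.get(c, c) for c in text)

plaintext, ciphertext = "AT", "DW"        # the single observed sample
pairs = list(zip(plaintext, ciphertext))  # [('A', 'D'), ('T', 'W')]

# Both candidates reproduce the sample perfectly...
assert caesar_shift(plaintext, 3) == ciphertext
assert fitted_substitution(plaintext, pairs) == ciphertext

# ...but they disagree on a new message, so the sample alone cannot
# tell us which rule (if either) was actually in use.
new_message = "CAT"
print(caesar_shift(new_message, 3))             # FDW
print(fitted_substitution(new_message, pairs))  # CDW
```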

On a more general level some of the above comments incidentally, in my view, quite obviously link to results from classical statistics; there are many ways to link random variables through data fitting methods, but reliably identifying proper causal linkages through the application of such approaches is, well, difficult (and, according to some, often ill-advised)…

ii. In my view, it does not seem possible in general to prove that any specific proposed encryption/decryption algorithm is ‘the correct one’. This is because the proposed algorithm will never be a unique solution to the problem you’re evaluating. How are you going to convince me that The True Algorithm is not a more general/complex one (or perhaps a completely different one – see iii. below) than the one you propose – i.e., that your solution is not missing relevant variables? The only way to truly test whether the proposed algorithm is a valid algorithm is to test it on new data and compare its performance on this new data set with the performances of competing algorithms which also managed to link cipher text and plaintext. If the algorithm doesn’t work on the new data, you got it wrong. If it does work on new data, well, you might still just have been lucky. You might get more confident with more correctly-assessed (…guessed?) data, but you never become certain. In other similar contexts a not uncommon approach for trying to get around these sorts of problems is to limit the analysis to a subset of the available data in order to obtain the algorithm, and then to use the rest of the data for validation purposes (here’s a relevant link), but here, even with highly efficient estimation approaches, you will almost certainly run out of information (/degrees of freedom) long before you get anywhere if the encryption algorithm is at all non-trivial. In these settings information is likely to be a limiting resource.
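
Here is a minimal sketch of that train/validate idea in this setting (my own toy setup – the ‘unknown scheme’ is just a Caesar shift, and all names are made up): infer a candidate rule from one intercepted pair and check it against the pairs you held back.

```python
# Toy holdout validation: guess a shift from a 'training' pair, then
# test the guess on held-out pairs. Purely illustrative.
import string

ALPHABET = string.ascii_uppercase

def encrypt(text, k):
    return "".join(ALPHABET[(ALPHABET.index(c) + k) % 26] for c in text)

def fit_shift(plain, cipher):
    """Infer a shift from the first letter pair of the training sample."""
    return (ALPHABET.index(cipher[0]) - ALPHABET.index(plain[0])) % 26

# Pretend these intercepted pairs came from some unknown scheme.
samples = [("HELLO", encrypt("HELLO", 7)), ("WORLD", encrypt("WORLD", 7))]
train, holdout = samples[0], samples[1:]

k_hat = fit_shift(*train)
survives = all(encrypt(p, k_hat) == c for p, c in holdout)
print(k_hat, survives)  # 7 True -- but 'not falsified yet' is all this shows;
                        # a scheme that only *mostly* behaves like a shift
                        # would pass the same test on this little data.
```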

iii. There are many different types of encryption schemes, and people who ask questions like the one above tend, I believe, to have a quite limited view of which methods and approaches are truly available to one who desires secrecy when exchanging information with others. Imagine a situation where the plaintext is ‘See you next Wednesday’ and the encrypted text is an English translation of Tolstoy’s book War and Peace (or, to make it even more fun, all pages published on the English version of Wikipedia, say on November the 5th, 2017 at midnight GMT). That’s an available encryption approach that might be applied. It might be part (‘A’) of a larger (‘B’) encryption approach which links specific messages – from a preconceived list of messages considered, when the algorithm was chosen, to be worth sending in the future – to specific book titles decided on in advance. So if you want to say ‘good Sunday!’, Eve gets to read the Bible and see where that gets her. You could also decide that in half of all cases the book cipher text links to specific messages from a list, but that in the other half of the cases what you actually mean to communicate is on page 21 of the book; this might throw off a hacker who saw a combined cipher text and plaintext combination resulting from one half of the algorithm when confronted with the other half, and vice versa – and it illustrates well one of the key problems you’re faced with as an attacker when working on cryptographic schemes about which you have limited knowledge: the opponent can always add new layers on top of the ones that already exist/apply to make the problem harder to solve. And so you could also link the specific list message with some really complicated cipher-encrypted version of the Bible. There’s a lot more to encryption schemes than just exchanging a few letters here and there. On related topics, see this link. On a different if related topic, people who desire secrecy when exchanging information may also attempt to hide the fact that any secrets are exchanged in the first place. See also this.

iv. The specific usage of the word ‘password’ in the original query calls for comment for multiple reasons, some of which have been touched upon above, perhaps mainly because it implicitly betrays a lack of knowledge about how modern cryptographic systems actually work. The thing is, even if you might consider an encryption scheme to just be an advanced sort of ‘password’, finding the password (singular) is not always the task you’re faced with today. In symmetric-key algorithm settings you might sort-of-kind-of argue that it sort-of is – in such settings you might say that you have one single (collection of) key(s) which you use to encrypt messages and also use to decrypt the messages. So you can both encrypt and decrypt the message using the same key(s), and so you only have one ‘password’. That’s however not how asymmetric-key encryption works. As wiki puts it: “In an asymmetric key encryption scheme, anyone can encrypt messages using the public key, but only the holder of the paired private key can decrypt.”

This of course relates to what you actually want to do/achieve when you get your samples of cipher text and plaintext. In some cryptographic contexts, by design, the route you need to go to get from cipher text to plaintext is conceptually different from the route you need to go to get from plaintext to cipher text. And some of the ‘passwords’ that relate to how the schemes work are public knowledge by design.
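
For readers unfamiliar with that distinction, here is a minimal sketch of my own (a one-byte XOR plus the standard textbook RSA toy numbers p = 61, q = 53 – nothing here is remotely secure, it is only meant to show where the keys sit): in the symmetric case one key does both jobs, while in the asymmetric case anyone may encrypt with the public key but decryption requires a different, private key.

```python
# Symmetric vs. asymmetric keys in miniature. Toy parameters only.

# Symmetric: the same key both encrypts and decrypts (one-byte XOR).
def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

msg = b"SEE YOU NEXT WEDNESDAY"
assert xor_cipher(xor_cipher(msg, 0x5A), 0x5A) == msg  # one 'password'

# Asymmetric: textbook RSA with tiny primes (p = 61, q = 53).
n, e = 3233, 17   # public key: anyone can encrypt with this
d = 2753          # private key: only its holder can decrypt

def rsa_encrypt(m: int) -> int:
    return pow(m, e, n)

def rsa_decrypt(c: int) -> int:
    return pow(c, d, n)

m = 65                      # a message encoded as a number < n
assert rsa_decrypt(rsa_encrypt(m)) == m
# Knowing (n, e) -- and even a pile of (m, c) pairs -- does not hand you d;
# with realistically sized primes, recovering d amounts to factoring n.
```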

v. I have already touched a bit upon the problem of the existence of an information constraint, but I realized I probably need to spell this out in a bit more detail. The original inquirer seems to me to be implicitly under the misapprehension that computational complexity is the only limiting constraint here (“By “difficult” I mean requiring an inordinate amount of computation.”). Given the setting he or she proposes, I don’t think that’s true, and the reason why is sort of interesting.

If you think about what kind of problem you’re facing, what you have here in this setting is really a very limited amount of data which relates in an unknown manner to an unknown data-generating process (‘algorithm’). There are, as has been touched upon, in general many ways to obtain linkage between two data sets (the cipher text and the plaintext) using an algorithm – too many ways for comfort, actually. The search space is large: there are too many algorithms to consider; or equivalently, the amount of information supplied by the data will often be too small for us to properly evaluate the algorithms under consideration. An important observation is that more complex algorithms will both take longer to calculate (‘identify’ …at least as candidates) and be expected to require more data to evaluate, at least to the extent that algorithmic complexity constrains the data (/relates to changes in data structure/composition that need to be modeled in order to evaluate/identify the goal algorithm). If the algorithm says a different encryption rule is at work on Wednesdays, you’re going to have trouble figuring that out if you only got hold of a cipher text/plaintext combination derived from an exchange which took place on a Saturday. There are methods from statistics that might conceivably help you deal with problems like these, but they have their own issues and trade-offs. You might limit yourself to considering only settings where you have access to all known plaintext and cipher text combinations, so you got both Wednesday and Saturday, but even here you can’t be safe – next (metaphorical, I probably at this point need to add) Friday might be different from last (metaphorical) Friday, and this could even be baked into the algorithm in very non-obvious ways.
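
To put the weekday example in code (a deliberately silly sketch of my own; the ‘secret rule’ below is invented purely to make the point): an algorithm that switches behaviour on Wednesdays is indistinguishable from a plain fixed-shift algorithm if all of your intercepted material happens to come from Saturdays.

```python
# A contrived scheme whose behaviour depends on the weekday a message was
# sent. With Saturday-only intercepts, the Wednesday branch is invisible
# and a simpler (wrong) model fits the observed data perfectly.
import string

ALPHABET = string.ascii_uppercase

def shift(text, k):
    return "".join(ALPHABET[(ALPHABET.index(c) + k) % 26] for c in text)

def true_scheme(text, weekday):
    # Secret rule: shift by 11 on Wednesdays, by 3 on every other day.
    return shift(text, 11 if weekday == "WED" else 3)

def attacker_model(text):
    # Model inferred from Saturday data only: 'it's just shift-by-three'.
    return shift(text, 3)

saturday_msgs = ["ATTACK", "RETREAT"]
print(all(true_scheme(m, "SAT") == attacker_model(m) for m in saturday_msgs))  # True
print(true_scheme("ATTACK", "WED") == attacker_model("ATTACK"))                # False
```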

The above remarks might give you the idea that I’m just coming up with these kinds of suggestions to try to foil your approaches to figuring out the algorithm ‘by cheating’ (…it shouldn’t matter whether or not it was ‘sent on a Saturday’), but the main point is that a complex encryption algorithm is complex, and even if you see it applied multiple times you might not get enough information about how it works from the data suggested to be able to evaluate if you guessed right. In fact, given a combination of a sparse data set (one message, or just a few messages, in plaintext and cipher text) and a complex algorithm involving a very non-obvious mapping function, the odds are strongly against you.

vi. I had the thought that one reason why the inquirer might be confused about some of these things is that s/he might well be aware of the existence of modern cryptographic techniques which do rely to a significant extent on computational complexity aspects. I.e., here you do have settings where you’re asked to provide ‘the right answer’ (‘the password’), but it’s hard to calculate the right answer in a reasonable amount of time unless you have the relevant (private) information at hand – see e.g. these links for more. One way to think about how such a problem relates to the other problem at hand (you have been presented with samples of cipher text and plaintext and you want to guess all the details about how the encryption and decryption schemes which were applied work) is that this kind of algorithm/approach may be applied in combination with other algorithmic approaches to encrypt/decrypt the text you’re analyzing. A really tough prime factorization problem might for all we know be an embedded component of the cryptographic process that is applied to our text. We could call it A.

In such a situation we would definitely be in trouble because stuff like prime factorization is really hard and computationally complex, and to make matters worse just looking at the plaintext and the cipher text would not make it obvious to us that a prime factorization scheme had even been applied to the data. But a really important point is that even if such a tough problem was not present and even if only relatively less computationally demanding problems were involved, we almost certainly still just wouldn’t have enough information to break any semi-decent encryption algorithm based on a small sample of plaintext and cipher text. It might help a little bit, but in the setting contemplated by the inquirer a ‘faster computer’ (/…’more efficient decision algorithm’, etc.) can only help so much.
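
As a rough illustration of the asymmetry involved (my own toy sketch; the primes below are tiny compared with the hundreds-of-digits moduli used in practice): multiplying two primes together is instant, while recovering them from the product by brute force already takes on the order of a million steps here, and scales hopelessly as the numbers grow.

```python
# Multiplying two primes is cheap; undoing it by trial division costs on
# the order of sqrt(n) steps, which blows up rapidly with the key size.
def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

p, q = 1_000_003, 1_000_033   # two modest primes
n = p * q                     # computed instantly

assert smallest_factor(n) == p  # ~10**6 loop iterations already; with
                                # primes of a few hundred digits each,
                                # this approach simply never finishes
```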

vii. Shannon and Kerckhoffs may have a point in a general setting, but in specific settings like this particular one I think it is well worth taking into account the implications of not having a (publicly) known algorithm to attack. As wiki notes (see the previous link), ‘Many ciphers are actually based on publicly known algorithms or are open source and so it is only the difficulty of obtaining the key that determines security of the system’. The above remarks were of course all based on an assumption that Eve does not here have the sort of knowledge about the encryption scheme applied that she in many cases today actually might have. There are obvious and well-known weaknesses associated with letting the security of a specific cryptographic scheme depend on components other than the key (i.e., on keeping the algorithm itself secret), but I do not see how, in this particular setting, an unknown algorithm would not cause a search-space blow-up making the decision problem (did we actually guess right?) intractable in many cases. A key feature of the problem considered by the inquirer is that you here – unlike in many ‘guess the password’ settings where for example a correct password will allow you access to an application or a document or whatever – do not get any feedback, either when you guess right or when you guess wrong; it’s a decision problem, not a calculation problem. (However it is perhaps worth noting on the other hand that in a ‘standard guess-the-password problem’ you may also sometimes implicitly face a similar decision problem, due to e.g. the potential that a combination of cryptographic security and complementary steganographic strategies like e.g. these has been applied.)


August 14, 2018 Posted by | Computer science, Cryptography, Data, rambling nonsense, Statistics

Personal Relationships… (II)

Some more observations from the book below:

Coworker support, or the processes by which coworkers provide assistance with tasks, information, or empathy, has long been considered an important construct in the stress and strain literature […] Social support fits the conservation of resources theory definition of a resource, and it is commonly viewed in that light […]. Support from coworkers helps employees meet the demands of their job, thus making strain less likely […]. In a sense, social support is the currency upon which social exchanges are based. […] The personality of coworkers can play an important role in the development of positive coworker relationships. For example, there is ample evidence that suggests that those higher in conscientiousness and agreeableness are more likely to help coworkers […] Further, similarity in personality between coworkers (e.g., coworkers who are similar in their conscientiousness) draws coworkers together into closer relationships […] cross-sex relationships appear to be managed in a different manner than same-sex relationships. […] members of cross-sex friendships fear the misinterpretation of their relationship by those outside the relationship as a sexual relationship rather than platonic […] a key goal of partners in a cross-sex workplace friendship becomes convincing “third parties that the friendship is authentic.” As a result, cross-sex workplace friends will intentionally limit the intimacy of their communication or limit their non-work-related communication to situations perceived to demonstrate a nonsexual relationship, such as socializing with a cross-sex friend only in the presence of his or her spouse […] demographic dissimilarity in age and race can reduce the likelihood of positive coworker relationships. Chattopadhyay (1999) found that greater dissimilarity among group members on age and race were associated with less collegial relationships among coworkers, which was subsequently associated with less altruistic behavior […] Sias and Cahill (1998) found that a variety of situational characteristics, both inside and outside the workplace setting, helps to predict the development of workplace friendship. For example, they found that factors outside the workplace, such as shared outside interests (e.g., similar hobbies), life events (e.g., having a child), and the simple passing of time can lead to a greater likelihood of a friendship developing. Moreover, internal workplace characteristics, including working together on tasks, physical proximity within the office, a common problem or enemy, and significant amounts of “downtime” that allow for greater socialization, also support friendship development in the workplace (see also Fine, 1986).”

“To build knowledge, employees need to be willing to learn and try new things. Positive relationships are associated with a higher willingness to engage in learning and experimentation […] and, importantly, sharing of that new knowledge to benefit others […] Knowledge sharing is dependent on high-quality communication between relational partners […] Positive relationships are characterized by less defensive communication when relational partners provide feedback (e.g., a suggestion for a better way to accomplish a task; Roberts, 2007). In a coworker context, this would involve accepting help from coworkers without putting up barriers to that help (e.g., nonverbal cues that the help is not appreciated or welcome). […] A recent meta-analysis by Chiaburu and Harrison (2008) found that coworker support was associated with higher performance and higher organizational citizenship behavior (both directed at individuals and directed at the organization broadly). These relationships held whether performance was self- or supervisor related […] Chiaburu and Harrison (2008) also found that coworker support was associated with higher satisfaction and organizational commitment […] Positive coworker exchanges are also associated with lower levels of employee withdrawal, including absenteeism, intention to turnover, and actual turnover […]. To some extent, these relationships may result from norms within the workplace, as coworkers help to set standards for behavior and not “being there” for other coworkers, particularly in situations where the work is highly interdependent, may be considered a significant violation of social norms within a positive working environment […] Perhaps not surprisingly, given the proximity and the amount of time spent with coworkers, workplace friendships will occasionally develop into romances and, potentially, marriages. While still small, the literature on married coworkers suggests that they experience a number of benefits, including lower emotional exhaustion […] and more effective coping strategies […] Married coworkers are an interesting population to examine, largely because their work and family roles are so highly integrated […]. As a result, both resources and demands are more likely to spill over between the work and family role for married coworkers […] Janning and Neely (2006) found that married coworkers were more likely to talk about work-related issues while at home than married couples that had no work-related link.”

Negative exchanges [between coworkers] are characterized by behaviors that are generally undesirable, disrespectful, and harmful to the focal employee or employees. Scholars have found that these negative exchanges influence the same outcomes as positive, supporting exchanges, but in opposite directions. For instance, in their recent meta-analysis of 161 independent studies, Chiaburu and Harrison (2008) found that antagonistic coworker exchanges are negatively related to job satisfaction, organizational commitment, and task performance and positively related to absenteeism, intent to quit, turnover, and counterproductive work behaviors. Unfortunately, despite the recent popularity of the negative exchange research, this literature still lacks construct clarity and definitional precision. […] Because these behaviors have generally referred to acts that impact both coworkers and the organization as a whole, much of this work fails to distinguish social interactions targeting specific individuals within the organization from the nonsocial behaviors explicitly targeting the overall organization. This is unfortunate given that coworker-focused actions and organization-focused actions represent unique dimensions of organizational behavior […] negative exchanges are likely to be preceded by certain antecedents. […] Antecedents may stem from characteristics of the enactor, of the target, or of the context in which the behaviors occur. For example, to the extent that enactors are low on socially relevant personality traits such as agreeableness, emotional stability, or extraversion […], they may be more prone to initiate a negative exchange. Likewise, an enactor who is a high Machiavellian may initiate a negative exchange with the goal of gaining power or establishing control over the target. Antagonistic behaviors may also occur as reciprocation for a previous attack (real or imagined) or as a proactive deterrent against a potential future negative behavior from the target. Similarly, enactors may initiate antagonism based on their perceptions of a coworker’s behavioral characteristics such as suboptimal productivity or weak work ethic. […] The reward system can also play a role as an antecedent condition for antagonism. When coworkers are highly interdependent and receive rewards based on the performance of the group as opposed to each individual, the incidence of antagonism may increase when there is substantial variance in performance among coworkers.”

“[E]mpirical evidence suggests that some people have certain traits that make them more vulnerable to coworker attacks. For example, employees with low self-esteem, low emotional stability, high introversion, or high submissiveness are more inclined to be the recipients of negative coworker behaviors […]. Furthermore, research also shows that people who engage in negative behaviors are likely to also become the targets of these behaviors […] Two of the most commonly studied workplace attitudes are employee job satisfaction […] and affective organizational commitment […] Chiaburu and Harrison (2008) linked general coworker antagonism with both attitudes. Further, the specific behaviors of bullying and incivility have also been found to adversely affect both job satisfaction and organizational commitment […]. A variety of behavioral outcomes have also been identified as outcomes of coworker antagonism. Withdrawal behaviors such as absenteeism, intention to quit, turnover, effort reduction […] are typical responses […] those who have been targeted by aggression are more likely to engage in aggression. […] Feelings of anger, fear, and negative mood have also been shown to mediate the effects of interpersonal mistreatment on behaviors such as withdrawal and turnover […] [T]he combination of enactor and target characteristics is likely to play an antecedent role to these exchanges. For instance, research in the diversity area suggests that people tend to be more comfortable around those with whom they are similar and less comfortable around people with whom they are dissimilar […] there may be a greater incidence of coworker antagonism in more highly diverse settings than in settings characterized by less diversity. […] research has suggested that antagonistic behaviors, while harmful to the target or focal employee, may actually be beneficial to the enactor of the exchange. […] Krischer, Penney, and Hunter (2010) recently found that certain types of counterproductive work behaviors targeting the organization may actually provide employees with a coping mechanism that ultimately reduces their level of emotional exhaustion.”

CWB [counterproductive work behaviors] toward others is composed of volitional acts that harm people at work; in our discussion this would refer to coworkers. […] person-oriented organizational citizenship behaviors (OCB; Organ, 1988) consist of behaviors that help others in the workplace. This might include sharing job knowledge with a coworker or helping a coworker who had too much to do […] Social support is often divided into the two forms of emotional support that helps people deal with negative feelings in response to demanding situations versus instrumental support that provides tangible aid in directly dealing with work demands […] one might expect that instrumental social support would be more strongly related to positive exchanges and positive relationships. […] coworker social support […] has [however] been shown to relate to strains (burnout) in a meta-analysis (Halbesleben, 2006). […] Griffin et al. suggested that low levels of the Five Factor Model […] dimensions of agreeableness, emotional stability, and extraversion might all contribute to negative behaviors. Support can be found for the connection between two of these personality characteristics and CWB. […] Berry, Ones, and Sackett (2007) showed in their meta-analysis that person-focused CWB (they used the term deviance) had significant mean correlations of –.20 with emotional stability and –.36 with agreeableness […] there was a significant relationship with conscientiousness (r = –.19). Thus, agreeable, conscientious, and emotionally stable individuals are less likely to engage in CWB directed toward people and would be expected to have fewer negative exchanges and better relationships with coworkers. […] Halbesleben […] suggests that individuals high on the Five Factor Model […] dimensions of agreeableness and conscientiousness would have more positive exchanges because they are more likely to engage in helping behavior. […] a meta-analysis has shown that both of these personality variables relate to the altruism factor of OCB in the direction expected […]. Specifically, the mean correlations of OCB were .13 for agreeableness and .22 for conscientiousness. Thus, individuals high on these two personality dimensions should have more positive coworker exchanges.”

There is a long history of research in social psychology supporting the idea that people tend to be attracted to, bond, and form friendships with others they believe to be similar […], and this is true whether the similarity is rooted in demographics that are fairly easy to observe […] or in attitudes, beliefs, and values that are more difficult to observe […] Social network scholars refer to this phenomenon as homophily, or the notion that “similarity breeds connection” […] although evidence of homophily has been found to exist in many different types of relationships, including marriage, frequency of communication, and career support, it is perhaps most evident in the formation of friendships […] We extend this line of research and propose that, in a team context that provides opportunities for tie formation, greater levels of perceived similarity among team members will be positively associated with the number of friendship ties among team members. […] A chief function of friendship ties is to provide an outlet for individuals to disclose and manage emotions. […] friendship is understood as a form of support that is not related to work tasks directly; rather, it is a “backstage resource” that allows employees to cope with demands by creating distance between them and their work roles […]. Thus, we propose that friendship network ties will be especially important in providing the type of coping resources that should foster team member well-being. Unfortunately, however, friendship network ties negatively impact team members’ ability to focus on their work tasks, and, in turn, this detracts from taskwork. […] When friends discuss nonwork topics, these individuals will be distracted from work tasks and will be exposed to off-task information exchanged in informal relationships that is irrelevant for performing one’s job. Additionally, distractions can hinder individuals’ ability to become completely engaged in their work (Jett & George).”

Although teams are designed to meet important goals for both companies and their employees, not all team members work together well.
Teams are frequently “cruel to their members” […] through a variety of negative team member exchanges (NTMEs) including mobbing, bullying, incivility, social undermining, and sexual harassment. […] Team membership offers identity […], stability, and security — positive feelings that often elevate work teams to powerful positions in employees’ lives […], so that members are acutely aware of how their teammates treat them. […] NTMEs may evoke stronger emotional, attitudinal, and behavioral consequences than negative encounters with nonteam members. In brief, team members who are targeted for NTMEs are likely to experience profound threats to personal identity, security, and stability […] when a team member targets another for negative interpersonal treatment, the target is likely to perceive that the entire group is behind the attack rather than the specific instigator alone […] Studies have found that NTMEs […] are associated with poor psychological outcomes such as depression; undesirable work attitudes such as low affective commitment, job dissatisfaction, and low organization-based self-esteem; and counterproductive behaviors such as deviance, job withdrawal, and unethical behavior […] Some initial evidence has also indicated that perceptions of rejection mediate the effects of NTMEs on target outcomes […] Perceptions of the comparative treatment of other team members are an important factor in reactions to NTMEs […]. When targets perceive they are “singled out,” NTMEs will cause more pronounced effects […] A significant body of literature has suggested that individuals guide their own behaviors through environmental social cues that they glean from observing the norms and values of others. Thus, the negative effects of NTMEs may extend beyond the specific targets; NTMEs can spread contagiously to other team members […]. The more interdependent the social actors in the team setting, the stronger and more salient will be the social cues […] [There] is evidence that as team members see others enacting NTMEs, their inhibitions against such behaviors are lowered.”

August 13, 2018 Posted by | Books, Psychology

Promoting the unknown…

i.

ii.

iii.

iv.

v.

August 10, 2018 Posted by | Music

Personal Relationships… (I)

“Across subdisciplines of psychology, research finds that positive, fulfilling, and satisfying relationships contribute to life satisfaction, psychological health, and physical well-being whereas negative, destructive, and unsatisfying relationships have a whole host of detrimental psychological and physical effects. This is because humans possess a fundamental “need to belong” […], characterized by the motivation to form and maintain lasting, positive, and significant relationships with others. The need to belong is fueled by frequent and pleasant relational exchanges with others and thwarted when one feels excluded, rejected, and hurt by others. […] This book uses research and theory on the need to belong as a foundation to explore how five different types of relationships influence employee attitudes, behaviors, and well-being. They include relationships with supervisors, coworkers, team members, customers, and individuals in one’s nonwork life. […] This book is written for a scientist–practitioner audience and targeted to both researchers and human resource management professionals. The contributors highlight both theoretical and practical implications in their respective chapters, with a common emphasis on how to create and sustain an organizational climate that values positive relationships and deters negative interpersonal experiences. Due to the breadth of topics covered in this edited volume, the book is also appropriate for advanced specialty undergraduate or graduate courses on I/O psychology, human resource management, and organizational behavior.”

The kind of stuff covered in books like this one relates closely to social stuff I lack knowledge about and/or am just not very good at handling. I don’t think too highly of this book’s coverage so far, but that’s at least partly due to the kinds of topics covered – it is what it is.

Below I have added some quotes from the first few chapters of the book.

“Work relationships are important to study in that they can exert a strong influence on employees’ attitudes and behaviors […]. The research evidence is robust and consistent; positive relational interactions at work are associated with more favorable work attitudes, less work-related strain, and greater well-being (for reviews see Dutton & Ragins, 2007; Grant & Parker, 2009). On the other side of the social ledger, negative relational interactions at work induce greater strain reactions, create negative affective reactions, and reduce well-being […]. The relationship science literature is clear: social connection has a causal effect on individual health and well-being”.

“[One] way to view relationships is to consider the different dimensions by which relationships vary. An array of dimensions that underlie relationships has been proposed […] Affective tone reflects the degree of positive and negative feelings and emotions within the relationship […] Relationships and groups marked by greater positive affective tone convey more enthusiasm, excitement, and elation for each other, while relationships consisting of more negative affective tone express more fear, distress, and scorn. […] Emotional carrying capacity refers to the extent that the relationship can handle the expression of a full range of negative and positive emotions as well as the quantity of emotion expressed […]. High-quality relationships have the ability to withstand the expression of more emotion and a greater variety of emotion […] Interdependence involves ongoing chains of mutual influence between two people […]. Degree of relationship interdependency is reflected through frequency, strength, and span of influence. […] A high degree of interdependence is commonly thought to be one of the hallmarks of a close relationship […] Intimacy is composed of two fundamental components: self-disclosure and partner responsiveness […]. Responsiveness involves the extent that relationship partners understand, validate, and care for one another. Disclosure refers to verbal communications of personally relevant information, thoughts, and feelings. Divulging more emotionally charged information of a highly personal nature is associated with greater intimacy […]. Disclosure tends to proceed from the superficial to the more intimate and expands in breadth over time […] Power refers to the degree that dominance shapes the relationship […] relationships marked by a power differential are more likely to involve unidirectional interactions. Equivalent power tends to facilitate bidirectional exchanges […] Tensility is the extent that the relationship can bend and endure strain in the face of challenges and setbacks […]. Relationship tensility contributes to psychological safety within the relationship. […] Trust is the belief that relationship partners can be depended upon and care about their partner’s needs and interests […] Relationships that include a great deal of trust are stronger and more resilient. A breach of trust can be one of the most difficult relationship challenges to overcome (Pratt & Dirks, 2007).”

“Relationships are separate entities from the individuals involved in the relationships. The relationship unit (typically a dyad) operates at a different level of analysis from the individual unit. […] For those who conduct research on groups or organizations, it is clear that operations at a group level […] operate at a different level than individual psychology, and it is not merely the aggregate of the individuals involved in the relationship. […] operations at one level (e.g., relationships) can influence behavior at the other level (e.g., individual). […] relationships are best thought of as existing at their own level of analysis, but one that interacts with other levels of analysis, such as individual and group or cultural levels. Relationships cannot be reduced to the actions of the individuals in them or the social structures where they reside but instead interact with the individual and group processes in interesting ways to produce behaviors. […] it is challenging to assess causality via experimental procedures when studying relationships. […] Experimental procedures are crucial for making inferences of causation but are particularly difficult in the case of relationships because it is tough to manipulate many important relationships (e.g., love, marriage, sibling relationships). […] relationships are difficult to observe at the very beginning and at the end, so methods have been developed to facilitate this.”

“[T]he organizational research could […] benefit from the use of theoretical models from the broader relationships literature. […] Interdependence theory is hardly ever seen in organizations. There was some fascinating work in this area a few decades ago, especially in interdependence theory with the investment model […]. This work focused on the precursors of commitment in the workplace and found that, like romantic relationships, the variables of satisfaction, investments, and alternatives played key roles in this process. The result is that when satisfaction and investments are high and alternative opportunities are low, commitment is high. However, it also means that if investments are sufficiently high and alternatives are sufficiently low, then satisfaction can be lowered and commitment will remain high — hence, the investment model is useful for understanding exploitation (Rusbult, Campbell, & Price, 1990).”

“Because they cross formal levels in the organizational hierarchy, supervisory relationships necessarily involve an imbalance in formal power. […] A review by Keltner, Gruenfeld, and Anderson (2003) suggests that power affects how people experience emotions, whether they attend more to rewards or threats, how they process information, and the extent to which they inhibit their behavior around others. The literature clearly suggests that power influences affect, cognition, and behavior in ways that might tend to constrain the formation of positive relationships between individuals with varying degrees of power. […] The power literature is clear in showing that more powerful individuals attend less to their social context, including the people in it, than do less powerful individuals, and the literature suggests that supervisors (compared with subordinates) might tend to place less value on the relationship and be less attuned to their partner’s needs. Yet the formal power accorded to supervisors by the organization — via the supervisory role — is accompanied by the role prescribed responsibility for the performance, motivation, and well-being of subordinates. Thus, the accountability for the formation of a positive supervisory relationship lies more heavily with the supervisor. […] As we examine the qualities of positive supervisory relationships, we make a clear distinction between effective supervisory behaviors and positive supervisory relationships. This is an important distinction […] a large body of leadership research has focused on traits or behaviors of supervisors […] and the affective, motivational, and behavioral responses of employees to those behaviors, with little attention paid to the interactions between the two. There are two practical implications of moving the focus from individuals to relationships: (1) supervisors who use “effective” leadership behaviors may or may not have positive relationships with employees; and (2) supervisors who have a positive relationship with one employee may not have equally positive relationships with other employees, even if they use the same “effective” behaviors.”

“There is a large and well-developed stream of research that focuses explicitly on exchanges between supervisors and the employees who report directly to them. Leader–member exchange theory addresses the various types of functional relationships that can be formed between supervisors and subordinates. A core assumption of LMX theory is that supervisors do not have the time or resources to develop equally positive relationships with all subordinates. Thus, to minimize their investment and yield the greatest results for the organization, supervisors would develop close relationships with only a few subordinates […] These few high-quality relationships are marked by high levels of trust, loyalty, and support, whereas the balance of supervisory relationships are contractual in nature and depend on timely rewards allotted by supervisors in direct exchange for desirable behaviors […] There has been considerable confusion and debate in the literature about LMX theory and the construct validity of LMX measures […] Despite shortcomings in LMX research, it is [however] clear that supervisors form relationships of varying quality with subordinates […] Among factors associated with high LMX are the supervisor’s level of agreeableness […] and the employee’s level of extraversion […], feedback seeking […], and (negatively) negative affectivity […]. Those who perceived similarity in terms of family, money, career strategies, goals in life, education […], and gender […] also reported high LMX. […] Employee LMX is strongly related to attitudes, such as job satisfaction […] Supporting the notion that a positive supervisory relationship is good for employees, the LMX literature is replete with studies linking high LMX with thriving and autonomous motivation. […] The premise of the LMX research is that supervisory resources are limited and high-quality relationships are demanding. Thus, supervisors will be most effective when they allocate their resources efficiently and effectively, forming some high-quality and some instrumental relationships. But the empirical research from the LMX literature provides little (if any) evidence that supervisors who differentiate are more effective”.

The norm of negative reciprocity obligates targets of harm to reciprocate with actions that produce roughly equivalent levels of harm — if someone is unkind to me, I should be approximately as unkind to him or her. […] But the trajectory of negative reciprocity differs in important ways when there are power asymmetries between the parties involved in a negative exchange relationship. The workplace revenge literature suggests that low-power targets of hostility generally withhold retaliatory acts. […] In exchange relationships where one actor is more dependent on the other for valued resources, the dependent/less powerful actor’s ability to satisfy his or her self-interests will be constrained […]. Subordinate targets of supervisor hostility should therefore be less able (than supervisor targets of subordinate hostility) to return the injuries they sustain […] To the extent subordinate contributions to negative exchanges are likely to trigger disciplinary responses by the supervisor target (e.g., reprimands, demotion, transfer, or termination), we can expect that subordinates will withhold negative reciprocity.”

“In the last dozen years, much has been learned about the contributions that supervisors make to negative exchanges with subordinates. […] Several dozen studies have examined the consequences of supervisor contributions to negative exchanges. This work suggests that exposure to supervisor hostility is negatively related to subordinates’ satisfaction with the job […], affective commitment to the organization […], and both in-role and extra-role performance contributions […] and is positively related to subordinates’ psychological distress […], problem drinking […], and unit-level counterproductive work behavior […]. Exposure to supervisor hostility has also been linked with family undermining behavior — employees who are the targets of abusive supervision are more likely to be hostile toward their own family members […] Most studies of supervisor hostility have accounted for moderating factors — individual and situational factors that buffer or exacerbate the effects of exposure. For example, Tepper (2000) found that the injurious effects of supervisor hostility on employees’ attitudes and strain reactions were stronger when subordinates have less job mobility and therefore feel trapped in jobs that deplete their coping resources. […] Duffy, Ganster, Shaw, Johnson, and Pagon (2006) found that the effects of supervisor hostility are more pronounced when subordinates are singled out rather than targeted along with multiple coworkers. […] work suggests that the effects of abusive supervision on subordinates’ strain reactions are weaker when subordinates employ impression management strategies […] and more confrontational (as opposed to avoidant) communication tactics […]. It is clear that not all subordinates react the same way to supervisor hostility and characteristics of subordinates and the context influence the trajectory of subordinates’ responses. […] In a meta-analytic examination of studies of the correlates of supervisor-directed hostility, Herschovis et al. (2007) found support for the idea that subordinates who believe that they have been the target of mistreatment are more likely to lash out at their supervisors. […] perhaps just as interesting as the associations that have been uncovered are several hypothesized associations that have not emerged. Greenberg and Barling (1999) found that supervisor-directed aggression was unrelated to subordinates’ alcohol consumption, history of aggression, and job security. Other work has revealed mixed results for the prediction that subordinate self-esteem will negatively predict supervisor-directed hostility (Inness, Barling, & Turner, 2005). […] Negative exchanges between supervisors and subordinates do not play out in isolation — others observe them and are affected by them. Yet little is known about the affective, cognitive, and behavioral responses of third parties to negative exchanges with supervisors.”

August 8, 2018 Posted by | Books, Psychology

Combinatorics (I)

This book is not a particularly easy read compared to the general format of the series in which it is published, but that is a good thing in my view, as it also means the author managed to go into enough detail in specific contexts to touch upon at least some properties/topics of interest. You don’t need any specific background knowledge to read and understand the book – at least not any sort of background knowledge one would not expect someone who might decide to read a book like this one to already have – but you do, when reading it, need to have the sort of mental surplus that enables you to think carefully about what’s going on and devote a few mental resources to understanding the details.

Some quotes and links from the first half of the book below.

“The subject of combinatorial analysis or combinatorics […] [w]e may loosely describe [as] the branch of mathematics concerned with selecting, arranging, constructing, classifying, and counting or listing things. […] the subject involves finite sets or discrete elements that proceed in separate steps […] rather than continuous systems […] Mathematicians sometimes use the term ‘combinatorics’ to refer to a larger subset of discrete mathematics that includes graph theory. In that case, what is commonly called combinatorics is then referred to as ‘enumeration’. […] Combinatorics now includes a wide range of topics, some of which we cover in this book, such as the geometry of tilings and polyhedra […], the theory of graphs […], magic squares and latin squares […], block designs and finite projective planes […], and partitions of numbers […]. [The] chapters [of the book] are largely independent of each other and can be read in any order. Much of combinatorics originated in recreational pastimes […] in recent years the subject has developed in depth and variety and has increasingly become a part of mainstream mathematics. […] Undoubtedly part of the reason for the subject’s recent importance has arisen from the growth of computer science and the increasing use of algorithmic methods for solving real-world practical problems. These have led to combinatorial applications in a wide range of subject areas, both within and outside mathematics, including network analysis, coding theory, probability, virology, experimental design, scheduling, and operations research.”

“[C]ombinatorics is primarily concerned with four types of problem:
Existence problem: Does □□□ exist?
Construction problem: If □□□ exists, how can we construct it?
Enumeration problem: How many □□□ are there?
Optimization problem: Which □□□ is best? […]
[T]hese types of problems are not unrelated; for example, the easiest way to prove that something exists may be to construct it explicitly.”

“In this book we consider two types of enumeration problem – counting problems in which we simply wish to know the number of objects involved, and listing problems in which we want to list them all explicitly. […] It’s useful to have some basic counting rules […] In what follows, all the sets are finite. […] In general we have the following rule; here, subsets are disjoint if they have no objects in common: Addition rule: To find the number of objects in a set, split the set into disjoint subsets, count the objects in each subset, and add the results. […] Subtraction rule: If a set of objects can be split into two subsets A and B, then the number of objects in B is obtained by subtracting the number of objects in A from the number in the whole set. […] The subtraction rule extends easily to sets that are split into more than two subsets with no elements in common. […] the inclusion-exclusion principle […] extends this simple idea to the situation where the subsets may have objects in common. […] In general we have the following result: Multiplication rule: If a counting problem can be split into stages with several options at each stage, then the total number of possibilities is the product of options at each stage. […] Another useful principle in combinatorics is the following: Correspondence rule: We can solve a counting problem if we can put the objects to be counted in one-to-one correspondence with the objects of a set that we have already counted. […] We conclude this section with one more rule: Division rule: If a set of n elements can be split into m disjoint subsets, each of size k, then m = n / k.”
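
The rules quoted above are easy to check on a concrete example; here is a short Python snippet of my own doing so for a toy set (the numbers are arbitrary, and the overlapping case uses the inclusion-exclusion correction mentioned in the quote):

```python
# Checking a few of the basic counting rules on the set {1, ..., 20}.
S = set(range(1, 21))
A = {x for x in S if x % 2 == 0}   # divisible by 2 -> 10 elements
B = {x for x in S if x % 3 == 0}   # divisible by 3 -> 6 elements

# A and B are *not* disjoint (they share 6, 12, 18), so the plain addition
# rule does not apply; inclusion-exclusion corrects the double counting.
assert len(A | B) == len(A) + len(B) - len(A & B)   # 10 + 6 - 3 = 13

# Subtraction rule: the objects of S not in A.
assert len(S - A) == len(S) - len(A)                # 20 - 10 = 10

# Multiplication rule: 3 options at stage one, 4 at stage two.
outfits = [(s, t) for s in range(3) for t in range(4)]
assert len(outfits) == 3 * 4

# Division rule: 20 elements split into disjoint subsets of size 4
# gives 20 / 4 = 5 subsets.
assert len(S) // 4 == 5
```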

“Every algorithm has a running time […] this may be the time that a computer needs to carry out all the necessary calculations, or the actual number of such calculations. Each problem [also] has an input size […] the running time T usually depends on the input size n. Particularly important, because they’re the most efficient, are the polynomial-time algorithms, where the maximum running time is proportional to a power of the input size […] The collection of all polynomial-time algorithms is called P. […] In contrast, there are inefficient algorithms that don’t take polynomial time, such as the exponential-time algorithms […] At this point we introduce NP, the set of ‘non-deterministic polynomial-time problems’. These are algorithms for which a solution, when given, can be checked in polynomial time. Clearly P is contained in NP, since if a problem can be solved in polynomial time then a solution can certainly be checked in polynomial time – checking solutions is far easier than finding them in the first place. But are they the same? […] Few people believe that the answer is ‘yes’, but no one has been able to prove that P ≠ NP. […] a problem is NP-complete if its solution in polynomial time means that every NP problem can be solved in polynomial time. […] If there were a polynomial algorithm for just one of them, then polynomial algorithms would exist for the whole lot and P would equal NP. On the other hand, if just one of them has no polynomial algorithm, then none of the others could have a polynomial algorithm either, and P would be different from NP.”
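
The ‘checking is far easier than finding’ point is easy to illustrate with subset sum, a standard NP-complete problem (the snippet itself is mine, and deliberately naive): verifying a proposed certificate is one pass over a handful of numbers, whereas the brute-force search potentially has to look at all 2^n subsets.

```python
# Subset sum: naive solving looks at up to 2**n subsets, but verifying a
# proposed solution (a 'certificate') only needs a single summation.
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check: is the certificate a subset summing to target?"""
    return set(certificate) <= set(numbers) and sum(certificate) == target

def brute_force_find(numbers, target):
    """Exponential-time search over every subset."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

numbers = [3, 34, 4, 12, 5, 2]
target = 9

solution = brute_force_find(numbers, target)   # finds (4, 5) for this input
print(solution, verify(numbers, target, solution))
```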

“In how many different ways can n objects be arranged? […] generally, we have the following result: Arrangements: The number of arrangements of n objects is n x (n – 1) x (n – 2) x … x 3 x 2 x 1. This number is called n factorial and is denoted by n!. […] The word permutation is used in different ways. We’ll use it to mean an ordered selection without repetition, while others may use it to mean an arrangement […] generally, we have the following rule: Ordered selections without repetition (permutations): If we select k items from a set of n objects, and if the selections are ordered and repetition is not allowed, then the number of possible selections is n x (n – 1) x (n – 2) x … x (n – k + 1). We denote this expression by P(n,k). […] Since P(n, n) = n x (n – 1) x (n – 2) x … x 3 x 2 x 1 = n!, an arrangement is a permutation for which k = n. […] generally, we have the following result: P(n,k) = n!/(n-k)!. […] unordered selections without repetition are called combinations, giving rise to the words combinatorial and combinatorics. […] generally, we have the following result: Unordered selections without repetition (combinations): If we select k items from a set of n objects, and if the selections are unordered and repetition is not allowed, then the number of possible selections is P(n,k)/k! = n x (n – 1) x (n – 2) x … x (n – k + 1)/k!. We denote this expression by C(n,k) […] Unordered selections with repetition: If we select k items from a set of n objects, and if the selections are unordered and repetition is allowed, then the number of possible selections is C(n + k – 1, k). […] Combination rule 1: For any numbers k and n with k ≤ n, C(n,k) = C(n,n-k) […] Combination rule 2: For any numbers n and k with k ≤ n, C(n, n-k) = n!/(n-k)!(n-(n-k))! = n!/(n-k)!k! = C(n,k). […] Combination rule 3: For any number n, C(n,0) + C(n,1) + C(n,2) + … + C(n,n-1) + C(n,n) = 2^n.”
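
All of these formulas are easy to sanity-check against brute-force enumeration; here is a small snippet of my own doing exactly that with Python’s itertools (n = 6 and k = 3 are arbitrary choices):

```python
# P(n,k) and C(n,k) from the formulas above, checked by enumeration.
from itertools import combinations, combinations_with_replacement, permutations
from math import factorial

def P(n, k):
    return factorial(n) // factorial(n - k)

def C(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

n, k = 6, 3
assert P(n, k) == len(list(permutations(range(n), k)))   # 120 ordered selections
assert C(n, k) == len(list(combinations(range(n), k)))   # 20 unordered selections

# Combination rule 1: C(n,k) = C(n,n-k).
assert C(n, k) == C(n, n - k)

# Combination rule 3: the C(n,i) sum to 2^n.
assert sum(C(n, i) for i in range(n + 1)) == 2 ** n

# Unordered selections *with* repetition: C(n + k - 1, k).
assert C(n + k - 1, k) == len(list(combinations_with_replacement(range(n), k)))
```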

Links:

Tilings/Tessellation.
Knight’s tour.
Seven Bridges of Königsberg problem.
Three utilities problem.
Four color theorem.
Tarry’s algorithm (p.7) (formulated slightly differently in the book, but it’s the same algorithm).
Polyomino.
Arthur Cayley.
Combinatorial principles.
Minimum connector problem.
Travelling salesman problem.
Algorithmic efficiency. Running time/time complexity.
Boolean satisfiability problem. Cook–Levin theorem.
Combination.
Mersenne primes.
Permutation. Factorial. Stirling’s formula.
Birthday problem.
Varāhamihira.
Manhattan distance.
Fibonacci number.
Pascal’s triangle. Binomial coefficient. Binomial theorem.
Pigeonhole principle.
Venn diagram.
Derangement (combinatorial mathematics).
Tower of Hanoi.
Stable marriage problem. Transversal (combinatorics). Hall’s marriage theorem.
Generating function (the topic covered in the book more specifically is related to a symbolic generator of the subsets of a set, but a brief search yielded no good links to this particular topic – US).
Group theory.
Ferdinand Frobenius. Burnside’s lemma.

August 4, 2018 Posted by | Books, Computer science, Mathematics | 1 Comment

Words

The words below are mostly words which I encountered while reading the books Pocket oncology, Djinn Rummy, Open Sesame, and The Far Side of the World.

Hematochezia. Neuromyotonia. Anoproctitis. Travelator. Brassica. Physiatry. Clivus. Curettage. Colposcopy. Trachelectomy. Photopheresis. Myelophthisis. Apheresis. Vexilloid. Gonfalon. Eutectic. Clerisy. Frippery. Scrip. Bludge.

Illude. Empyrean. Bonzer. Vol-au-vent. Curule. Entrechat. Winceyette. Attar. Woodbine. Corolla. Rennet. Gusset. Jacquard. Antipodean. Chaplet. Thrush. Coloratura. Biryani. Caff. Scrummy.

Beatific. Forecourt. Hurtle. Freemartin. Coleoptera. Hemipode. Bespeak. Dickey. Bilbo. Hale. Grampus. Calenture. Reeve. Cribbing. Fleam. Totipalmate. Bonito. Blackstrake/Black strake. Shank. Caiman.

Chancery. Acullico. Thole. Aorist. Westing. Scorbutic. Voyol. Fribble. Terraqueous. Oviparous. Specktioneer. Aprication. Phalarope. Lough. Hoy. Reel. Trachyte. Woulding. Anthropophagy. Risorgimento.


August 2, 2018 Posted by | Books, Language | Leave a comment

Quotes

i. “Progress in science is often built on wrong theories that are later corrected. It is better to be wrong than to be vague.” (Freeman Dyson)

ii. “The teacher’s equipment gives him an everlasting job. His work is never done. His getting ready for this work is never quite complete.” (George Trumbull Ladd)

iii. “The crust of our earth is a great cemetery, where the rocks are tombstones on which the buried dead have written their own epitaphs.” (Louis Agassiz)

iv. “Fortunately science, like that nature to which it belongs, is neither limited by time nor by space. It belongs to the world, and is of no country and of no age. The more we know, the more we feel our ignorance […] there are always new worlds to conquer.” (Humphry Davy)

v. “Nothing is so fatal to the progress of the human mind as to suppose that our views of science are ultimate; that there are no mysteries in nature; that our triumphs are complete, and that there are no new worlds to conquer.” (-ll-)

vi. “The best way to learn Japanese is to be born as a Japanese baby, in Japan, raised by a Japanese family.” (Dave Barry)

vii. “What makes a date so dreadful is the weight of expectation attached to it. There is every chance that you may meet your soulmate, get married, have children and be buried side by side. There is an equal chance that the person you meet will look as if they’ve already been buried for some time.” (Guy Browning)

viii. “Always judge your fellow passengers to be the opposite of what they strive to appear to be. […] men never affect to be what they are, but what they are not.” (Thomas Chandler Haliburton)

ix. “Some folks can look so busy doin’ nothin’ that they seem indispensable.” (Kin Hubbard)

x. “Men are not punished for their sins, but by them.” (-ll-)

xi. “Do what we will, we always, more or less, construct our own universe. The history of science may be described as the history of the attempts, and the failures, of men “to see things as they are.”” (Matthew Moncrieff Pattison Muir)

xii. “You simply cannot invent any conspiracy theory so ridiculous and obviously satirical that some people somewhere don’t already believe it.” (Robert Anton Wilson)

xiii. “You know you are getting old when work is a lot less fun and fun is a lot more work.” (Joan Rivers)

xiv. “When I was a little boy, I used to pray every night for a new bicycle. Then I realised, the Lord, in his wisdom, doesn’t work that way. So I just stole one and asked Him to forgive me.” (Emo Philips)

xv. “I was walking down Fifth Avenue today and I found a wallet, and I was gonna keep it, rather than return it, but I thought: “Well, if I lost a hundred and fifty dollars, how would I feel?” And I realized I would want to be taught a lesson.” (-ll-)

xvi. “When I said I was going to become a comedian, they all laughed. Well, they’re not laughing now, are they?” (Robert Monkhouse)

xvii. “Things said in embarrassment and anger are seldom the truth, but are said to hurt and wound the other person. Once said, they can never be taken back.” (Lucille Ball)

xviii. “The beginning of wisdom for a programmer is to recognize the difference between getting his program to work and getting it right. A program which does not work is undoubtedly wrong; but a program which does work is not necessarily right. It may still be wrong because it is hard to understand; or because it is hard to maintain as the problem requirements change; or because its structure is different from the structure of the problem; or because we cannot be sure that it does indeed work.” (Michael Anthony Jackson)

xix. “One of the difficulties in thinking about software is its huge variety. A function definition in a spreadsheet cell is software. A smartphone app is software. The flight management system for an Airbus A380 is software. A word processor is software. We shouldn’t expect a single discipline of software engineering to cover all of these, any more than we expect a single discipline of manufacturing to cover everything from the Airbus A380 to the production of chocolate bars, or a single discipline of social organization to cover everything from the United Nations to a kindergarten. Improvement in software engineering must come bottom-up, from intense specialized attention to particular products.” (-ll-)

xx. “Let the world know you as you are, not as you think you should be, because sooner or later, if you are posing, you will forget the pose, and then where are you?” (Fanny Brice)

July 30, 2018 Posted by | Quotes/aphorisms | Leave a comment

Lyapunov Arguments in Optimization

I’d say that if you’re interested in the intersection of mathematical optimization methods/-algorithms and dynamical systems analysis, it’s probably a talk well worth watching. The lecture is reasonably high-level and covers a fairly satisfactory amount of ground in a relatively short amount of time, and it is not particularly hard to follow if you have at least some passing familiarity with the fields involved (dynamical systems analysis, statistics, mathematical optimization, computer science/machine learning).
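
As a rough illustration of the core idea in this topic area – not code from the lecture itself – the sketch below runs gradient descent on a simple quadratic and checks at every step that a Lyapunov function (here just the suboptimality gap f(x) − f(x*)) decreases along the iterates; the objective, step size, and choice of Lyapunov function are all my own, picked purely for illustration.

```python
# Minimal sketch (mine, not taken from the lecture): view an optimization method
# as a discrete-time dynamical system and certify its convergence by exhibiting
# a Lyapunov function that decreases along the iterates.
import numpy as np

# A simple strongly convex quadratic: f(x) = 0.5 * x'Ax - b'x
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x_star = np.linalg.solve(A, b)            # the unique minimizer

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad_f(x):
    return A @ x - b

def lyapunov(x):
    # V(x) = f(x) - f(x*) >= 0, with equality only at the minimizer.
    return f(x) - f(x_star)

# Gradient descent x_{k+1} = x_k - eta * grad f(x_k): the forward-Euler
# discretization of the gradient flow x'(t) = -grad f(x(t)).
eta = 1.0 / np.linalg.eigvalsh(A).max()   # step size 1/L for this quadratic
x = np.array([5.0, 5.0])
for k in range(25):
    v_before = lyapunov(x)
    x = x - eta * grad_f(x)
    assert lyapunov(x) <= v_before + 1e-12   # V decreases along the trajectory

print("final iterate:", x, "minimizer:", x_star)
```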

Some links:

Dynamical system.
Euler–Lagrange equation.
Continuous optimization problem.
Gradient descent algorithm.
Lyapunov stability.
Condition number.
Fast (/accelerated-) gradient descent methods.
The Mirror Descent Algorithm.
Cubic regularization of Newton method and its global performance (Nesterov & Polyak).
A Differential Equation for Modeling Nesterov’s Accelerated Gradient Method: Theory and Insights (Su, Boyd & Candès).
A Variational Perspective on Accelerated Methods in Optimization (Wibisono, Wilson & Jordan).
Breaking Locality Accelerates Block Gauss-Seidel (Tu, Venkataraman, Wilson, Gittens, Jordan & Recht).
A Lyapunov Analysis of Momentum Methods in Optimization (Wilson, Recht & Jordan).
Bregman divergence.
Estimate sequence methods.
Variance reduction techniques.
Stochastic gradient descent.
Langevin dynamics.


July 22, 2018 Posted by | Computer science, Lectures, Mathematics, Physics, Statistics | Leave a comment

Big Data (II)

Below I have added a few observations from the last half of the book, as well as some coverage-related links to topics of interest.

“With big data, using correlation creates […] problems. If we consider a massive dataset, algorithms can be written that, when applied, return a large number of spurious correlations that are totally independent of the views, opinions, or hypotheses of any human being. Problems arise with false correlations — for example, divorce rate and margarine consumption […]. [W]hen the number of variables becomes large, the number of spurious correlations also increases. This is one of the main problems associated with trying to extract useful information from big data, because in doing so, as with mining big data, we are usually looking for patterns and correlations. […] one of the reasons Google Flu Trends failed in its predictions was because of these problems. […] The Google Flu Trends project hinged on the known result that there is a high correlation between the number of flu-related online searches and visits to the doctor’s surgery. If a lot of people in a particular area are searching for flu-related information online, it might then be possible to predict the spread of flu cases to adjoining areas. Since the interest is in finding trends, the data can be anonymized and hence no consent from individuals is required. Using their five-year accumulation of data, which they limited to the same time-frame as the CDC data, and so collected only during the flu season, Google counted the weekly occurrence of each of the fifty million most common search queries covering all subjects. These search query counts were then compared with the CDC flu data, and those with the highest correlation were used in the flu trends model. […] The historical data provided a baseline from which to assess current flu activity on the chosen search terms and by comparing the new real-time data against this, a classification on a scale from 1 to 5, where 5 signified the most severe, was established. Used in the 2011–12 and 2012–13 US flu seasons, Google’s big data algorithm famously failed to deliver. After the flu season ended, its predictions were checked against the CDC’s actual data. […] the Google Flu Trends algorithm over-predicted the number of flu cases by at least 50 per cent during the years it was used.” [For more details on why blind/mindless hypothesis testing/p-value hunting on big data sets is usually a terrible idea, see e.g. Burnham & Anderson, US]
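
The point about spurious correlations is easy to demonstrate with a small simulation (my own illustration, not the book’s): generate a modest number of observations of many mutually independent variables and count how many variable pairs nevertheless look strongly correlated.

```python
# Illustrative simulation (not from the book): with many mutually independent
# variables and a modest sample size, some pairs will show strong correlations
# purely by chance -- the "spurious correlation" problem described above.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars = 50, 200                   # 50 observations of 200 unrelated variables
data = rng.normal(size=(n_obs, n_vars))

corr = np.corrcoef(data, rowvar=False)    # 200 x 200 correlation matrix
upper = np.triu_indices(n_vars, k=1)      # each pair counted once
pairwise = np.abs(corr[upper])

print("pairs examined:", pairwise.size)                    # 19900
print("pairs with |r| > 0.4:", int((pairwise > 0.4).sum()))
# Even though every variable is independent noise, a fair number of pairs will
# typically exceed |r| > 0.4 -- and none of them would replicate on new data.
```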

“The data Google used [in the Google Flu Trends algorithm], collected selectively from search engine queries, produced results [with] obvious bias […] for example by eliminating everyone who does not use a computer and everyone using other search engines. Another issue that may have led to poor results was that customers searching Google on ‘flu symptoms’ would probably have explored a number of flu-related websites, resulting in their being counted several times and thus inflating the numbers. In addition, search behaviour changes over time, especially during an epidemic, and this should be taken into account by updating the model regularly. Once errors in prediction start to occur, they tend to cascade, which is what happened with the Google Flu Trends predictions: one week’s errors were passed along to the next week. […] [Similarly,] the Ebola prediction figures published by WHO [during the West African Ebola virus epidemic] were over 50 per cent higher than the cases actually recorded. The problems with both the Google Flu Trends and Ebola analyses were similar in that the prediction algorithms used were based only on initial data and did not take into account changing conditions. Essentially, each of these models assumed that the number of cases would continue to grow at the same rate in the future as they had before the medical intervention began. Clearly, medical and public health measures could be expected to have positive effects and these had not been integrated into the model.”

“Every time a patient visits a doctor’s office or hospital, electronic data is routinely collected. Electronic health records constitute legal documentation of a patient’s healthcare contacts: details such as patient history, medications prescribed, and test results are recorded. Electronic health records may also include sensor data such as Magnetic Resonance Imaging (MRI) scans. The data may be anonymized and pooled for research purposes. It is estimated that in 2015, an average hospital in the USA will store over 600 Tb of data, most of which is unstructured. […] Typically, the human genome contains about 20,000 genes and mapping such a genome requires about 100 Gb of data. […] The interdisciplinary field of bioinformatics has flourished as a consequence of the need to manage and analyze the big data generated by genomics. […] Cloud-based systems give authorized users access to data anywhere in the world. To take just one example, the NHS plans to make patient records available via smartphone by 2018. These developments will inevitably generate more attacks on the data they employ, and considerable effort will need to be expended in the development of effective security methods to ensure the safety of that data. […] There is no absolute certainty on the Web. Since e-documents can be modified and updated without the author’s knowledge, they can easily be manipulated. This situation could be extremely damaging in many different situations, such as the possibility of someone tampering with electronic medical records. […] [S]ome of the problems facing big data systems [include] ensuring they actually work as intended, [that they] can be fixed when they break down, and [that they] are tamper-proof and accessible only to those with the correct authorization.”

“With transactions being made through sales and auction bids, eBay generates approximately 50 Tb of data a day, collected from every search, sale, and bid made on their website by a claimed 160 million active users in 190 countries. […] Amazon collects vast amounts of data including addresses, payment information, and details of everything an individual has ever looked at or bought from them. Amazon uses its data in order to encourage the customer to spend more money with them by trying to do as much of the customer’s market research as possible. In the case of books, for example, Amazon needs to provide not only a huge selection but to focus recommendations on the individual customer. […] Many customers use smartphones with GPS capability, allowing Amazon to collect data showing time and location. This substantial amount of data is used to construct customer profiles allowing similar individuals and their recommendations to be matched. Since 2013, Amazon has been selling customer metadata to advertisers in order to promote their Web services operation […] Netflix collects and uses huge amounts of data to improve customer service, such as offering recommendations to individual customers while endeavouring to provide reliable streaming of its movies. Recommendation is at the heart of the Netflix business model and most of its business is driven by the data-based recommendations it is able to offer customers. Netflix now tracks what you watch, what you browse, what you search for, and the day and time you do all these things. It also records whether you are using an iPad, TV, or something else. […] As well as collecting search data and star ratings, Netflix can now keep records on how often users pause or fast forward, and whether or not they finish watching each programme they start. They also monitor how, when, and where they watched the programme, and a host of other variables too numerous to mention.”

“Data science is becoming a popular study option in universities but graduates so far have been unable to meet the demands of commerce and industry, where positions in data science offer high salaries to experienced applicants. Big data for commercial enterprises is concerned with profit, and disillusionment will set in quickly if an over-burdened data analyst with insufficient experience fails to deliver the expected positive results. All too often, firms are asking for a one-size-fits-all model of data scientist who is expected to be competent in everything from statistical analysis to data storage and data security.”

“In December 2016, Yahoo! announced that a data breach involving over one billion user accounts had occurred in August 2013. Dubbed the biggest ever cyber theft of personal data, or at least the biggest ever divulged by any company, thieves apparently used forged cookies, which allowed them access to accounts without the need for passwords. This followed the disclosure of an attack on Yahoo! in 2014, when 500 million accounts were compromised. […] The list of big data security breaches increases almost daily. Data theft, data ransom, and data sabotage are major concerns in a data-centric world. There have been many scares regarding the security and ownership of personal digital data. Before the digital age we used to keep photos in albums and negatives were our backup. After that, we stored our photos electronically on a hard-drive in our computer. This could possibly fail and we were wise to have back-ups but at least the files were not publicly accessible. Many of us now store data in the Cloud. […] If you store all your photos in the Cloud, it’s highly unlikely with today’s sophisticated systems that you would lose them. On the other hand, if you want to delete something, maybe a photo or video, it becomes difficult to ensure all copies have been deleted. Essentially you have to rely on your provider to do this. Another important issue is controlling who has access to the photos and other data you have uploaded to the Cloud. […] although the Internet and Cloud-based computing are generally thought of as wireless, they are anything but; data is transmitted through fibre-optic cables laid under the oceans. Nearly all digital communication between continents is transmitted in this way. My email will be sent via transatlantic fibre-optic cables, even if I am using a Cloud computing service. The Cloud, an attractive buzz word, conjures up images of satellites sending data across the world, but in reality Cloud services are firmly rooted in a distributed network of data centres providing Internet access, largely through cables. Fibre-optic cables provide the fastest means of data transmission and so are generally preferable to satellites.”

Links:

Health care informatics.
Electronic health records.
European influenza surveillance network.
Overfitting.
Public Health Emergency of International Concern.
Virtual Physiological Human project.
Watson (computer).
Natural language processing.
Anthem medical data breach.
Electronic delay storage automatic calculator (EDSAC). LEO (computer). ICL (International Computers Limited).
E-commerce. Online shopping.
Pay-per-click advertising model. Google AdWords. Click fraud. Targeted advertising.
Recommender system. Collaborative filtering.
Anticipatory shipping.
BlackPOS Malware.
Data Encryption Standard algorithm. EFF DES cracker.
Advanced Encryption Standard.
Tempora. PRISM (surveillance program). Edward Snowden. WikiLeaks. Tor (anonymity network). Silk Road (marketplace). Deep web. Internet of Things.
Songdo International Business District. Smart City.
United Nations Global Pulse.

July 19, 2018 Posted by | Books, Computer science, Cryptography, Data, Engineering, Epidemiology, Statistics | Leave a comment

Developmental Biology (II)

Below I have included some quotes from the middle chapters of the book and some links related to the topic coverage. As I pointed out earlier, this is an excellent book on these topics.

Germ cells have three key functions: the preservation of the genetic integrity of the germline; the generation of genetic diversity; and the transmission of genetic information to the next generation. In all but the simplest animals, the cells of the germline are the only cells that can give rise to a new organism. So, unlike body cells, which eventually all die, germ cells in a sense outlive the bodies that produced them. They are, therefore, very special cells […] In order that the number of chromosomes is kept constant from generation to generation, germ cells are produced by a specialized type of cell division, called meiosis, which halves the chromosome number. Unless this reduction by meiosis occurred, the number of chromosomes would double each time the egg was fertilized. Germ cells thus contain a single copy of each chromosome and are called haploid, whereas germ-cell precursor cells and the other somatic cells of the body contain two copies and are called diploid. The halving of chromosome number at meiosis means that when egg and sperm come together at fertilization, the diploid number of chromosomes is restored. […] An important property of germ cells is that they remain pluripotent—able to give rise to all the different types of cells in the body. Nevertheless, eggs and sperm in mammals have certain genes differentially switched off during germ-cell development by a process known as genomic imprinting […] Certain genes in eggs and sperm are imprinted, so that the activity of the same gene is different depending on whether it is of maternal or paternal origin. Improper imprinting can lead to developmental abnormalities in humans. At least 80 imprinted genes have been identified in mammals, and some are involved in growth control. […] A number of developmental disorders in humans are associated with imprinted genes. Infants with Prader-Willi syndrome fail to thrive and later can become extremely obese; they also show mental retardation and mental disturbances […] Angelman syndrome results in severe motor and mental retardation. Beckwith-Wiedemann syndrome is due to a generalized disruption of imprinting on a region of chromosome 7 and leads to excessive foetal overgrowth and an increased predisposition to cancer.”

“Sperm are motile cells, typically designed for activating the egg and delivering their nucleus into the egg cytoplasm. They essentially consist of a nucleus, mitochondria to provide an energy source, and a flagellum for movement. The sperm contributes virtually nothing to the organism other than its chromosomes. In mammals, sperm mitochondria are destroyed following fertilization, and so all mitochondria in the animal are of maternal origin. […] Different organisms have different ways of ensuring fertilization by only one sperm. […] Early development is similar in both male and female mammalian embryos, with sexual differences only appearing at later stages. The development of the individual as either male or female is genetically fixed at fertilization by the chromosomal content of the egg and sperm that fuse to form the fertilized egg. […] Each sperm carries either an X or Y chromosome, while the egg has an X. The genetic sex of a mammal is thus established at the moment of conception, when the sperm introduces either an X or a Y chromosome into the egg. […] In the absence of a Y chromosome, the default development of tissues is along the female pathway. […] Unlike animals, plants do not set aside germ cells in the embryo and germ cells are only specified when a flower develops. Any meristem cell can, in principle, give rise to a germ cell of either sex, and there are no sex chromosomes. The great majority of flowering plants give rise to flowers that contain both male and female sexual organs, in which meiosis occurs. The male sexual organs are the stamens; these produce pollen, which contains the male gamete nuclei corresponding to the sperm of animals. At the centre of the flower are the female sex organs, which consist of an ovary of two carpels, which contain the ovules. Each ovule contains an egg cell.”

“The character of specialized cells such as nerve, muscle, or skin is the result of a particular pattern of gene activity that determines which proteins are synthesized. There are more than 200 clearly recognizable differentiated cell types in mammals. How these particular patterns of gene activity develop is a central question in cell differentiation. Gene expression is under a complex set of controls that include the actions of transcription factors, and chemical modification of DNA. External signals play a key role in differentiation by triggering intracellular signalling pathways that affect gene expression. […] the central feature of cell differentiation is a change in gene expression, which brings about a change in the proteins in the cells. The genes expressed in a differentiated cell include not only those for a wide range of ‘housekeeping’ proteins, such as the enzymes involved in energy metabolism, but also genes encoding cell-specific proteins that characterize a fully differentiated cell: hemoglobin in red blood cells, keratin in skin epidermal cells, and muscle-specific actin and myosin protein filaments in muscle. […] several thousand different genes are active in any given cell in the embryo at any one time, though only a small number of these may be involved in specifying cell fate or differentiation. […] Cell differentiation is known to be controlled by a wide range of external signals but it is important to remember that, while these external signals are often referred to as being ‘instructive’, they are ‘selective’, in the sense that the number of developmental options open to a cell at any given time is limited. These options are set by the cell’s internal state which, in turn, reflects its developmental history. External signals cannot, for example, convert an endodermal cell into a muscle or nerve cell. Most of the molecules that act as developmentally important signals between cells during development are proteins or peptides, and their effect is usually to induce a change in gene expression. […] The same external signals can be used again and again with different effects because the cells’ histories are different. […] At least 1,000 different transcription factors are encoded in the genomes of the fly and the nematode, and as many as 3,000 in the human genome. On average, around five different transcription factors act together at a control region […] In general, it can be assumed that activation of each gene involves a unique combination of transcription factors.”

“Stem cells involve some special features in relation to differentiation. A single stem cell can divide to produce two daughter cells, one of which remains a stem cell while the other gives rise to a lineage of differentiating cells. This occurs in our skin and gut all the time and also in the production of blood cells. It also occurs in the embryo. […] Embryonic stem (ES) cells from the inner cell mass of the early mammalian embryo when the primitive streak forms, can, in culture, differentiate into a wide variety of cell types, and have potential uses in regenerative medicine. […] it is now possible to make adult body cells into stem cells, which has important implications for regenerative medicine. […] The goal of regenerative medicine is to restore the structure and function of damaged or diseased tissues. As stem cells can proliferate and differentiate into a wide range of cell types, they are strong candidates for use in cell-replacement therapy, the restoration of tissue function by the introduction of new healthy cells. […] The generation of insulin-producing pancreatic β cells from ES cells to replace those destroyed in type 1 diabetes is a prime medical target. Treatments that direct the differentiation of ES cells towards making endoderm derivatives such as pancreatic cells have been particularly difficult to find. […] The neurodegenerative Parkinson disease is another medical target. […] To generate […] stem cells of the patient’s own tissue type would be a great advantage, and the recent development of induced pluripotent stem cells (iPS cells) offers […] exciting new opportunities. […] There is [however] risk of tumour induction in patients undergoing cell-replacement therapy with ES cells or iPS cells; undifferentiated pluripotent cells introduced into the patient could cause tumours. Only stringent selection procedures that ensure no undifferentiated cells are present in the transplanted cell population will overcome this problem. And it is not yet clear how stable differentiated ES cells and iPS cells will be in the long term.”

“In general, the success rate of cloning by body-cell nuclear transfer in mammals is low, and the reasons for this are not yet well understood. […] Most cloned mammals derived from nuclear transplantation are usually abnormal in some way. The cause of failure is incomplete reprogramming of the donor nucleus to remove all the earlier modifications. A related cause of abnormality may be that the reprogrammed genes have not gone through the normal imprinting process that occurs during germ-cell development, where different genes are silenced in the male and female parents. The abnormalities in adults that do develop from cloned embryos include early death, limb deformities and hypertension in cattle, and immune impairment in mice. All these defects are thought to be due to abnormalities of gene expression that arise from the cloning process. Studies have shown that some 5% of the genes in cloned mice are not correctly expressed and that almost half of the imprinted genes are incorrectly expressed.”

“Organ development involves large numbers of genes and, because of this complexity, general principles can be quite difficult to distinguish. Nevertheless, many of the mechanisms used in organogenesis are similar to those of earlier development, and certain signals are used again and again. Pattern formation in development in a variety of organs can be specified by position information, which is specified by a gradient in some property. […] Not surprisingly, the vascular system, including blood vessels and blood cells, is among the first organ systems to develop in vertebrate embryos, so that oxygen and nutrients can be delivered to the rapidly developing tissues. The defining cell type of the vascular system is the endothelial cell, which forms the lining of the entire circulatory system, including the heart, veins, and arteries. Blood vessels are formed by endothelial cells and these vessels are then covered by connective tissue and smooth muscle cells. Arteries and veins are defined by the direction of blood flow as well as by structural and functional differences; the cells are specified as arterial or venous before they form blood vessels but they can switch identity. […] Differentiation of the vascular cells requires the growth factor VEGF (vascular endothelial growth factor) and its receptors, and VEGF stimulates their proliferation. Expression of the Vegf gene is induced by lack of oxygen and thus an active organ using up oxygen promotes its own vascularization. New blood capillaries are formed by sprouting from pre-existing blood vessels and proliferation of cells at the tip of the sprout. […] During their development, blood vessels navigate along specific paths towards their targets […]. Many solid tumours produce VEGF and other growth factors that stimulate vascular development and so promote the tumour’s growth, and blocking new vessel formation is thus a means of reducing tumour growth. […] In humans, about 1 in 100 live-born infants has some congenital heart malformation, while in utero, heart malformation leading to death of the embryo occurs in between 5 and 10% of conceptions.”

“Separation of the digits […] is due to the programmed cell death of the cells between these digits’ cartilaginous elements. The webbed feet of ducks and other waterfowl are simply the result of less cell death between the digits. […] the death of cells between the digits is essential for separating the digits. The development of the vertebrate nervous system also involves the death of large numbers of neurons.”

Links:

Budding.
Gonad.
Down Syndrome.
Fertilization. In vitro fertilisation. Preimplantation genetic diagnosis.
SRY gene.
X-inactivation. Dosage compensation.
Cellular differentiation.
MyoD.
Signal transduction. Enhancer (genetics).
Epigenetics.
Hematopoiesis. Hematopoietic stem cell transplantation. Hemoglobin. Sickle cell anemia.
Skin. Dermis. Fibroblast. Epidermis.
Skeletal muscle. Myogenesis. Myoblast.
Cloning. Dolly.
Organogenesis.
Limb development. Limb bud. Progress zone model. Apical ectodermal ridge. Polarizing region/Zone of polarizing activity. Sonic hedgehog.
Imaginal disc. Pax6. Aniridia. Neural tube.
Branching morphogenesis.
Pistil.
ABC model of flower development.

July 16, 2018 Posted by | Biology, Books, Botany, Cancer/oncology, Diabetes, Genetics, Medicine, Molecular biology, Ophthalmology | Leave a comment

Big Data (I?)

Below are a few observations from the first half of the book, as well as some links related to the topic coverage.

“The data we derive from the Web can be classified as structured, unstructured, or semi-structured. […] Carefully structured and tabulated data is relatively easy to manage and is amenable to statistical analysis, indeed until recently statistical analysis methods could be applied only to structured data. In contrast, unstructured data is not so easily categorized, and includes photos, videos, tweets, and word-processing documents. Once the use of the World Wide Web became widespread, it transpired that many such potential sources of information remained inaccessible because they lacked the structure needed for existing analytical techniques to be applied. However, by identifying key features, data that appears at first sight to be unstructured may not be completely without structure. Emails, for example, contain structured metadata in the heading as well as the actual unstructured message […] and so may be classified as semi-structured data. Metadata tags, which are essentially descriptive references, can be used to add some structure to unstructured data. […] Dealing with unstructured data is challenging: since it cannot be stored in traditional databases or spreadsheets, special tools have had to be developed to extract useful information. […] Approximately 80 per cent of the world’s data is unstructured in the form of text, photos, and images, and so is not amenable to the traditional methods of structured data analysis. ‘Big data’ is now used to refer not just to the total amount of data generated and stored electronically, but also to specific datasets that are large in both size and complexity, with which new algorithmic techniques are required in order to extract useful information from them.”
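
As a small illustration of the email example above (my own sketch; the message and addresses are invented), Python’s standard email module separates the structured header metadata from the unstructured body:

```python
# Small example of the email point made above: the headers are structured
# metadata (key/value pairs), while the body is unstructured free text, which
# is why email is usually classed as semi-structured data.
from email import message_from_string

raw = """\
From: alice@example.com
To: bob@example.com
Date: Mon, 16 Jul 2018 09:30:00 +0000
Subject: Quarterly figures

Hi Bob, the numbers look fine to me -- let's discuss on Friday. Alice
"""

msg = message_from_string(raw)
print(msg["From"], "->", msg["To"])   # structured part: queryable fields
print(msg["Subject"])
print(msg.get_payload().strip())      # unstructured part: free text
```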

“In the digital age we are no longer entirely dependent on samples, since we can often collect all the data we need on entire populations. But the size of these increasingly large sets of data cannot alone provide a definition for the term ‘big data’ — we must include complexity in any definition. Instead of carefully constructed samples of ‘small data’ we are now dealing with huge amounts of data that has not been collected with any specific questions in mind and is often unstructured. In order to characterize the key features that make data big and move towards a definition of the term, Doug Laney, writing in 2001, proposed using the three ‘v’s: volume, variety, and velocity. […] ‘Volume’ refers to the amount of electronic data that is now collected and stored, which is growing at an ever-increasing rate. Big data is big, but how big? […] Generally, we can say the volume criterion is met if the dataset is such that we cannot collect, store, and analyse it using traditional computing and statistical methods. […] Although a great variety of data [exists], ultimately it can all be classified as structured, unstructured, or semi-structured. […] Velocity is necessarily connected with volume: the faster the data is generated, the more there is. […] Velocity also refers to the speed at which data is electronically processed. For example, sensor data, such as that generated by an autonomous car, is necessarily generated in real time. If the car is to work reliably, the data […] must be analysed very quickly […] Variability may be considered as an additional dimension of the velocity concept, referring to the changing rates in flow of data […] computer systems are more prone to failure [during peak flow periods]. […] As well as the original three ‘v’s suggested by Laney, we may add ‘veracity’ as a fourth. Veracity refers to the quality of the data being collected. […] Taken together, the four main characteristics of big data – volume, variety, velocity, and veracity – present a considerable challenge in data management.” [As regular readers of this blog might be aware, not everybody would agree with the author here about the inclusion of veracity as a defining feature of big data – “Many have suggested that there are more V’s that are important to the big data problem [than volume, variety & velocity] such as veracity and value (IEEE BigData 2013). Veracity refers to the trustworthiness of the data, and value refers to the value that the data adds to creating knowledge about a topic or situation. While we agree that these are important data characteristics, we do not see these as key features that distinguish big data from regular data. It is important to evaluate the veracity and value of all data, both big and small. (Knoth & Schmid)]

“Anyone who uses a personal computer, laptop, or smartphone accesses data stored in a database. Structured data, such as bank statements and electronic address books, are stored in a relational database. In order to manage all this structured data, a relational database management system (RDBMS) is used to create, maintain, access, and manipulate the data. […] Once […] the database [has been] constructed we can populate it with data and interrogate it using structured query language (SQL). […] An important aspect of relational database design involves a process called normalization which includes reducing data duplication to a minimum and hence reduces storage requirements. This allows speedier queries, but even so as the volume of data increases the performance of these traditional databases decreases. The problem is one of scalability. Since relational databases are essentially designed to run on just one server, as more and more data is added they become slow and unreliable. The only way to achieve scalability is to add more computing power, which has its limits. This is known as vertical scalability. So although structured data is usually stored and managed in an RDBMS, when the data is big, say in terabytes or petabytes and beyond, the RDBMS no longer works efficiently, even for structured data. An important feature of relational databases and a good reason for continuing to use them is that they conform to the following group of properties: atomicity, consistency, isolation, and durability, usually known as ACID. Atomicity ensures that incomplete transactions cannot update the database; consistency excludes invalid data; isolation ensures one transaction does not interfere with another transaction; and durability means that the database must update before the next transaction is carried out. All these are desirable properties but storing and accessing big data, which is mostly unstructured, requires a different approach. […] given the current data explosion there has been intensive research into new storage and management techniques. In order to store these massive datasets, data is distributed across servers. As the number of servers involved increases, the chance of failure at some point also increases, so it is important to have multiple, reliably identical copies of the same data, each stored on a different server. Indeed, with the massive amounts of data now being processed, systems failure is taken as inevitable and so ways of coping with this are built into the methods of storage.”
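
As a minimal sketch of the create/populate/query workflow described above – using Python’s built-in sqlite3 module, with a table and rows invented purely for illustration:

```python
# Minimal sketch of the relational workflow described above: define a schema,
# populate it, and interrogate it with SQL. Uses Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")        # throwaway in-memory database
cur = conn.cursor()

cur.execute("""CREATE TABLE transactions (
                   id INTEGER PRIMARY KEY,
                   customer TEXT NOT NULL,
                   amount REAL NOT NULL,
                   day TEXT NOT NULL)""")

cur.executemany("INSERT INTO transactions (customer, amount, day) VALUES (?, ?, ?)",
                [("Ann", 19.99, "2018-07-01"),
                 ("Ben", 5.50, "2018-07-01"),
                 ("Ann", 42.00, "2018-07-02")])
conn.commit()                             # the 'durability' part of ACID

# A structured query: total spend per customer.
for row in cur.execute("""SELECT customer, SUM(amount)
                          FROM transactions
                          GROUP BY customer
                          ORDER BY customer"""):
    print(row)                            # ('Ann', 61.99), ('Ben', 5.5)
conn.close()
```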

“A distributed file system (DFS) provides effective and reliable storage for big data across many computers. […] Hadoop DFS [is] one of the most popular DFS […] When we use Hadoop DFS, the data is distributed across many nodes, often tens of thousands of them, physically situated in data centres around the world. […] The NameNode deals with all requests coming in from a client computer; it distributes storage space, and keeps track of storage availability and data location. It also manages all the basic file operations (e.g. opening and closing files) and controls data access by client computers. The DataNodes are responsible for actually storing the data and in order to do so, create, delete, and replicate blocks as necessary. Data replication is an essential feature of the Hadoop DFS. […] It is important that several copies of each block are stored so that if a DataNode fails, other nodes are able to take over and continue with processing tasks without loss of data. […] Data is written to a DataNode only once but will be read by an application many times. […] One of the functions of the NameNode is to determine the best DataNode to use given the current usage, ensuring fast data access and processing. The client computer then accesses the data block from the chosen node. DataNodes are added as and when required by the increased storage requirements, a feature known as horizontal scalability. One of the main advantages of Hadoop DFS over a relational database is that you can collect vast amounts of data, keep adding to it, and, at that time, not yet have any clear idea of what you want to use it for. […] structured data with identifiable rows and columns can be easily stored in a RDBMS while unstructured data can be stored cheaply and readily using a DFS.”

NoSQL is the generic name used to refer to non-relational databases and stands for Not only SQL. […] The non-relational model has some features that are necessary in the management of big data, namely scalability, availability, and performance. With a relational database you cannot keep scaling vertically without loss of function, whereas with NoSQL you scale horizontally and this enables performance to be maintained. […] Within the context of a distributed database system, consistency refers to the requirement that all copies of data should be the same across nodes. […] Availability requires that if a node fails, other nodes still function […] Data, and hence DataNodes, are distributed across physically separate servers and communication between these machines will sometimes fail. When this occurs it is called a network partition. Partition tolerance requires that the system continues to operate even if this happens. In essence, what the CAP [Consistency, Availability, Partition Tolerance] Theorem states is that for any distributed computer system, where the data is shared, only two of these three criteria can be met. There are therefore three possibilities; the system must be: consistent and available, consistent and partition tolerant, or partition tolerant and available. Notice that since in a RDMS the network is not partitioned, only consistency and availability would be of concern and the RDMS model meets both of these criteria. In NoSQL, since we necessarily have partitioning, we have to choose between consistency and availability. By sacrificing availability, we are able to wait until consistency is achieved. If we choose instead to sacrifice consistency it follows that sometimes the data will differ from server to server. The somewhat contrived acronym BASE (Basically Available, Soft, and Eventually consistent) is used as a convenient way of describing this situation. BASE appears to have been chosen in contrast to the ACID properties of relational databases. ‘Soft’ in this context refers to the flexibility in the consistency requirement. The aim is not to abandon any one of these criteria but to find a way of optimizing all three, essentially a compromise. […] The name NoSQL derives from the fact that SQL cannot be used to query these databases. […] There are four main types of non-relational or NoSQL database: key-value, column-based, document, and graph – all useful for storing large amounts of structured and semi-structured data. […] Currently, an approach called NewSQL is finding a niche. […] the aim of this latent technology is to solve the scalability problems associated with the relational model, making it more useable for big data.”

“A popular way of dealing with big data is to divide it up into small chunks and then process each of these individually, which is basically what MapReduce does by spreading the required calculations or queries over many, many computers. […] Bloom filters are particularly suited to applications where storage is an issue and where the data can be thought of as a list. The basic idea behind Bloom filters is that we want to build a system, based on a list of data elements, to answer the question ‘Is X in the list?’ With big datasets, searching through the entire set may be too slow to be useful, so we use a Bloom filter which, being a probabilistic method, is not 100 per cent accurate—the algorithm may decide that an element belongs to the list when actually it does not; but it is a fast, reliable, and storage efficient method of extracting useful knowledge from data. Bloom filters have many applications. For example, they can be used to check whether a particular Web address leads to a malicious website. In this case, the Bloom filter would act as a blacklist of known malicious URLs against which it is possible to check, quickly and accurately, whether it is likely that the one you have just clicked on is safe or not. Web addresses newly found to be malicious can be added to the blacklist. […] A related example is that of malicious email messages, which may be spam or may contain phishing attempts. A Bloom filter provides us with a quick way of checking each email address and hence we would be able to issue a timely warning if appropriate. […] they can [also] provide a very useful way of detecting fraudulent credit card transactions.”
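
To make the Bloom filter description concrete, here is a toy implementation (my own sketch, not code from the book): k salted hash functions set bits in a fixed-size bit array, and a lookup reports ‘possibly present’ only if all k bits are set, so false positives are possible but false negatives are not.

```python
# Toy Bloom filter (my own sketch, not code from the book): k hash functions
# set bits in a fixed-size bit array; lookups may return false positives but
# never false negatives, which is the trade-off described above.
import hashlib

class BloomFilter:
    def __init__(self, size=1000, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# A hypothetical blacklist of malicious URLs, as in the example above.
blacklist = BloomFilter()
for url in ["http://malware.example/a", "http://phish.example/b"]:
    blacklist.add(url)

print("http://malware.example/a" in blacklist)   # True (definitely added)
print("http://honest.example/c" in blacklist)    # False, or rarely a false positive
```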

Links:

Data.
Punched card.
Clickstream log.
HTTP cookie.
Australian Square Kilometre Array Pathfinder.
The Millionaire Calculator.
Data mining.
Supervised machine learning.
Unsupervised machine learning.
Statistical classification.
Cluster analysis.
Moore’s Law.
Cloud storage. Cloud computing.
Data compression. Lossless data compression. Lossy data compression.
ASCII. Huffman algorithm. Variable-length encoding.
Data compression ratio.
Grayscale.
Discrete cosine transform.
JPEG.
Bit array. Hash function.
PageRank algorithm.
Common crawl.

July 14, 2018 Posted by | Books, Computer science, Data, Statistics | Leave a comment

Quotes

i. “I only study the things I like; I apply my mind only to matters that interest me. They’ll be useful — or useless — to me or to others in due course, I’ll be given — or not given — the opportunity of benefiting from what I’ve learned. In any case, I’ll have enjoyed the inestimable advantage of doing things I like doing and following my own inclinations.” (Nicolas Chamfort)

ii. “Every day I add to the list of things I refuse to discuss. The wiser the man, the longer the list.” (-ll-)

iii. “There are more fools than wise men, and even in a wise man there is more folly than wisdom.” (-ll-)

iv. “People are always annoyed by men of letters who retreat from the world; they expect them to continue to show interest in society even though they gain little benefit from it. They would like to force them be present when lots are being drawn in a lottery for which they have no tickets.” (-ll-)

v. “Eminence without merit earns deference without esteem.” (-ll-)

vi. “Not everyone is worth listening to.” (Alain de Botton)

vii. “Innovation comes from those who see things that others don’t.” (Steve Blank)

viii. “Writing improves in direct ratio to the number of things we can keep out of it that shouldn’t be there.” (William Zinsser)

ix. “Good approximations often lead to better ones.” (George Pólya)

x. “Children have to be educated, but they have also to be left to educate themselves.” (Ernest Dimnet)

xi. “Intellectual brilliance is no guarantee against being dead wrong.” (David Fasold)

xii. “Doubt is the beginning of wisdom. It means caution, independence, honesty and veracity. […] The man who never doubts never thinks.” (George William Foote)

xiii. “The idea that all problems either have a solution or can be shown to be pseudo-problems is not one I share.” (Raymond Geuss)

xiv. “Asking what the question is, and why the question is asked, is always asking a pertinent question.” (-ll-)

xv. “In many of the cases of conceptual innovation, … creating the conceptual tools is a precondition to coming to a clear understanding of what the problem was in the first place. It is very difficult to describe the transition after it has taken place because it is difficult for us to put ourselves back into the situation of confusion, indeterminacy, and perplexity that existed before the new “tool” brought clarity and this means it is difficult for us to retain a vivid sense of what a difference having the concept made.” (-ll-)

xvi. “I’m not a mathematician, but I’ve been hanging around with some of them long enough to know how the game is played.” (Brian Hayes)

xvii. “None is so deaf as those that will not hear.” (Matthew Henry)

xviii. “People who habitually speak positively of others tend to do so in all circumstances. Those who tend to criticize others in your presence and recruit you to agree with their cutting remarks will probably criticize you when you are out of the room.” (John Hoover (consultant))

xix. “People don’t learn much about themselves or others while they’re succeeding in spite of poor practices. When the real outcomes reflect the real work being done, the real learning begins.” (-ll-)

xx. “Respect yourself, if you want others to respect you.” (Adolph Freiherr Knigge)

July 12, 2018 Posted by | Quotes/aphorisms | Leave a comment

American Naval History (II)

I have added some observations and links related to the second half of the book’s coverage below.

“The revival of the U.S. Navy in the last two decades of the nineteenth century resulted from a variety of circumstances. The most immediate was the simple fact that the several dozen ships retained from the Civil War were getting so old that they had become antiques. […] In 1883 therefore Congress authorized the construction of three new cruisers and one dispatch vessel, its first important naval appropriation since Appomattox. […] By 1896 […] five […] new battleships had been completed and launched, and a sixth (the Iowa) joined them a year later. None of these ships had been built to meet a perceived crisis or a national emergency. Instead the United States had finally embraced the navalist argument that a mature nation-state required a naval force of the first rank. Soon enough circumstances would offer an opportunity to test both the ships and the theory. […] the United States declared war against Spain on April 25, 1898. […] Active hostilities lasted barely six months and were punctuated by two entirely one-sided naval engagements […] With the peace treaty signed in Paris in December 1898, Spain granted Cuba its independence, though the United States assumed significant authority on the island and in 1903 negotiated a lease that gave the U.S. Navy control of Guantánamo Bay on Cuba’s south coast. Spain also ceded the Philippines, Puerto Rico, Guam, and Wake Island to the United States, which paid Spain $20 million for them. Separately but simultaneously the annexation of the Kingdom of Hawaii, along with the previous annexation of Midway, gave the United States a series of Pacific Ocean stepping stones, each a potential refueling stop, that led from Hawaii to Midway, to Wake, to Guam, and to the Philippines. It made the United States not merely a continental power but a global power. […] between 1906 and 1908, no fewer than thirteen new battleships joined the fleet.”

“At root submarine warfare in the twentieth century was simply a more technologically advanced form of commerce raiding. In its objective it resembled both privateering during the American Revolution and the voyages of the CSS Alabama and other raiders during the Civil War. Yet somehow striking unarmed merchant ships from the depths, often without warning, seemed particularly heinous. Just as the use of underwater mines in the Civil War had horrified contemporaries before their use became routine, the employment of submarines against merchant shipping shocked public sentiment in the early months of World War I. […] American submarines accounted for 55 percent of all Japanese ship losses in the Pacific theater of World War II”.

“By late 1942 the first products of the Two-Ocean Navy Act of 1940 began to join the fleet. Whereas in June 1942, the United States had been hard-pressed to assemble three aircraft carriers for the Battle of Midway, a year later twenty-four new Essex-class aircraft carriers joined the fleet, each of them displacing more than 30,000 tons and carrying ninety to one hundred aircraft. Soon afterward nine more Independence-class carriers joined the fleet. […] U.S. shipyards also turned out an unprecedented number of cruisers, destroyers, and destroyer escorts, plus more than 2,700 Liberty Ships—the essential transport and cargo vessels of the war—as well as thousands of specialized landing ships essential to amphibious operations. In 1943 alone American shipyards turned out more than eight hundred of the large LSTs and LCIs, plus more than eight thousand of the smaller landing craft known as Higgins boats […] In the three weeks after D-Day, Allied landing ships and transports put more than 300,000 men, fifty thousand vehicles, and 150,000 tons of supplies ashore on Omaha Beach alone. By the first week of July the Allies had more than a million fully equipped soldiers ashore ready to break out of their enclave in Normandy and Brittany […] Having entered World War II with eleven active battleships and seven aircraft carriers, the U.S. Navy ended the war with 120 battleships and cruisers and nearly one hundred aircraft carriers (including escort carriers). Counting the smaller landing craft, the U.S. Navy listed an astonishing sixty-five thousand vessels on its register of warships and had more than four million men and women in uniform. It was more than twice as large as all the rest of the navies of the world combined. […] In the eighteen months after the end of the war, the navy processed out 3.5 million officers and enlisted personnel who returned to civilian life and their families, going back to work or attending college on the new G.I. Bill. In addition thousands of ships were scrapped or mothballed, assigned to what was designated as the National Defense Reserve Fleet and tied up in long rows at navy yards from California to Virginia. Though the navy boasted only about a thousand ships on active service by the end of 1946, that was still more than twice as many as before the war.”

“The Korean War ended in a stalemate, yet American forces, supported by troops from South Korea and other United Nations members, succeeded in repelling the first cross-border invasion by communist forces during the Cold War. That encouraged American lawmakers to continue support of a robust peacetime navy, and of military forces generally. Whereas U.S. military spending in 1950 had totaled $141 billion, for the rest of the 1950s it averaged over $350 billion per year. […] The overall architecture of American and Soviet rivalry influenced, and even defined, virtually every aspect of American foreign and defense policy in the Cold War years. Even when the issue at hand had little to do with the Soviet Union, every political and military dispute from 1949 onward was likely to be viewed through the prism of how it affected the East-West balance of power. […] For forty years the United States and the U.S. Navy had centered all of its attention on the rivalry with the Soviet Union. All planning for defense budgets, for force structure, and for the design of weapons systems grew out of assessments of the Soviet threat. The dissolution of the Soviet Union therefore compelled navy planners to revisit almost all of their assumptions. It did not erase the need for a global U.S. Navy, for even as the Soviet Union was collapsing, events in the Middle East and elsewhere provoked serial crises that led to the dispatch of U.S. naval combat groups to a variety of hot spots around the world. On the other hand, these new threats were so different from those of the Cold War era that the sophisticated weaponry the United States had developed to deter and, if necessary, defeat the Soviet Union did not necessarily meet the needs of what President George H. W. Bush called “a new world order.”

“The official roster of U.S. Navy warships in 2014 listed 283 “battle force ships” on active service. While that is fewer than at any time since World War I, those ships possess more capability and firepower than the rest of the world’s navies combined. […] For the present, […] as well as for the foreseeable future, the U.S. Navy remains supreme on the oceans of the world.”

Links:

USS Ticonderoga (1862).
Virginius Affair.
ABCD ships.
Stephen Luce. Naval War College.
USS Maine. USS Texas. USS Indiana (BB-1). USS Massachusetts (BB-2). USS Oregon (BB-3). USS Iowa (BB-4).
Benjamin Franklin Tracy.
Alfred Thayer Mahan. The Influence of Sea Power upon History: 1660–1783.
George Dewey.
William T. Sampson.
Great White Fleet.
USS Maine (BB-10). USS Missouri (BB-11). USS New Hampshire (BB-25).
HMS Dreadnought (1906). Dreadnought. Pre-dreadnought battleship.
Hay–Herrán Treaty. United States construction of the Panama canal, 1904–1914.
Bradley A. Fiske.
William S. Benson. Chief of Naval Operations.
RMS Lusitania. Unrestricted submarine warfare.
Battle of Jutland. Naval Act of 1916 (‘Big Navy Act of 1916’).
William Sims.
Sacred Twenty. WAVES.
Washington Naval Treaty. ‘Treaty cruisers’.
Aircraft carrier. USS Lexington (CV-2). USS Saratoga (CV-3).
War Plan Orange.
Carl Vinson. Naval Act of 1938.
Lend-Lease.
Battle of the Coral Sea. Battle of Midway.
Ironbottom Sound.
Battle of the Atlantic. Wolfpack (naval tactic).
Operation Torch.
Pacific Ocean theater of World War II. Battle of Leyte Gulf.
Operation Overlord. Operation Neptune. Alan Goodrich Kirk. Bertram Ramsay.
Battle of Iwo Jima. Battle of Okinawa.
Cold War. Revolt of the Admirals.
USS Nautilus. SSBN. USS George Washington.
Ohio-class submarine.
UGM-27 Polaris. UGM-73 Poseidon. UGM-96 Trident I.
Korean War. Battle of Inchon.
United States Sixth Fleet.
Cuban Missile Crisis.
Vietnam War. USS Maddox. Gulf of Tonkin Resolution. Operation Market Time. Patrol Craft Fast. Patrol Boat, River. Operation Game Warden.
Elmo Zumwalt. ‘Z-grams’.
USS Cole bombing.
Operation Praying Mantis.
Gulf War.
Combined Task Force 150.
United States Navy SEALs.
USS Zumwalt.

July 12, 2018 Posted by | Books, History, Wikipedia | Leave a comment

100 Cases in Orthopaedics and Rheumatology (II)

Below I have added some links related to the last half of the book’s coverage, as well as some more observations from the book.

Scaphoid fracture. Watson’s test. Dorsal intercalated segment instability. (“Non-union is not uncommon as a complication after scaphoid fractures because the blood supply to this bone is poor. Smokers have a higher incidence of non-union. Occasionally, the blood supply is poor enough to lead to avascular necrosis. If non-union is not detected, subsequent arthritis in the wrist can develop.”)
Septic arthritis. (“Septic arthritis is an orthopaedic emergency. […] People with septic arthritis are typically unwell with fevers and malaise and the joint pain is severe. […] Any acutely hot or painful joint is septic arthritis until proven otherwise.”)
Rheumatoid arthritis. (“[RA is] the most common of the inflammatory arthropathies. […] early-morning stiffness and pain, combined with soft-tissue rather than bony swelling, are classic patterns for inflammatory disease. Although […] RA affects principally the small joints of the hands (and feet), it may progress to involve any synovial joint and may be complicated by extra-articular features […] family history [of the disease] is not unusual due to the presence of susceptibility genes such as HLA-DR. […] Not all patients with RA have rheumatoid factor (RF), and not all patients with RF have RA; ACPA has greater specificity for RA than rheumatoid factor. […] Medical therapy focuses on disease-modifying anti-rheumatic drugs (DMARDs) such as methotrexate, sulphasalazine, leflunomide and hydroxychloroquine which may be used individually or in combination. […] Disease activity in RA is measured by the disease activity score (DAS), which is a composite score of the clinical evidence of synovitis, the current inflammatory response and the patient’s own assessment of their health. […] Patients who have high disease activity as determined by the DAS and have either failed or failed to tolerate standard disease modifying therapy qualify for biologic therapy – monoclonal antibodies that are directed against key components of the inflammatory response. […] TNF-α blockade is highly effective in up to 70 per cent of patients, reducing both inflammation and the progressive structural damage associated with severe active disease.”)
Ankylosing spondylitis. Ankylosis. Schober’s index. Costochondritis.
Mononeuritis multiplex. (“Mononeuritis multiplex arises due to interruption of the vasa nervorum, the blood supply to peripheral nerves […] Mononeuritis multiplex is commonly caused by diabetes or vasculitis. […] Vasculitis – inflammation of blood vessels and subsequent obstruction to blood flow – can be primary (idiopathic) or secondary, in which case it is associated with an underlying condition such as rheumatoid arthritis. The vasculitides are classified according to the size of the vessel involved. […] Management of mononeuritis multiplex is based on potent immunosuppression […] and the treatment of underlying infections such as hepatitis.”)
Multiple myeloma. Bence-Jones protein. (“The combination of bone pain and elevated ESR and calcium is suggestive of multiple myeloma.”)
Osteoporosis. DEXA scan. T-score. (“Postmenopausal bone loss is the most common cause of osteoporosis, but secondary osteoporosis may occur in the context of a number of medical conditions […] Steroid-induced osteoporosis is a significant problem in medical practice. […] All patients receiving corticosteroids should have bone protection […] Pharmacological treatment in the form of calcium supplementation and bisphosphonates to reduce osteoclast activity is effective but compliance is typically poor.”)
Osteomalacia. Rickets. Craniotabes.
Paget’s disease (see also this post). (“In practical terms, the main indication to treat Paget’s disease is pain […] although bone deformity or compression syndromes (or risk thereof) would also prompt therapy. The treatment of choice is a bisphosphonate to diminish osteoclast activity”).
Stress fracture. Female athlete triad. (“Stress fractures are overuse injuries and occur when periosteal resorption exceeds bone formation. They are commonly seen in two main patient groups: soldiers may suffer so-called march fractures in the metatarsals, while athletes may develop them in different sites according to their sporting activity. Although the knee is a common site in runners due to excess mechanical loading, stress fractures may also result in non-weight-bearing sites due to repetitive and excessive traction […]. The classic symptom […] is of pain that occurs throughout running and crucially persists with rest; this is in contrast to shin splints, a traction injury to the tibial periosteum in which the pain diminishes somewhat with continued activity […] The crucial feature of rehabilitation is a graded return to sport to prevent progression or recurrence.”)
Psoriatic arthritis. (“Arthropathy and rash is a common combination in rheumatology […] Psoriatic arthritis is a common inflammatory arthropathy that affects up to 15 per cent of those with psoriasis. […] Nail disease is very helpful in differentiating psoriatic arthritis from other forms of inflammatory arthropathy.”)
Ehlers–Danlos syndromes. Marfan syndrome. Beighton (hypermobility) score.
Carpal tunnel syndrome. (“Carpal tunnel syndrome is the most common entrapment neuropathy […] The classic symptoms are of tingling in the sensory distribution of the median nerve (i.e. the lateral three and a half digits); loss of thumb abduction is a late feature. Symptoms are often worse at night (when the hand might be quite painful) and in certain postures […] The majority of cases are idiopathic, but pregnancy and rheumatoid arthritis are very common precipitating causes […] The majority of patients will respond well to conservative management […] If these measures fail, corticosteroid injection into the carpal tunnel can be very effective in up to 80 per cent of patients. Surgical decompression should be reserved for those with persistent disabling symptoms or motor loss.”)
Mixed connective tissue disease.
Crystal arthropathy. Tophus. Uric acid nephropathy. Chondrocalcinosis. (“In any patient presenting with an acutely painful and swollen joint, the most important diagnoses to consider are septic arthritis and crystal arthropathy. Crystal arthropathy such as gout is more common than septic arthritis […] Gout may be precipitated by diuretics, renal impairment and aspirin use”).
Familial Mediterranean fever. Amyloidosis.
Systemic lupus erythematosus (see also this). Jaccoud arthropathy. Lupus nephritis. (“Renal disease is the most feared complication of SLE.”)
Scleroderma. Raynaud’s phenomenon. (“Scleroderma is an uncommon disorder characterized by thickening of the skin and, to a greater or lesser degree, fibrosis of internal organs.”)
Henoch-Schönlein purpura. Cryoglobulinemia. (“Purpura are the result of a spontaneous extravasation of blood from the capillaries into the skin. If small they are known as petechiae, when they are large they are termed ecchymoses. There is an extensive differential diagnosis for purpura […] The combination of palpable purpura (distributed particularly over the buttocks and extensor surfaces of legs), abdominal pain, arthritis and renal disease is a classic presentation of Henoch–Schönlein purpura (HSP). HSP is a distinct and frequently self-limiting small-vessel vasculitis that can affect any age; but the majority of cases present in children aged 2–10 years, in whom the prognosis is more benign than the adult form, often remitting entirely within 3–4 months. The abdominal pain may mimic a surgical abdomen and can presage intussusception, haemorrhage or perforation. The arthritis, in contrast, is relatively mild and tends to affect the knees and ankles.”)
Rheumatic fever.
Erythema nodosum. (“Mild idiopathic erythema nodosum […] needs no specific treatment”).
Rheumatoid lung disease. Bronchiolitis obliterans. Methotrexate-induced pneumonitis. Hamman–Rich syndrome.
Antiphospholipid syndrome. Sapporo criteria. (“Antiphospholipid syndrome is a hypercoagulable state characterized by recurrent arteriovenous thrombosis and/or pregnancy morbidity in the presence of either a lupus anticoagulant or anticardiolipin antibody (both phospholipid-related proteins). […] The most common arteriovenous thrombotic events in antiphospholipid syndrome are deep venous thrombosis and pulmonary embolus […], but any part of the circulation may be involved, with arterial events such as myocardial infarction and stroke carrying a high mortality rate. Poor placental circulation is thought to be responsible for the high pregnancy morbidity, with recurrent first- and second-trimester loss and a higher rate of pre-eclampsia being typical clinical features.”)
Still’s disease. (“Consider inflammatory disease in cases of pyrexia of unknown origin.”)
Polymyalgia rheumatica. Giant cell arteritis. (“[P]olymyalgia rheumatica (PMR) [is] a systemic inflammatory syndrome affecting the elderly that is characterized by bilateral pain and stiffness in the shoulders and hip girdles. The stiffness can be profound and limits mobility although true muscle weakness is not a feature. […] The affected areas are diffusely tender, with movements limited by pain. […] care must be taken not to attribute joint inflammation to PMR until other diagnoses have been excluded; for example, a significant minority of RA patients may present with a polymyalgic onset. […] The treatment for PMR is low-dose corticosteroids. […] Many physicians would consider a dramatic response to low-dose prednisolone as almost diagnostic for PMR, so if a patient’s symptoms do not improve rapidly it is wise to re-evaluate the original diagnosis.”)
Relapsing polychondritis. (“Relapsing polychondritis is characterized histologically by inflammatory infiltration and later fibrosis of cartilage. Any cartilage, in any location, is at risk. […] Treatment of relapsing polychondritis is with corticosteroids […] Surgical reconstruction of collapsed structures is not an option as the deformity tends to continue postoperatively.”)
Dermatomyositis. Gottron’s Papules.
Enteropathic arthritis. (“A seronegative arthritis may develop in up to 15 per cent of patients with any form of inflammatory bowel disease, including ulcerative colitis (UC), Crohn’s disease or microscopic and collagenous colitis. The most common clinical presentations are a peripheral arthritis […] and spondyloarthritis.”)
Reflex sympathetic dystrophy.
Whipple’s disease. (“Although rare, consider Whipple’s disease in any patient presenting with malabsorption, weight loss and arthritis.”)
Wegener’s granulomatosis. (“Small-vessel vasculitis may cause a pulmonary-renal syndrome. […] The classic triad of Wegener’s granulomatosis is the presence of upper and lower respiratory tract disease and renal impairment.”)
Reactive arthritis. Reiter’s syndrome. (“Consider reactive arthritis in any patient presenting with a monoarthropathy. […] Reactive arthritis is generally benign, with up to 80 per cent making a full recovery.”)
Sarcoidosis. Löfgren syndrome.
Polyarteritis nodosa. (“Consider mesenteric ischaemia in any patient presenting with a systemic illness and postprandial abdominal pain.”)
Sjögren syndrome. Schirmer’s test.
Behçet syndrome.
Lyme disease. Erythema chronicum migrans. (“The combination of rash leading to arthralgia and cranial neuropathy is a classic presentation of Lyme disease.”)
Takayasu arteritis. (“Takayasu’s arteritis is an occlusive vasculitis leading to stenoses of the aorta and its principal branches. The symptoms and signs of the disease depend on the distribution of the affected vessel but upper limbs are generally affected more commonly than the iliac tributaries. […] the disease is a chronic relapsing and remitting condition […] The mainstay of treatment is high-dose corticosteroids plus a steroid-sparing agent such as methotrexate. […] Cyclophosphamide is reserved for those patients who do not achieve remission with standard therapy. Surgical intervention such as bypass or angioplasty may improve ischaemic symptoms once the inflammation is under control.”)
Lymphoma.
Haemarthrosis. (“Consider synovial tumours in a patient with unexplained haemarthrosis.”)
Juvenile idiopathic arthritis.
Drug-induced lupus erythematosus. (“Drug-induced lupus (DIL) generates a different spectrum of clinical manifestations from idiopathic disease. DIL is less severe than idiopathic SLE, and nephritis or central nervous system involvement is very rare. […] The most common drugs responsible for a lupus-like syndrome are procainamide, hydralazine, quinidine, isoniazid, methyldopa, chlorpromazine and minocycline. […] Treatment involves stopping the offending medication and the symptoms will gradually resolve.”)
Churg–Strauss syndrome.

July 8, 2018 Posted by | Books, Cancer/oncology, Cardiology, Gastroenterology, Immunology, Medicine, Nephrology, Neurology, Ophthalmology, Pharmacology | Leave a comment

American Naval History (I?)

This book was okay, but nothing special. Some of the topics covered in the book, those related to naval warfare during the Age of Sail, are topics about which I’ve literally read thousands of pages in the last year alone (I’ve so far read the first 14 books in Patrick O’Brian’s Aubrey-Maturin series, all of which take place during the Napoleonic Wars and which taken together amount to ~5000+ pages) – so of course it’s easy for me to spot some of the topics not covered, or not covered in the amount of detail they might have been. I have previously mentioned – and it bears repetition – that despite their fictional setting there is really quite a lot of ‘real history’ in O’Brian’s books, and if you want to know about naval warfare during the period in which they take place, I highly doubt anything remotely comparable to O’Brian’s works exists. On the other hand this book also covers topics about which I would previously have quite frankly admitted to being more or less completely ignorant, such as naval warfare during the American War of Independence or during the American Civil War.

I have deliberately limited my history reading in recent years, and the two main reasons I had for deciding to read this one anyway were that a) I figured I needed a relatively ‘light’ non-fiction book (…neither of the two non-fiction books I’m currently reading can incidentally in any way be described as light, but they’re ‘heavy’ in different ways), and b) I knew from experience that Wikipedia tends to have a lot of great articles about naval topics, so I figured that even if the book might not be all that great I’d still be able to wiki-binge on featured articles if I felt like it; you’d expect a book like this one to include a lot of names of ships, people, and events that are well covered there, even if they might not be well covered in the book.

Below I’ve added some links related to the book’s coverage, as well as a few quotes from the book.

“From the start a few Americans dreamed of creating a standing navy constructed on the British model. Their ambition was prompted less by a conviction that such a force might actually be able to contend with the mighty Royal Navy than from a belief that an American navy would confer legitimacy on American nationhood. The first hesitant steps toward the fulfillment of this vision can be traced back to October 13, 1775, when the Continental Congress in Philadelphia agreed to purchase two armed merchantmen to attack British supply ships, the first congressional appropriation of any kind for a maritime force. […] October 13 remains the official birth date of the U.S. Navy. Two months later Congress took a more tangible step toward creating a navy by authorizing the construction of thirteen frigates, and a year later, in November 1776, Congress approved the construction of three ships of the line. This latter decision was stunningly ambitious. Ships of the line consumed prodigious amounts of seasoned timber and scores of heavy iron cannon and required a crew of between six hundred and eight hundred men. […] the subsequent history of these ships provided the skeptics of a standing navy with powerful evidence of the perils of overreach. […] unanticipated delays and unforeseen expenses. […] their record as warships was dismal […] The sad record of these thirteen frigates was so dispiriting that one of the champions of a standing American navy, John Adams, wrote to a friend that when he contemplated the history of the Continental Navy, it was hard for him to avoid tears. […] Washington’s navy were not part of a long-range plan to establish a permanent naval force. Rather they were an ad hoc response to particular circumstances, employed for a specific task in the full expectation that upon its completion they would revert to their former status as fishing schooners and merchant vessels. In that respect Washington’s navy is a useful metaphor for the role of American naval forces in the Revolutionary War and indeed throughout much of the early history of the Republic.”

“Continental Navy ships seized merchant ships whenever they could, but the most effective commerce raiders during the Revolutionary War were scores of privately owned vessels known as privateers. Though often called pirates in the British press, privateers held government-issued letters of marque, which were quite literally licenses to steal. […] Obtaining a letter of marque was relatively easy. Though records are incomplete, somewhere between 1,700 and 2,000 American ship owners applied to Congress for one, though only about eight hundred American privateers actually got to sea. […] Before the war was over, American privateers had captured an estimated six hundred British merchant ships […] The disappointing performance of the Continental Navy and the success of commerce raiding led many Americans of the revolutionary generation to conclude that the job of defending American interests at sea could be done at no cost by hundreds of privateers. Many saw privateers as the militia of the sea: available in time of need yet requiring no public funds to sustain them in peacetime. […] With independence secured, the American militia returned to their farms, and privateersmen once again became merchant seamen. The few Continental Navy warships that had survived the conflict were sold off or given away; the last of them, the frigate Alliance, was auctioned off in 1785 and became a merchant ship on the China trade. Of the three ships of the line authorized nearly seven years earlier, only one had been completed before the war ended, and it never saw active service. Seeing no practical use for such a vessel in peacetime, Congress voted to give her to France […] In effect the American navy simply ceased to exist.”

“The interminable Anglo-French conflict, which had worked decisively to America’s advantage during the Revolution, proved troublesome after 1793, when British diplomats convinced Portugal to join an anti-French coalition. In order to have the means to do so, Portugal signed a peace treaty with the city-state of Algiers on the north coast of Africa and ended its regular patrols of the Straits of Gibraltar. Almost at once raiding ships from Algiers passed out into the Atlantic, where they began to attack American shipping. The attacks provoked earnest discussion in Philadelphia about how to respond. It was evident that unleashing American privateers against the Algerines would have no effect at all, for the Barbary states had scant merchant trade for them to seize. What was needed was a national naval force that could both protect American commerce and punish those who attacked it. Appreciation of that reality led to a bill in Congress to authorize “a naval force, adequate to the protection of the commerce of the United States against the Algerine corsairs.” Once again the idea was not to create a permanent naval establishment but to produce a temporary force to meet an immediate need. The specific proposal was for the construction of six large frigates, a decision that essentially founded the U.S. Navy, though only a few of those who supported the bill conceived of it in those terms. Most saw it as a short-term solution to an immediate problem […] There were delays and unforeseen expenses in the construction process, and none of the ships had been completed when news arrived that American negotiators had concluded a treaty of peace with Algiers. Under its terms the United States would present Algiers with a thirty-six-gun frigate and pay $642,500, plus an additional annual payment of $21,600 in naval stores. In exchange Algiers would pledge not to attack American vessels. To modern eyes such terms are offensive — no better than simple extortion. But in 1795 paying extortion was the standard protocol for Western powers in dealing with the North African city-states.”

“Compared to ships of the line, or even to frigates, gunboats were tiny; most were only sixty to eighty feet long and had only a single mast and often only a single gun, generally a 24- or 32-pounder. They were also inexpensive; at roughly $5,000 each, more than two dozen of them could be had for the price of a single frigate. They were also strictly defensive weapons and therefore unlikely to provoke a confrontation with Britain. They appealed to the advocates of a militia-based naval force because when they were not in active service, they could be laid up in large sheds or barns. […] During Jefferson’s second term (1805-9) the United States built more than a hundred of these gunboats, boasting a total of 172 of them by the late summer of 1809. […] By building a gunboat navy, Jefferson provided a veneer of defense for the coast without sailing into the dangerous waters of the Anglo-French conflict. […] The [later] disappointing performance of the gunboats [during the War of 1812], especially when compared to the success of the frigates on the high seas, discredited the idea of relying on them for the nation’s maritime defense.”

“The kinds of tasks assigned to the U.S. Navy after 1820 were simply inappropriate for […] huge – and expensive to operate – warships. The day-to-day duties of the U.S. Navy involved dealing with smugglers, pirates, and the illegal slave trade, and deployment of ships of the line to deal with such issues was like hitting a tack with a sledgehammer. […] Pirates had always been a concern in the West Indies, but their numbers increased dramatically during the 1820s […]. Beginning in 1810 several of Spain’s unhappy colonies in Central and South America initiated efforts to win their independence via wars of liberation. These revolutionary governments were generous in passing out letters of marque to prey on Spanish trade. Operating mostly in tiny single-masted cutters and schooners—even the occasional rowboat—these privateers found slim pickings in targeting Spanish vessels, and they soon began to seize any merchant ship they could catch. By 1820 most of them had metamorphosed from licensed privateering into open piracy, and in 1822 the U.S. Navy established the West Indies Squadron to deal with them. […] Pirates were a problem in other parts of the world too. One trouble spot was in the Far East, especially in the much-traveled Straits of Malacca between Malaya and Sumatra.”

“Congress had declared the importation of slaves from Africa illegal after January 1, 1808, though there was no serious effort to interdict that human traffic until 1821, when the Navy established an African Squadron. Almost at once, however, its mission became controversial. […] After only two years Congress withdrew its support, and the African Squadron ceased to exist. After that only the Royal Navy made any serious effort to suppress the slave trade. The owners of the illicit slave ships saw an opportunity in these circumstances. Aware of how sensitive the Americans were about interference with their ships, slavers of every nationality — or no nationality at all — began flying the Stars and Stripes in order to deter inspection by the British. When the British saw through this ruse and stopped the ships anyway, the United States objected on principle. This Anglo-American dispute was resolved in the Webster-Ashburton Treaty of 1842 […] By its terms the British pledged to stop searching vessels flying the American flag, and the Americans pledged to police those vessels themselves”.

“Until the 1840s a young man became an officer in the U.S. Navy by being appointed a midshipman as a teenager and learning on the job while at sea. When he felt ready, he took an exam, which, if passed, made him a passed midshipman eligible for appointment to lieutenant when a vacancy occurred. With the emergence of steam engines as well as larger and more complex ordnance, aspiring officers had to master more technical and theoretical subjects. It was partly because of this that the U.S. Naval Academy was established […] in 1845 […] Another change during the 1850s was the abolition of flogging […] Given the rough character of the enlisted force, physical punishment was the standard penalty for a wide variety of major and minor infractions, and ship captains could prescribe anywhere from a dozen to a hundred lashes depending on the seriousness of the offense. For most such punishments all hands were called to bear witness in the belief that this offered a profound deterrent to future misconduct. It was unquestionably barbarous, but also effective, and it had been a part of naval life for more than a century. Nevertheless in September 1850 Congress declared it illegal. […] A decade later, in the midst of the Civil War, the U.S. Navy abolished another long-standing tradition, this one much beloved by the enlisted sailors. This was the daily grog ration: a half pint of rum or whisky, cut with water, that was issued to every sailor on board, even teenagers, once a day. Though the tradition was common to all navies and predated American independence, the United States was the first nation to abolish it, on September 1, 1862.”

“For more than two centuries naval warships had changed little. Wooden-hull ships propelled by sails carried muzzle-loaded iron gun tubes that fired solid shot. By 1850, however, that was changing, and changing swiftly. […] Over the ensuing decade steam ships became more ubiquitous as they became more efficient. Naval guns became much larger […] and the projectiles they fired were no longer merely solid iron balls but explosive shells. All of this occurred just in time to have a dramatic influence on the navies that fought in the American Civil War. […] by the 1850s [US] lawmakers recognized that the nation’s wooden sailing navy, much of it left over from the War of 1812, was growing increasingly obsolete, and as a result Congress passed a number of bills to modernize the navy. […] though the U.S. Navy remained small by European standards, when the Civil War began, more than half of the forty-two ships on active service were of the newest and most efficient type. By contrast, the Confederate States began the Civil War with no navy at all, and the South embraced the traditional policies of the weaker naval power: harbor defense and commerce raiding. […] Over the next […] years both sides built more ironclad warships.”

“[T]he Union could, and did, simply outbuild the Confederacy. Before the war was over, the Union produced more than sixty monitor-type ironclads, each class of them larger and more powerfully armed than the one before. […] by the spring of 1865, when Lee surrendered his army to Grant, the navy had grown to sixteen times its prewar size and boasted some of the most advanced warships in the world. […] When the Civil War ended, the U.S. Navy boasted a total of 671 warships, all but a few of them steamers, many of them ironclads, and some that were the most advanced of their type. Yet within a decade all but a few dozen had been sold off, scrapped, or placed in ordinary—mothballed for a future crisis. Conforming to the now familiar pattern, after a dramatic expansion to meet a crisis, the navy swiftly contracted at almost the moment the crisis ended. By 1870 the U.S. Navy had only fifty-two ships on active service. […] The advent of iron-armored warships during the Civil War fell short of being a full-scale technological revolution. Ever thicker armor led to ever larger naval guns, until it became evident that to make a ship invulnerable would render her virtually immobile. Armor continued to be used in warship construction after the war, but it was applied selectively, to protect engine spaces or magazines. […] While it did not affect the outcome of the war, Confederate commerce raiding did inflict a disproportionate amount of damage on Union shipping for a relatively small investment. Altogether Confederate commerce raiders captured and destroyed some 284 U.S. merchant ships.”

Links:

USS Hannah. USS Lee. HMS Thunderer (1760). USS Warren (1776). USS Hancock (1776). Governor Trumbull (1777 ship). HMS Drake (1777). HMS Serapis (1779). USS Chesapeake (1799).
William Howe, 5th Viscount Howe. Richard Howe, 1st Earl Howe. Benedict Arnold. John Paul Jones. Esek Hopkins. Richard Pearson. François Joseph Paul de Grasse. Charles Cornwallis, 1st Marquess Cornwallis. Thomas Graves, 1st Baron Graves. Richard Dale. Yusuf Karamanli. Richard Valentine Morris. Edward Preble. Stephen Decatur. James Barron.
Ship of the line.
Frigate.
Two-decker.
Privateer. Letter of marque. Commerce raiding.
Battle of Valcour Island. Battles of Saratoga. Battle of the Chesapeake.
Peace of Paris (1783).
Jay’s Treaty.
XYZ Affair. Quasi-War. Treaty of Mortefontaine.
First Barbary War.
Battle of Trafalgar. Battle of Austerlitz.
An Act for the relief of sick and disabled seamen.
Warhawks. War of 1812. Treaty of Ghent.
Board of Navy Commissioners.
USS Potomac (1822).
James Biddle.
Stephen Cassin.
Cornelius Stribling.
Missouri Compromise.
Matthew Fontaine Maury.
United States Exploring Expedition.
Matthew C. Perry. Bakumatsu. Convention of Kanagawa.
Adams–Onís Treaty.
Era of Good Feelings.
Mexican–American War.
USS Princeton (1843).
Anaconda Plan. Union blockade.
H. L. Hunley (submarine).
CSS Alabama. CSS Shenandoah.

July 7, 2018 Posted by | Books, History | Leave a comment

Words

The words included in this post are words which I encountered while reading the books: 100 cases in orthopaedics and rheumatology, Managing Gastrointestinal Complications of Diabetes, American Naval History: A very short introduction, Big Data: A very short introduction, Faust among Equals, Pocket Oncology, My Hero, and Odds and Gods.

Angulation. Soleus. Mucoid. Plantarflex. Pronation. Arthrosis. Syndesmosis. Ecchymosis. Diastasis. Epicondyle. Pucker. Enthesopathy. Paresis. Polyostotic. Riff. Livedo. Aphtha/aphthous. Pathergy. Annular. Synovium/synovial.

Scallop. Tastant. Incantatory. Radeau. Gundalow. Scrivener. Pebbledash. Chrominance. Tittle. Capitonym. Scot. Grayling. Terylene. Pied-à-terre. Solenoid. Fen. Anaglypta. Loud-hailer. Fauteuil. Dimpsy.

Seborrhea. Anasarca. Emetogenic. Trachelectomy. Brachytherapy. Nomogram. Trusty. Biff. Pantechnicon. Porpentine. Budgerigar. Nerk. Glade. Slinky. Gelignite. Boater. Seamless. Jabberwocky. Fardel. Kapok.

Aspidistra. Cowpat. Countershaft. Tinny. Ponce. Warp. Weft. Recension. Bandstand. Strimmer. Chasuble. Champer. Bourn. Khazi. Zimmer. Ossuary. Suppliant. Nock. Taramosalata. Quoit.

July 6, 2018 Posted by | Books, Language | Leave a comment

A few diabetes papers of interest

i. Clinical Inertia in Type 2 Diabetes Management: Evidence From a Large, Real-World Data Set.

Despite clinical practice guidelines that recommend frequent monitoring of HbA1c (every 3 months) and aggressive escalation of antihyperglycemic therapies until glycemic targets are reached (1,2), the intensification of therapy in patients with uncontrolled type 2 diabetes (T2D) is often inappropriately delayed. The failure of clinicians to intensify therapy when clinically indicated has been termed “clinical inertia.” A recently published systematic review found that the median time to treatment intensification after an HbA1c measurement above target was longer than 1 year (range 0.3 to >7.2 years) (3). We have previously reported a rather high rate of clinical inertia in patients uncontrolled on metformin monotherapy (4). Treatment was not intensified early (within 6 months of metformin monotherapy failure) in 38%, 31%, and 28% of patients when poor glycemic control was defined as an HbA1c >7% (>53 mmol/mol), >7.5% (>58 mmol/mol), and >8% (>64 mmol/mol), respectively.

Using the electronic health record system at Cleveland Clinic (2005–2016), we identified a cohort of 7,389 patients with T2D who had an HbA1c value ≥7% (≥53 mmol/mol) (“index HbA1c”) despite having been on a stable regimen of two oral antihyperglycemic drugs (OADs) for at least 6 months prior to the index HbA1c. This HbA1c threshold would generally be expected to trigger treatment intensification based on current guidelines. Patient records were reviewed for the 6-month period following the index HbA1c, and changes in diabetes therapy were evaluated for evidence of “intensification” […] almost two-thirds of patients had no evidence of intensification in their antihyperglycemic therapy during the 6 months following the index HbA1c ≥7% (≥53 mmol/mol), suggestive of poor glycemic control. Most alarming was the finding that even among patients in the highest index HbA1c category (≥9% [≥75 mmol/mol]), therapy was not intensified in 44% of patients, and slightly more than half (53%) of those with an HbA1c between 8 and 8.9% (64 and 74 mmol/mol) did not have their therapy intensified.”

“Unfortunately, these real-world findings confirm a high prevalence of clinical inertia with regard to T2D management. The unavoidable conclusion from these data […] is that physicians are not responding quickly enough to evidence of poor glycemic control in a high percentage of patients, even in those with HbA1c levels far exceeding typical treatment targets.
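A brief aside which is not from the paper: the paired HbA1c values quoted above (per cent and mmol/mol) are simply two units for the same measurement, related by the standard NGSP/IFCC master equation, IFCC (mmol/mol) = (NGSP % − 2.15) × 10.929. A minimal conversion sketch in Python, written by me for illustration:

def ngsp_to_ifcc(hba1c_percent: float) -> float:
    """Convert HbA1c from NGSP/DCCT per cent to IFCC mmol/mol."""
    return (hba1c_percent - 2.15) * 10.929

def ifcc_to_ngsp(hba1c_mmol_mol: float) -> float:
    """Convert HbA1c from IFCC mmol/mol to NGSP/DCCT per cent."""
    return hba1c_mmol_mol / 10.929 + 2.15

# The thresholds used in the quoted study:
for pct in (7.0, 7.5, 8.0, 9.0):
    print(f"{pct}% is approximately {ngsp_to_ifcc(pct):.0f} mmol/mol")
# -> 7.0% ~ 53, 7.5% ~ 58, 8.0% ~ 64, 9.0% ~ 75 mmol/mol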

ii. Gestational Diabetes Mellitus and Diet: A Systematic Review and Meta-analysis of Randomized Controlled Trials Examining the Impact of Modified Dietary Interventions on Maternal Glucose Control and Neonatal Birth Weight.

“Medical nutrition therapy is a mainstay of gestational diabetes mellitus (GDM) treatment. However, data are limited regarding the optimal diet for achieving euglycemia and improved perinatal outcomes. This study aims to investigate whether modified dietary interventions are associated with improved glycemia and/or improved birth weight outcomes in women with GDM when compared with control dietary interventions. […]

From 2,269 records screened, 18 randomized controlled trials involving 1,151 women were included. Pooled analysis demonstrated that for modified dietary interventions when compared with control subjects, there was a larger decrease in fasting and postprandial glucose (−4.07 mg/dL [95% CI −7.58, −0.57]; P = 0.02 and −7.78 mg/dL [95% CI −12.27, −3.29]; P = 0.0007, respectively) and a lower need for medication treatment (relative risk 0.65 [95% CI 0.47, 0.88]; P = 0.006). For neonatal outcomes, analysis of 16 randomized controlled trials including 841 participants showed that modified dietary interventions were associated with lower infant birth weight (−170.62 g [95% CI −333.64, −7.60]; P = 0.04) and less macrosomia (relative risk 0.49 [95% CI 0.27, 0.88]; P = 0.02). The quality of evidence for these outcomes was low to very low. Baseline differences between groups in postprandial glucose may have influenced glucose-related outcomes. […] we were unable to resolve queries regarding potential concerns for sources of bias because of lack of author response to our queries. We have addressed this by excluding these studies in the sensitivity analysis. […] after removal of the studies with the most substantial methodological concerns in the sensitivity analysis, differences in the change in fasting plasma glucose were no longer significant. Although differences in the change in postprandial glucose and birth weight persisted, they were attenuated.”

“This review highlights limitations of the current literature examining dietary interventions in GDM. Most studies are too small to demonstrate significant differences in our primary outcomes. Seven studies had fewer than 50 participants and only two had more than 100 participants (n = 125 and 150). The short duration of many dietary interventions and the late gestational age at which they were started (38) may also have limited their impact on glycemic and birth weight outcomes. Furthermore, we cannot conclude if the improvements in maternal glycemia and infant birth weight are due to reduced energy intake, improved nutrient quality, or specific changes in types of carbohydrate and/or protein. […] These data suggest that dietary interventions modified above and beyond usual dietary advice for GDM have the potential to offer better maternal glycemic control and infant birth weight outcomes. However, the quality of evidence was judged as low to very low due to the limitations in the design of included studies, the inconsistency between their results, and the imprecision in their effect estimates.”
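Another brief aside not taken from the review: pooled mean differences with 95% confidence intervals of the kind quoted above are typically obtained by inverse-variance weighting of the individual study estimates. Below is a minimal fixed-effect sketch in Python; the three study results are made-up placeholders, not data from the review, and the actual meta-analysis may well have used a random-effects model instead.

import math

# Hypothetical per-study mean differences in fasting glucose (mg/dL) and their
# standard errors -- placeholder numbers for illustration only.
studies = [(-5.0, 3.0), (-3.5, 2.5), (-6.0, 4.0)]  # (effect, SE)

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled mean difference: {pooled:.2f} mg/dL (95% CI {ci_low:.2f}, {ci_high:.2f})")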

iii. Lifetime Prevalence and Prognosis of Prediabetes Without Progression to Diabetes.

Impaired fasting glucose, also termed prediabetes, is increasingly prevalent and is associated with adverse cardiovascular risk (1). The cardiovascular risks attributed to prediabetes may be driven primarily by the conversion from prediabetes to overt diabetes (2). Given limited data on outcomes among nonconverters in the community, the extent to which some individuals with prediabetes never go on to develop diabetes and yet still experience adverse cardiovascular risk remains unclear. We therefore investigated the frequency of cardiovascular versus noncardiovascular deaths in people who developed early- and late-onset prediabetes without ever progressing to diabetes.”

“We used data from the Framingham Heart Study collected on the Offspring Cohort participants aged 18–77 years at the time of initial fasting plasma glucose (FPG) assessment (1983–1987) who had serial FPG testing over subsequent examinations with continuous surveillance for outcomes including cause-specific mortality (3). As applied in prior epidemiological investigations (4), we used a case-control design focusing on the cause-specific outcome of cardiovascular death to minimize the competing risk issues that would be encountered in time-to-event analyses. To focus on outcomes associated with a given chronic glycemic state maintained over the entire lifetime, we restricted our analyses to only those participants for whom data were available over the life course and until death. […] We excluded individuals with unknown age of onset of glycemic impairment (i.e., age ≥50 years with prediabetes or diabetes at enrollment). […] We analyzed cause-specific mortality, allowing for relating time-varying exposures with lifetime risk for an event (4). We related glycemic phenotypes to cardiovascular versus noncardiovascular cause of death using a case-control design, where cases were defined as individuals who died of cardiovascular disease (death from stroke, heart failure, or other vascular event) or coronary heart disease (CHD) and controls were those who died of other causes.”

“The mean age of participants at enrollment was 42 ± 7 years (43% women). The mean age at death was 73 ± 10 years. […] In our study, approximately half of the individuals presented with glycemic impairment in their lifetime, of whom two-thirds developed prediabetes but never diabetes. In our study, these individuals had lower cardiovascular-related mortality compared with those who later developed diabetes, even if the prediabetes onset was early in life. However, individuals with early-onset prediabetes, despite lifelong avoidance of overt diabetes, had greater propensity for death due to cardiovascular or coronary versus noncardiovascular disease compared with those who maintained lifelong normal glucose status. […] Prediabetes is a heterogeneous entity. Whereas some forms of prediabetes are precursors to diabetes, other types of prediabetes never progress to diabetes but still confer increased propensity for death from a cardiovascular cause.”

iv. Learning From Past Failures of Oral Insulin Trials.

Very recently one of the largest type 1 diabetes prevention trials using daily administration of oral insulin or placebo was completed. After 9 years of study enrollment and follow-up, the randomized controlled trial failed to delay the onset of clinical type 1 diabetes, which was the primary end point. The unfortunate outcome follows the previous large-scale trial, the Diabetes Prevention Trial–Type 1 (DPT-1), which again failed to delay diabetes onset with oral insulin or low-dose subcutaneous insulin injections in a randomized controlled trial with relatives at risk for type 1 diabetes. These sobering results raise the important question, “Where does the type 1 diabetes prevention field move next?” In this Perspective, we advocate for a paradigm shift in which smaller mechanistic trials are conducted to define immune mechanisms and potentially identify treatment responders. […] Mechanistic trials will allow for better trial design and patient selection based upon molecular markers prior to large randomized controlled trials, moving toward a personalized medicine approach for the prevention of type 1 diabetes.

“Before a disease can be prevented, it must be predicted. The ability to assess risk for developing type 1 diabetes (T1D) has been well documented over the last two decades (1). Using genetic markers, human leukocyte antigen (HLA) DQ and DR typing (2), islet autoantibodies (1), and assessments of glucose tolerance (intravenous or oral glucose tolerance tests) has led to accurate prediction models for T1D development (3). Prospective birth cohort studies Diabetes Autoimmunity Study in the Young (DAISY) in Colorado (4), Type 1 Diabetes Prediction and Prevention (DIPP) study in Finland (5), and BABYDIAB studies in Germany have followed genetically at-risk children for the development of islet autoimmunity and T1D disease onset (6). These studies have been instrumental in understanding the natural history of T1D and making T1D a predictable disease with the measurement of antibodies in the peripheral blood directed against insulin and proteins within β-cells […]. Having two or more islet autoantibodies confers an ∼85% risk of developing T1D within 15 years and nearly 100% over time (7). […] T1D can be predicted by measuring islet autoantibodies, and thousands of individuals including young children are being identified through screening efforts, necessitating the need for treatments to delay and prevent disease onset.”

“Antigen-specific immunotherapies hold the promise of potentially inducing tolerance by inhibiting effector T cells and inducing regulatory T cells, which can act locally at tissue-specific sites of inflammation (12). Additionally, side effects are minimal with these therapies. As such, insulin and GAD have both been used as antigen-based approaches in T1D (13). Oral insulin has been evaluated in two large randomized double-blinded placebo-controlled trials over the last two decades. First in the Diabetes Prevention Trial–Type 1 (DPT-1) and then in the TrialNet clinical trials network […] The DPT-1 enrolled relatives at increased risk for T1D having islet autoantibodies […] After 6 years of treatment, there was no delay in T1D onset. […] The TrialNet study screened, enrolled, and followed 560 at-risk relatives over 9 years from 2007 to 2016, and results have been recently published (16). Unfortunately, this trial failed to meet the primary end point of delaying or preventing diabetes onset.”

“Many factors influence the potency and efficacy of antigen-specific therapy such as dose, frequency of dosing, route of administration, and, importantly, timing in the disease process. […] Over the last two decades, most T1D clinical trial designs have randomized participants 1:1 or 2:1, drug to placebo, in a double-blind two-arm design, especially those intervention trials in new-onset T1D (18). Primary end points have been delay in T1D onset for prevention trials or stimulated C-peptide area under the curve at 12 months with new-onset trials. These designs have served the field well and provided reliable human data for efficacy. However, there are limitations including the speed at which these trials can be completed, the number of interventions evaluated, dose optimization, and evaluation of mechanistic hypotheses. Alternative clinical trial designs, such as adaptive trial designs using Bayesian statistics, can overcome some of these issues. Adaptive designs use accumulating data from the trial to modify certain aspects of the study, such as enrollment and treatment group assignments. This “learn as we go” approach relies on biomarkers to drive decisions on planned trial modifications. […] One of the significant limitations for adaptive trial designs in the T1D field, at the present time, is the lack of validated biomarkers for short-term readouts to inform trial adaptations. However, large-scale collaborative efforts are ongoing to define biomarkers of T1D-specific immune dysfunction and β-cell stress and death (9,22).”
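To make the ‘learn as we go’ idea a little more concrete, here is a minimal, purely illustrative sketch of one common Bayesian adaptive-allocation scheme (Beta-Binomial Thompson sampling on a binary short-term response). It is not the design of any trial discussed above; the arm names and response rates are invented.

import random

# Hypothetical true short-term response rates for two arms (unknown to the 'trial').
true_rates = {"placebo": 0.30, "treatment": 0.45}

# Beta(1, 1) priors on each arm's response probability, stored as [alpha, beta].
posteriors = {arm: [1, 1] for arm in true_rates}

random.seed(0)
for _patient in range(200):
    # Thompson sampling: draw a plausible response rate from each arm's posterior
    # and assign the next patient to the arm with the larger draw, so allocation
    # gradually shifts toward the better-performing arm as data accumulate.
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in posteriors.items()}
    arm = max(draws, key=draws.get)

    # Observe a simulated binary response and update that arm's posterior.
    response = random.random() < true_rates[arm]
    posteriors[arm][0] += response        # alpha counts successes
    posteriors[arm][1] += not response    # beta counts failures

for arm, (a, b) in posteriors.items():
    print(f"{arm}: n = {a + b - 2}, posterior mean response rate = {a / (a + b):.2f}")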

T1D prevention has proven much more difficult than originally thought, challenging the paradigm that T1D is a single disease. T1D is indeed a heterogeneous disease in terms of age of diagnosis, islet autoantibody profiles, and the rate of loss of residual β-cell function after clinical onset. Children have a much more rapid loss of residual insulin production (measured as C-peptide area under the curve following a mixed-meal tolerance test) after diagnosis than older adolescents and adults (23,24), indicating that childhood and adult-onset T1D are not identical. Further evidence for subtypes of T1D come from studies of human pancreata of T1D organ donors in which children (0–14 years of age) within 1 year of diagnosis had many more inflamed islets compared with older adolescents and adults aged 15–39 years old (25). Additionally, a younger age of T1D onset (<7 years) has been associated with higher numbers of CD20+ B cells within islets and fewer insulin-containing islets compared with an age of onset ≥13 years associated with fewer CD20+ islet infiltrating cells and more insulin-containing islets (26,27). This suggests a much more aggressive autoimmune process in younger children and distinct endotypes (a subtype of a condition defined by a distinct pathophysiologic mechanism), which has recently been proposed for T1D (27).”

“Safe and specific therapies capable of being used in children are needed for T1D prevention. The vast majority of drug development involves small biotechnology companies, specialty pharmaceutical firms, and large pharmaceutical companies, more so than traditional academia. A large amount of preclinical and clinical research (phase 1, 2, and 3 studies) are needed to advance a drug candidate through the development pipeline to achieve U.S. Food and Drug Administration (FDA) approval for a given disease. A recent analysis of over 4,000 drugs from 835 companies in development during 2003–2011 revealed that only 10.4% of drugs that enter clinical development at phase 1 (safety studies) advance to FDA approval (32). However, the success rate increases 50% for the lead indication of a drug, i.e., a drug specifically developed for one given disease (32). Reasons for this include strong scientific rationale and early efficacy signals such as correlating pharmacokinetic (drug levels) to pharmacodynamic (drug target effects) tests for the lead indication. Lead indications also tend to have smaller, better-defined “homogenous” patient populations than nonlead indications for the same drug. This would imply that the T1D field needs more companies developing drugs specifically for T1D, not type 2 diabetes or other autoimmune diseases with later testing to broaden a drug’s indication. […] In a similar but separate analysis, selection biomarkers were found to substantially increase the success rate of drug approvals across all phases of drug development. Using a selection biomarker as part of study inclusion criteria increased drug approval threefold from 8.4% to 25.9% when used in phase 1 trials, 28% to 46% when transitioning from a phase 2 to phase 3 efficacy trial, and 55% to 76% for a phase 3 trial to likelihood of approval (33). These striking data support the concept that enrichment of patient enrollment at the molecular level is a more successful strategy than heterogeneous enrollment in clinical intervention trials. […] Taken together, new drugs designed specifically for children at risk for T1D and a biomarker selecting patients for a treatment response may increase the likelihood for a successful prevention trial; however, experimental confirmation in clinical trials is needed.”

v. Metabolic Karma — The Atherogenic Legacy of Diabetes: The 2017 Edwin Bierman Award Lecture.

“Cardiovascular (CV) disease remains the major cause of mortality and is associated with significant morbidity in both type 1 and type 2 diabetes (14). Despite major improvements in the management of traditional risk factors, including hypertension, dyslipidemia, and glycemic control prevention, retardation and reversal of atherosclerosis, as manifested clinically by myocardial infarction, stroke, and peripheral vascular disease, remain a major unmet need in the population with diabetes. For example, in the Steno-2 study and in its most recent report of the follow-up phase, at least a decade after cessation of the active treatment phase, there remained a high risk of death, primarily from CV disease despite aggressive control of the traditional risk factors, in this originally microalbuminuric population with type 2 diabetes (5,6). In a meta-analysis of major CV trials where aggressive glucose lowering was instituted […] the beneficial effect of intensive glycemic control on CV disease was modest, at best (7). […] recent trials with two sodium–glucose cotransporter 2 inhibitors, empagliflozin and canagliflozin (11,12), and two long-acting glucagon-like peptide 1 agonists, liraglutide and semaglutide (13,14), have reported CV benefits that have led in some of these trials to a decrease in CV and all-cause mortality. However, even with these recent positive CV outcomes, CV disease remains the major burden in the population with diabetes (15).”

“This unmet need of residual CV disease in the population with diabetes remains unexplained but may occur as a result of a range of nontraditional risk factors, including low-grade inflammation and enhanced thrombogenicity as a result of the diabetic milieu (16). Furthermore, a range of injurious pathways as a result of chronic hyperglycemia previously studied in vitro in endothelial cells (17) or in models of microvascular complications may also be relevant and are a focus of this review […] [One] major factor that is likely to promote atherosclerosis in the diabetes setting is increased oxidative stress. There is not only increased generation of ROS from diverse sources but also reduced antioxidant defense in diabetes (40). […] vascular ROS accumulation is closely linked to atherosclerosis and vascular inflammation provide the impetus to consider specific antioxidant strategies as a novel therapeutic approach to decrease CV disease, particularly in the setting of diabetes.”

“One of the most important findings from numerous trials performed in subjects with type 1 and type 2 diabetes has been the identification that previous episodes of hyperglycemia can have a long-standing impact on the subsequent development of CV disease. This phenomenon known as “metabolic memory” or the “legacy effect” has been reported in numerous trials […] The underlying explanation at a molecular and/or cellular level for this phenomenon remains to be determined. Our group, as well as others, has postulated that epigenetic mechanisms may participate in conferring metabolic memory (5153). In in vitro studies initially performed in aortic endothelial cells, transient incubation of these cells in high glucose followed by subsequent return of these cells to a normoglycemic environment was associated with increased gene expression of the p65 subunit of NF-κB, NF-κB activation, and expression of NF-κB–dependent proteins, including MCP-1 and VCAM-1 (54).

In further defining a potential epigenetic mechanism that could explain the glucose-induced upregulation of genes implicated in vascular inflammation, a specific histone methylation mark was identified in the promoter region of the p65 gene (54). This histone 3 lysine 4 monomethylation (H3K4m1) occurred as a result of mobilization of the histone methyl transferase, Set7. Furthermore, knockdown of Set7 attenuated glucose-induced p65 upregulation and prevented the persistent upregulation of this gene despite these endothelial cells returning to a normoglycemic milieu (55). These findings, confirmed in animal models exposed to transient hyperglycemia (54), provide the rationale to consider Set7 as an appropriate target for end-organ protective therapies in diabetes. Although specific Set7 inhibitors are currently unavailable for clinical development, the current interest in drugs that block various enzymes, such as Set7, that influence histone methylation in diseases, such as cancer (56), could lead to agents that warrant testing in diabetes. Studies addressing other sites of histone methylation as well as other epigenetic pathways including DNA methylation and acetylation have been reported or are currently in progress (55,57,58), particularly in the context of diabetes complications. […] As in vitro and preclinical studies increase our knowledge and understanding of the pathogenesis of diabetes complications, it is likely that we will identify new molecular targets leading to better treatments to reduce the burden of macrovascular disease. Nevertheless, these new treatments will need to be considered in the context of improved management of traditional risk factors.”

vi. Perceived risk of diabetes seriously underestimates actual diabetes risk: The KORA FF4 study.

“According to the International Diabetes Federation (IDF), almost half of the people with diabetes worldwide are unaware of having the disease, and even in high-income countries, about one in three diabetes cases is not diagnosed [1,2]. In the USA, 28% of diabetes cases are undiagnosed [3]. In DEGS1, a recent population-based German survey, 22% of persons with HbA1c ≥ 6.5% were unaware of their disease [4]. Persons with undiagnosed diabetes mellitus (UDM) have a more than twofold risk of mortality compared to persons with normal glucose tolerance (NGT) [5,6]; many of them also have undiagnosed diabetes complications like retinopathy and chronic kidney disease [7,8]. […] early detection of diabetes and prediabetes is beneficial for patients, but may be delayed by patients´ being overly optimistic about their own health. Therefore, it is important to address how persons with UDM or prediabetes perceive their diabetes risk.”

“The proportion of persons who perceived their risk of having UDM at the time of the interview as “negligible”, “very low” or “low” was 87.1% (95% CI: 85.0–89.0) in NGT [normal glucose tolerance individuals], 83.9% (81.2–86.4) in prediabetes, and 74.2% (64.5–82.0) in UDM […]. The proportion of persons who perceived themselves at risk of developing diabetes in the following years ranged from 14.6% (95% CI: 12.6–16.8) in NGT to 20.6% (17.9–23.6) in prediabetes to 28.7% (20.5–38.6) in UDM […] In univariate regression models, perceiving oneself at risk of developing diabetes was associated with younger age, female sex, higher school education, obesity, self-rated poor general health, and parental diabetes […] the proportion of better educated younger persons (age ≤ 60 years) with prediabetes, who perceived themselves at risk of developing diabetes was 35%, whereas this figure was only 13% in less well educated older persons (age > 60 years).”

“The present study shows that three out of four persons with UDM [undiagnosed diabetes mellitus] believed that the probability of having undetected diabetes was low or very low. In persons with prediabetes, more than 70% believed that they were not at risk of developing diabetes in the next years. People with prediabetes were more inclined to perceive themselves at risk of diabetes if their self-rated general health was poor, their mother or father had diabetes, they were obese, they were female, their educational level was high, and if they were younger. […] People with undiagnosed diabetes or prediabetes considerably underestimate their probability of having or developing diabetes. […] perceived diabetes risk was lower in men, lower educated and older persons. […] Our results showed that people with low and intermediate education strongly underestimate their risk of diabetes and may qualify as target groups for detection of UDM and prediabetes.”

“The present results were in line with results from the Dutch Hoorn Study [18,19]. Adriaanse et al. reported that among persons with UDM, only 28.3% perceived their likeliness of having diabetes to be at least 10% [18], and among persons with high risk of diabetes (predicted from a symptom risk questionnaire), the median perceived likeliness of having diabetes was 10.8% [19]. Again, perceived risk did not fully reflect the actual risk profiles. For BMI, there was barely any association with perceived risk of diabetes in the Dutch study [19].”

July 2, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Molecular biology, Pharmacology, Studies | Leave a comment

100 Cases in Orthopaedics and Rheumatology (I)

This book was decent, but it’s not as good as some of the books I’ve previously read in this series. In several of the other books the average length of the answer section is 2-3 pages, which is a format I quite like, whereas in this book the average is more like 1-2 pages – a bit too short in my opinion.

Below I have added some links related to the first half of the book’s coverage and a few observations from the book.

Acute haematogenous osteomyelitis. (“There are two principal types of acute osteomyelitis: •haematogenous osteomyelitis •direct or contiguous inoculation osteomyelitis. Acute haematogenous osteomyelitis is characterized by an acute infection of the bone caused by the seeding of the bacteria within the bone from a remote source. This condition occurs primarily in children. […] In general, osteomyelitis has a bimodal age distribution. Acute haematogenous osteomyelitis is primarily a disease in children. Direct trauma and contiguous focus osteomyelitis are more common among adults and adolescents than in children. Spinal osteomyelitis is more common in individuals older than 45 years.”)
Haemophilic arthropathy. (“Haemophilic arthropathy is a condition associated with clotting disorder leading to recurrent bleeding in the joints. Over time this can lead to joint destruction.”)
Avascular necrosis of the femoral head. Trendelenburg’s sign. Gaucher’s disease. Legg–Calvé–Perthes disease. Ficat and Arlet classification of avascular necrosis of femoral head.
Osteosarcoma. Codman triangle. Enneking Classification. (“A firm, irregular mass fixed to underlying structures is more suspicious of a malignant lesion.”)
Ewing’s sarcoma. Haversian canal. (“This condition [ES] typically occurs in young patients and presents with pain and fever. [It] is the second most common primary malignant bone tumour (the first being osteosarcoma). The tumour is more common in males and affects children and young adults. The majority develop this between the ages of 10 and 20 years. […] The earliest symptom is pain, which is initially intermittent but becomes intense. Rarely, a patient may present with a pathological fracture. Eighty-five per cent of patients have chromosomal translocations associated with the 11/22 chromosome. Ewing’s sarcoma is potentially the most aggressive form of the primary bone tumours. […] Patients are usually assigned to one of two groups, the tumour being classified as either localized or metastatic disease. Tumours in the pelvis typically present late and are therefore larger with a poorer prognosis. Treatment comprises chemotherapy, surgical resection and/or radiotherapy. […] With localized disease, wide surgical excision of the tumour is preferred over radiotherapy if the involved bone is expendable (e.g. fibular, rib), or if radiotherapy would damage the growth plate. […] Non-metastatic disease survival rates are 55–70 per cent, compared to 22–33 per cent for metastatic disease. Patients require careful follow-up owing to the risk of developing osteosarcoma following radiotherapy, particularly in children in whom it can occur in up to 20 per cent of cases.”)
Clavicle Fracture. Floating Shoulder.
Proximal humerus fractures.
Lateral condyle fracture of the humerus. Salter-Harris fracture. (“Humeral condyle fractures occur most commonly between 6 and 10 years of age. […] fractures often appear subtle on radiographs. […] Operative management is essential for all displaced fractures.”)
Distal radius fracture. (“Colles’ fractures account for over 90 per cent of distal radius fractures. Any injury to the median nerve can produce paraesthesia in the thumb, index finger, and middle and radial border of the ring finger […]. There is a bimodal age distribution of fractures to the distal radius with two peaks occurring. The first peak occurs in people aged 18–25 years, and a second peak in older people (>65 years). High-energy injuries are more common in the younger group and low-energy injuries in the older group. Osteoporosis may play a role in the occurrence of this later fracture. In the group of patients between 60 and 69 years, women far outnumber men. […] Assessment with plain radiographs is all that is needed for most fractures. […] The majority of distal radius fractures can be treated conservatively.”)
Gamekeeper’s thumb. Stener lesion.
Subtrochanteric Hip Fracture.
Supracondylar Femur Fractures. (“There is a bimodal distribution of fractures based on age and gender. Most high-energy distal femur fractures occur in males aged between 15 and 50 years, while most low-energy fractures occur in osteoporotic women aged 50 or above. The most common high-energy mechanism of injury is a road traffic accident (RTA), and the most common low-energy mechanism is a fall. […] In general, […] non-operative treatment does not work well for displaced fractures. […] Operative intervention is also indicated in the presence of open fractures and injuries associated with vascular injury. […] Total knee replacement is effective in elderly patients with articular fractures and significant osteoporosis, or pre-existing arthritis that is not amenable to open reduction and internal fixation. Low-demand elderly patients with non- or minimally displaced fractures can be managed conservatively. […] In general, this fracture can take a minimum of 3-4 months to unite.”)
Supracondylar humerus fracture. Gartland Classification of Supracondylar Humerus Fractures. (“Prior to the treatment of supracondylar fractures, it is essential to identify the type. Examination of the degree of swelling and deformity as well as a neurological and vascular status assessment of the forearm is essential. A vascular injury may present with signs of an acute compartment syndrome with pain, paraesthesia, pallor, and pulseless and tight forearm. Injury to the brachial artery may present with loss of the distal pulse. However, in the presence of a weak distal pulse, major vessel injury may still be present owing to the collateral circulation. […] Vascular insult can lead to Volkmann ischaemic contracture of the forearm. […] Malunion of the fracture may lead to cubitus varus deformity.”)
Femoral Shaft Fractures.
Femoral Neck Fractures. Garden’s classification. (“Hip fractures are the most common reason for admission to an orthopaedic ward, usually caused by a fall by an elderly person. The average age of a person with a hip fracture is 77 years. Mortality is high: about 10 per cent of people with a hip fracture die within 1 month, and about one-third within 12 months. However, fewer than half of deaths are attributable to the fracture, reflecting the high prevalence of comorbidity. The mental status of the patient is also important: senility is associated with a three-fold increased risk of sepsis and dislocation of prosthetic replacement when compared with mentally alert patients. The one-year mortality rate in these patients is considerable, being reported as high as 50 per cent.”)
Tibia Shaft Fractures. (“The tibia is the most frequent site of a long-bone fracture in the body. […] Open fractures are surgical emergencies […] Most closed tibial fractures can be treated conservatively using plaster of Paris.”)
Tibial plateau fracture. Schatzker classification.
Compartment syndrome. (“This condition is an orthopaedic emergency and can be limb- and life-threatening. Compartment syndrome occurs when perfusion pressure falls below tissue pressure in a closed fascial compartment and results in microvascular compromise. At this point, blood flow through the capillaries stops. In the absence of flow, oxygen delivery stops. Hypoxic injury causes cells to release vasoactive substances (e.g. histamine, serotonin), which increase endothelial permeability. Capillaries allow continued fluid loss, which increases tissue pressure and advances injury. Nerve conduction slows, tissue pH falls due to anaerobic metabolism, surrounding tissue suffers further damage, and muscle tissue suffers necrosis, releasing myoglobin. In untreated cases the syndrome can lead to permanent functional impairment, renal failure secondary to rhabdomyolysis, and death. Patients at risk of compartment syndrome include those with high-velocity injuries, long-bone fractures, high-energy trauma, penetrating injuries such as gunshot wounds and stabbing, and crush injuries, as well as patients on anticoagulants with trauma. The patient usually complains of severe pain that is out of proportion to the injury. An assessment of the affected limb may reveal swelling which feels tense, or hard compartments. Pain on passive range of movement of fingers or toes of the affected limb is a typical feature. Late signs comprise pallor, paralysis, paraesthesia and a pulseless limb. Sensory nerves begin to lose conductive ability, followed by motor nerves. […] Fasciotomy is the definitive treatment for compartment syndrome. The purpose of fasciotomy is to achieve prompt and adequate decompression so as to restore the tissue perfusion.”)
Talus fracture. Hawkins sign. Avascular necrosis.
Calcaneal fracture. (“The most common situation leading to calcaneal fracture is a young adult who falls from a height and lands on his or her feet. […] Patients often sustain occult injuries to their lumbar or cervical spine, and the proximal femur. A thorough clinical and radiological investigation of the spine area is mandatory in patients with calcaneal fracture.”)
Idiopathic scoliosis. Adam’s forward bend test. Romberg test. Cobb angle.
Cauda equina syndrome. (“[Cauda equina syndrome] is an orthopaedic emergency. The condition is characterized by the red-flag signs comprising low back pain, unilateral or bilateral sciatica, saddle anaesthesia with sacral sparing, and bladder and bowel dysfunctions. Urinary retention is the most consistent finding. […] Urgent spinal orthopaedic or neurosurgical consultation is essential, with transfer to a unit capable of undertaking any definitive surgery considered necessary. In the long term, residual weakness, incontinence, impotence and/or sensory abnormalities are potential problems if therapy is delayed. […] The prognosis improves if a definitive cause is identified and appropriate surgical spinal decompression occurs early. Late surgical decompression produces varying results and is often associated with a poorer outcome.”)
Developmental dysplasia of the hip.
Osteoarthritis. Arthroplasty. Osteotomy. Arthrodesis. (“Early-morning stiffness that gradually diminishes with activity is typical of osteoarthritis. […] Patients with hip pathology can sometimes present with knee pain without any groin or thigh symptoms. […] Osteoarthritis most commonly affects middle-aged and elderly patients. Any synovial joint can develop osteoarthritis. This condition can lead to degeneration of articular cartilage and is often associated with stiffness.”)
Prepatellar bursitis.
Baker’s cyst.
Meniscus tear. McMurray test. Apley’s test. Lachman test.
Anterior cruciate ligament injury.
Achilles tendon rupture. Thompson Test.
Congenital Talipes Equinovarus. Ponseti method. Pirani score. (“Club foot is bilateral in about 50 per cent of cases and occurs in approximately 1 in 800 births.”)
Charcot–Marie–Tooth disease. Pes cavus. Claw toe deformity. Pes planus.
Hallux valgus. Hallux Rigidus.
Mallet toe deformity. Condylectomy. Syme amputation. (“Mallet toes are common in diabetics with peripheral neuropathy. […] Pain and/or a callosity is often the presenting complaint. This may also lead to nail deformity on the toe. Most commonly the deformity is present in the second toe. […] Footwear modification […] should be tried first […] Surgical management of mallet toe is indicated if the deformity becomes painful.”)
Hammer Toe.
Lisfranc injury. Fleck sign. (“The Lisfranc joint, which represents the articulation between the midfoot and forefoot, is composed of the five TMT [tarsometatarsal] joints. […] A Lisfranc injury encompasses everything from a sprain to a complete disruption of normal anatomy through the TMT joints. […] Lisfranc injuries are commonly undiagnosed and carry a high risk of chronic secondary disability.”)
Charcot joint. (“Charcot arthropathy results in progressive destruction of bone and soft tissues at weight-bearing joints. In its most severe form it may cause significant disruption of the bony architecture, including joint dislocations and fractures. Charcot arthropathy can occur at any joint but most commonly affects the lower regions: the foot and ankle. Bilateral disease occurs in fewer than 10 per cent of patients. Any condition that leads to a sensory or autonomic neuropathy can cause a Charcot joint. Charcot arthropathy can occur as a complication of diabetes, syphilis, alcoholism, leprosy, meningomyelocele, spinal cord injury, syringomyelia, renal dialysis and congenital insensitivity to pain. In the majority of cases, non-operative methods are preferred. The principles of management are to provide immobilization of the affected joint and reduce any areas of stress on the skin. Immobilization is usually accomplished by casting.”)
Lateral epicondylitis (tennis elbow). (“For work-related lateral epicondylitis, a systematic review identified three risk factors: handling tools heavier than 1 kg, handling loads heavier than 20 kg at least ten times per day, and repetitive movements for more than two hours per day. […] Up to 95 per cent of patients with tennis elbow respond to conservative measures.”)
Medial Epicondylitis.
De Quervain’s tenosynovitis. Finkelstein test. Intersection syndrome. Wartenberg’s syndrome.
Trigger finger.
Adhesive capsulitis (‘frozen shoulder’). (“Frozen shoulder typically has three phases: the painful phase, the stiffening phase and the thawing phase. During the initial phase there is a gradual onset of diffuse shoulder pain lasting from weeks to months. The stiffening phase is characterized by a progressive loss of motion that may last up to a year. The majority of patients lose glenohumeral external rotation, internal rotation and abduction during this phase. The final, thawing phase ranges from weeks to months and constitutes a period of gradual motion improvement. Once in this phase, the patient may require up to 9 months to regain a fully functional range of motion. There is a higher incidence of frozen shoulder in patients with diabetes compared with the general population. The incidence among patients with insulin-dependent diabetes is even higher, with an increased frequency of bilateral frozen shoulder. Adhesive capsulitis has also been reported in patients with hyperthyroidism, ischaemic heart disease, and cervical spondylosis. Non-steroidal anti-inflammatory drugs (NSAIDs) are recommended in the initial treatment phase. […] A subgroup of patients with frozen shoulder syndrome often fail to improve despite conservative measures. In these cases, interventions such as manipulation, distension arthrography or open surgical release may be beneficial.” [A while back I covered some papers on adhesive capsulitis and diabetes here (part iii) – US].)
Dupuytren’s Disease. (“Dupuytren’s contracture is a benign, slowly progressive fibroproliferative disease of the palmar fascia. […] The disease presents most commonly in the ring and little fingers and is bilateral in 45 per cent of cases. […] Dupuytren’s disease is more common in males and people of northern European origin. It can be associated with prior hand trauma, alcoholic cirrhosis, epilepsy (due to medications such as phenytoin), and diabetes. [“Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D” – I usually don’t like such unspecific reported prevalences (what does ‘up to’ really mean?), but the point is that this is not a 1 in a 100 complication among diabetics; it seems to be a relatively common complication in type 1 DM – US] The prevalence increases with age. Mild cases may not need any treatment. Surgery is indicated in progressive contractures and established deformity […] Recurrence or extension of the disease after operation is not uncommon”).

July 1, 2018 Posted by | Books, Cancer/oncology, Diabetes, Medicine, Neurology | Leave a comment

Frontiers in Statistical Quality Control (I)

“The XIth International Workshop on Intelligent Statistical Quality Control took place in Sydney, Australia from August 20 to August 23, 2013. […] The 23 papers in this volume were carefully selected by the scientific program committee, reviewed by its members, revised by the authors and, finally, adapted by the editors for this volume. The focus of the book lies on three major areas of statistical quality control: statistical process control (SPC), acceptance sampling and design of experiments. The majority of the papers deal with statistical process control while acceptance sampling and design of experiments are treated to a lesser extent.”

I’m currently reading this book. It’s quite technical and a bit longer than many of the other non-fiction books I’ve read this year (…but shorter than others; however it is still ~400 pages of content exclusively devoted to statistical papers), so it may take me a while to finish it. I figured the fact that I may not finish it for a while was not a good argument against blogging relevant sections of the book now, especially as it’s already been some time since I read the first few chapters.

When reading a book like this one I care a lot more about understanding the concepts than about understanding the proofs, so as usual the amount of math included in the post is limited; please don’t assume it’s because there are no equations in the book.

Below I have added some ideas and observations from the first 100 pages or so of the book’s coverage.

“A growing number of [statistical quality control] applications involve monitoring with rare event data. […] The most common approaches for monitoring such processes involve using an exponential distribution to model the time between the events or using a Bernoulli distribution to model whether or not each opportunity for the event results in its occurrence. The use of a sequence of independent Bernoulli random variables leads to a geometric distribution for the number of non-occurrences between the occurrences of the rare events. One surveillance method is to use a power transformation on the exponential or geometric observations to achieve approximate normality of the in control distribution and then use a standard individuals control chart. We add to the argument that use of this approach is very counterproductive and cover some alternative approaches. We discuss the choice of appropriate performance metrics. […] Most often the focus is on detecting process deterioration, i.e., an increase in the probability of the adverse event or a decrease in the average time between events. Szarka and Woodall (2011) reviewed the extensive number of methods that have been proposed for monitoring processes using Bernoulli data. Generally, it is difficult to better the performance of the Bernoulli cumulative sum (CUSUM) chart of Reynolds and Stoumbos (1999). The Bernoulli and geometric CUSUM charts can be designed to be equivalent […] Levinson (2011) argued that control charts should not be used with healthcare rare event data because in many situations there is an assignable cause for each error, e.g., each hospital-acquired infection or serious prescription error, and each incident should be investigated. We agree that serious adverse events should be investigated whether or not they result in a control chart signal. The investigation of rare adverse events, however, and the implementation of process improvements to prevent future such errors, does not preclude using a control chart to determine if the rate of such events has increased or decreased over time. In fact, a control chart can be used to evaluate the success of any process improvement initiative.”
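
As a rough illustration of the Bernoulli CUSUM idea mentioned above, here is a minimal sketch of the generic likelihood-ratio CUSUM for 0/1 event data. The in-control probability p0, the shifted probability p1, the decision limit h and the simulated data are all illustrative choices of mine, not the specific design studied by Reynolds and Stoumbos.

```python
import math
import random

def bernoulli_cusum(trials, p0, p1, h):
    """Upper CUSUM for detecting an increase in the event probability from p0
    to p1. Each trial is 0 (no adverse event) or 1 (adverse event); returns
    the number of trials until the first signal, or None if no signal."""
    w1 = math.log(p1 / p0)                  # increment when an event occurs
    w0 = math.log((1 - p1) / (1 - p0))      # increment when it does not
    s = 0.0
    for t, x in enumerate(trials, start=1):
        s = max(0.0, s + (w1 if x else w0))
        if s > h:
            return t
    return None

random.seed(1)
p0, p1, h = 0.001, 0.003, 4.0               # illustrative design values only
in_control = [int(random.random() < p0) for _ in range(200_000)]
shifted = [int(random.random() < p1) for _ in range(200_000)]
print("signal (in-control data):", bernoulli_cusum(in_control, p0, p1, h))
print("signal (shifted data):   ", bernoulli_cusum(shifted, p0, p1, h))
```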

“The choice of appropriate performance metrics for comparing surveillance schemes for monitoring Bernoulli and exponential data is quite important. The usual Average Run Length (ARL) metric refers to the average number of points plotted on the chart until a signal is given. This metric is most clearly appropriate when the time between the plotted points is constant. […] In some cases, such as in monitoring the number of near-miss accidents, it may be informative to use a metric that reflects the actual time required to obtain an out-of-control signal. Thus one can consider the number of Bernoulli trials until an out-of-control signal is given for Bernoulli data, leading to its average, the ANOS. The ANOS will be proportional to the average time before a signal if the rate at which the Bernoulli trials are observed is constant over time. For exponentially distributed data one could consider the average time to signal, the ATS. If the process is stable, then ANOS = ARL / p and ATS = ARL * θ, where p and θ are the Bernoulli probability and the exponential mean, respectively. […] To assess out-of-control performance we believe it is most realistic to consider steady-state performance where the shift in the parameter occurs at some time after monitoring has begun. […] Under this scenario one cannot easily convert the ARL metric to the ANOS and ATS metrics. Consideration of steady state performance of competing methods is important because some methods have an implicit headstart feature that results in good zero-state performance, but poor steady-state performance.”
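
To make the two stated identities concrete, a tiny numerical illustration (the ARL, p and θ values are invented for the example):

```python
# In-control conversions quoted above: ANOS = ARL / p for Bernoulli data,
# ATS = ARL * theta for exponential data. All numbers are invented.
ARL = 370.0        # average number of plotted points until a (false) signal
p = 0.002          # in-control probability of the adverse event per trial
theta = 1.5        # in-control mean time between events, e.g. in days

ANOS = ARL / p     # average number of Bernoulli trials to a signal
ATS = ARL * theta  # average time to a signal with exponential data

print(f"ANOS = {ANOS:,.0f} trials")   # 185,000 trials
print(f"ATS  = {ATS:.1f} days")       # 555.0 days
```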

“Data aggregation is frequently done when monitoring rare events and for count data generally. For example, one might monitor the number of accidents per month in a plant or the number of patient falls per week in a hospital. […] Schuh et al. (2013) showed […] that there can be significantly long expected delays in detecting process deterioration when data are aggregated over time even when there are few samples with zero events. One can always aggregate data over long enough time periods to avoid zero counts, but the consequence is slower detection of increases in the rate of the adverse event. […] aggregating event data over fixed time intervals, as frequently done in practice, can result in significant delays in detecting increases in the rate of adverse events. […] Another type of aggregation is to wait until one has observed a given number of events before updating a control chart based on a proportion or waiting time. […] This type of aggregation […] does not appear to delay the detection of process changes nearly as much as aggregating data over fixed time periods. […] We believe that the adverse effect of aggregating data over time has not been fully appreciated in practice and more research work is needed on this topic. Only a couple of the most basic scenarios for count data have been studied. […] Virtually all of the work on monitoring the rate of rare events is based on the assumption that there is a sustained shift in the rate. In some applications the rate change may be transient. In this scenario other performance metrics would be needed, such as the probability of detecting the process shift during the transient period. The effect of data aggregation over time might be larger if shifts in the parameter are not sustained.”
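
A toy simulation may help illustrate the aggregation penalty. In the sketch below the same rate increase is monitored with simple 3-sigma count charts based on daily, weekly and monthly aggregation; the rates, the limits and the placement of the shift at the start of monitoring are my own illustrative choices, and the three charts are not calibrated to a common in-control false-alarm rate, so the output only illustrates the mechanism (longer windows, slower detection) rather than reproducing the results of Schuh et al.

```python
import numpy as np

rng = np.random.default_rng(0)

lam0, lam1 = 0.1, 0.3   # in-control and shifted event rates per day (invented)
n_rep = 5_000

for T in (1, 7, 30):                          # aggregation window in days
    ucl = lam0 * T + 3 * np.sqrt(lam0 * T)    # 3-sigma limit on counts per window
    delays = []
    for _ in range(n_rep):
        windows = 0
        while True:                           # shifted rate from the start
            windows += 1
            if rng.poisson(lam1 * T) > ucl:
                delays.append(windows * T)    # detection delay in days
                break
    print(f"window = {T:2d} days: mean delay to signal ≈ {np.mean(delays):5.1f} days")
```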

“Big data is a popular term that is used to describe the large, diverse, complex and/or longitudinal datasets generated from a variety of instruments, sensors and/or computer-based transactions. […] The acquisition of data does not automatically transfer to new knowledge about the system under study. […] To be able to gain knowledge from big data, it is imperative to understand both the scale and scope of big data. The challenges with processing and analyzing big data are not only limited to the size of the data. These challenges include the size, or volume, as well as the variety and velocity of the data (Zikopoulos et al. 2012). Known as the 3V’s, the volume, variety, and/or velocity of the data are the three main characteristics that distinguish big data from the data we have had in the past. […] Many have suggested that there are more V’s that are important to the big data problem such as veracity and value (IEEE BigData 2013). Veracity refers to the trustworthiness of the data, and value refers to the value that the data adds to creating knowledge about a topic or situation. While we agree that these are important data characteristics, we do not see these as key features that distinguish big data from regular data. It is important to evaluate the veracity and value of all data, both big and small. Both veracity and value are related to the concept of data quality, an important research area in the Information Systems (IS) literature for more than 50 years. The research literature discussing the aspects and measures of data quality is extensive in the IS field, but seems to have reached a general agreement that the multiple aspects of data quality can be grouped into several broad categories […]. Two of the categories relevant here are contextual and intrinsic dimensions of data quality. Contextual aspects of data quality are context specific measures that are subjective in nature, including concepts like value-added, believability, and relevance. […] Intrinsic aspects of data quality are more concrete in nature, and include four main dimensions: accuracy, timeliness, consistency, and completeness […] From our perspective, many of the contextual and intrinsic aspects of data quality are related to the veracity and value of the data. That said, big data presents new challenges in conceptualizing, evaluating, and monitoring data quality.”

“The application of SPC methods to big data is similar in many ways to the application of SPC methods to regular data. However, many of the challenges inherent to properly studying and framing a problem can be more difficult in the presence of massive amounts of data. […] it is important to note that building the model is not the end-game. The actual use of the analysis in practice is the goal. Thus, some consideration needs to be given to the actual implementation of the statistical surveillance applications. This brings us to another important challenge, that of the complexity of many big data applications. SPC applications have a tradition of back of the napkin methods. The custom within SPC practice is the use of simple methods that are easy to explain like the Shewhart control chart. These are often the best methods to use to gain credibility because they are easy to understand and easy to explain to a non-statistical audience. However, big data often does not lend itself to easy-to-compute or easy-to-explain methods. While a control chart based on a neural net may work well, it may be so difficult to understand and explain that it may be abandoned for inferior, yet simpler methods. Thus, it is important to consider the dissemination and deployment of advanced analytical methods in order for them to be effectively used in practice. […] Another challenge in monitoring high dimensional data sets is the fact that not all of the monitored variables are likely to shift at the same time; thus, some method is necessary to identify the process variables that have changed. In high dimensional data sets, the decomposition methods used with multivariate control charts can become very computationally expensive. Several authors have considered variable selection methods combined with control charts to quickly detect process changes in a variety of practical scenarios including fault detection, multistage processes, and profile monitoring. […] All of these methods based on variable selection techniques are based on the idea of monitoring subsets of potentially faulty variables. […] Some variable reduction methods are needed to better identify shifts. We believe that further work in the areas combining variable selection methods and surveillance are important for quickly and efficiently diagnosing changes in high-dimensional data.”

“A multiple stream process (MSP) is a process that generates several streams of output. From the statistical process control standpoint, the quality variable and its specifications are the same in all streams. A classical example is a filling process such as the ones found in beverage, cosmetics, pharmaceutical and chemical industries, where a filler machine may have many heads. […] Although multiple-stream processes are found very frequently in industry, the literature on schemes for the statistical control of such kind of processes is far from abundant. This paper presents a survey of the research on this topic. […] The first specific techniques for the statistical control of MSPs are the group control charts (GCCs) […] Clearly the chief motivation for these charts was to avoid the proliferation of control charts that would arise if every stream were controlled with a separate pair of charts (one for location and other for spread). Assuming the in-control distribution of the quality variable to be the same in all streams (an assumption which is sometimes too restrictive), the control limits should be the same for every stream. So, the basic idea is to build only one chart (or a pair of charts) with the information from all streams.”
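
A rough sketch of the group-chart idea, plotting only the extreme stream means against a single common pair of limits, might look as follows. The number of streams, the sample size, the in-control parameters and the plain 3-sigma limits are all illustrative, and a practical design would widen the limits (e.g. with a Bonferroni-type correction, as mentioned below) to control the overall false-alarm rate across streams.

```python
import numpy as np

rng = np.random.default_rng(42)

k, n = 6, 4                            # 6 streams, n = 4 observations per stream
mu0, sigma = 10.0, 1.0                 # assumed in-control mean and std deviation
ucl = mu0 + 3 * sigma / np.sqrt(n)     # one common pair of 3-sigma limits
lcl = mu0 - 3 * sigma / np.sqrt(n)     # (no multiplicity correction here)

for t in range(20):                    # 20 sampling times
    sample = rng.normal(mu0, sigma, size=(k, n))
    if t >= 10:
        sample[2] += 1.5               # stream 2 shifts upwards from time 10
    stream_means = sample.mean(axis=1)
    hi, lo = stream_means.max(), stream_means.min()   # the plotted extremes
    if hi > ucl:
        print(f"t={t:2d}: high-side signal, stream {int(np.argmax(stream_means))}")
    elif lo < lcl:
        print(f"t={t:2d}: low-side signal, stream {int(np.argmin(stream_means))}")
```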

“The GCC will work well if the values of the quality variable in the different streams are independent and identically distributed, that is, if there is no cross-correlation between streams. However, such an assumption is often unrealistic. In many real multiple-stream processes, the value of the observed quality variable is typically better described as the sum of two components: a common component (let’s refer to it as “mean level”), exhibiting variation that affects all streams in the same way, and the individual component of each stream, which corresponds to the difference between the stream observation and the common mean level. […] [T]he presence of the mean level component leads to reduced sensitivity of Boyd’s GCC to shifts in the individual component of a stream if the variance […] of the mean level is large with respect to the variance […] of the individual stream components. Moreover, the GCC is a Shewhart-type chart; if the data exhibit autocorrelation, the traditional form of estimating the process standard deviation (for establishing the control limits) based on the average range or average standard deviation of individual samples (even with the Bonferroni or Dunn-Sidak correction) will result in too frequent false alarms, due to the underestimation of the process total variance. […] [I]n the converse situation […] the GCC will have little sensitivity to causes that affect all streams — at least, less sensitivity than would have a chart on the average of the measurements across all streams, since this one would have tighter limits than the GCC. […] Therefore, to monitor MSPs with the two components described, Mortell and Runger (1995) proposed using two control charts: First, a chart for the grand average between streams, to monitor the mean level. […] For monitoring the individual stream components, they proposed using a special range chart (Rt chart), whose statistic is the range between streams, that is, the difference between the largest stream average and the smallest stream average […] the authors commented that both the chart on the average of all streams and the Rt chart can be used even when at each sampling time only a subset of the streams are sampled (provided that the number of streams sampled remains constant). The subset can be varied periodically or even chosen at random. […] it is common in practice to measure only a subset of streams at each sampling time, especially when the number of streams is large. […] Although almost the totality of Mortell and Runger’s paper is about the monitoring of the individual streams, the importance of the chart on the average of all streams for monitoring the mean level of the process cannot be overemphasized.”
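
In code, the two statistics proposed by Mortell and Runger would look something like the sketch below. Only the two definitions (the grand average across streams and the range between the stream averages) follow the description above; the data are invented and the control limits for the two charts are not derived here.

```python
import numpy as np

def mortell_runger_stats(sample):
    """sample: array of shape (k_streams, n_obs) taken at one sampling time.
    Returns the grand average across streams (for the mean-level chart) and
    the range between the stream averages (for the Rt chart)."""
    stream_means = sample.mean(axis=1)
    return stream_means.mean(), stream_means.max() - stream_means.min()

rng = np.random.default_rng(7)
sample = rng.normal(loc=10.0, scale=1.0, size=(6, 4))   # 6 streams, n = 4
sample[3] += 2.0                                        # fault in one stream only
xbar, rt = mortell_runger_stats(sample)
print(f"grand average = {xbar:.2f}, range between stream averages Rt = {rt:.2f}")
# xbar goes on the chart for the common mean level, Rt on the Rt chart,
# each compared against its own control limits (not derived here).
```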

“Epprecht and Barros (2013) studied a filling process application where the stream variances were similar, but the stream means differed, wandered, changed from day to day, were very difficult to adjust, and the production runs were too short to enable good estimation of the parameters of the individual streams. The solution adopted to control the process was to adjust the target above the nominal level to compensate for the variation between streams, as a function of the lower specification limit, of the desired false-alarm rate and of a point (shift, power) arbitrarily selected. This would be a MSP version of “acceptance control charts” (Montgomery 2012, Sect. 10.2) if taking samples with more than one observation per stream [is] feasible.”

“Most research works consider a small to moderate number of streams. Some processes may have hundreds of streams, and in this case the issue of how to control the false-alarm rate while keeping enough detection power […] becomes a real problem. […] Real multiple-stream processes can be very ill-behaved. The author of this paper has seen a plant with six 20-stream filling processes in which the stream levels had different means and variances and could not be adjusted separately (one single pump and 20 hoses). For many real cases with particular twists like this one, it happens that no previous solution in the literature is applicable. […] The appropriateness and efficiency of [different monitoring methods] depends on the dynamic behaviour of the process over time, on the degree of cross-correlation between streams, on the ratio between the variabilities of the individual streams and of the common component (note that these three factors are interrelated), on the type and size of shifts that are likely and/or relevant to detect, on the ease or difficulty to adjust all streams in the same target, on the process capability, on the number of streams, on the feasibility of taking samples of more than one observation per stream at each sampling time (or even the feasibility of taking one observation of every stream at each sampling time!), on the length of the production runs, and so on. So, the first problem in a practical application is to characterize the process and select the appropriate monitoring scheme (or to adapt one, or to develop a new one). This analysis may not be trivial for the average practitioner in industry. […] Jirasettapong and Rojanarowan (2011) is the only work I have found on the issue of selecting the most suitable monitoring scheme for an MSP. It considers only a limited number of alternative schemes and a few aspects of the problem. More comprehensive analyses are needed.”

June 27, 2018 Posted by | Books, Data, Engineering, Statistics | Leave a comment

Oceans (II)

In this post I have added some more observations from the book and some more links related to the book’s coverage.

“Almost all the surface waves we observe are generated by wind stress, acting either locally or far out to sea. Although the wave crests appear to move forwards with the wind, this does not occur. Mechanical energy, created by the original disturbance that caused the wave, travels through the ocean at the speed of the wave, whereas water does not. Individual molecules of water simply move back and forth, up and down, in a generally circular motion. […] The greater the wind force, the bigger the wave, the more energy stored within its bulk, and the more energy released when it eventually breaks. The amount of energy is enormous. Over long periods of time, whole coastlines retreat before the pounding waves – cliffs topple, rocks are worn to pebbles, pebbles to sand, and so on. Individual storm waves can exert instantaneous pressures of up to 30,000 kilograms […] per square metre. […] The rate at which energy is transferred across the ocean is the same as the velocity of the wave. […] waves typically travel at speeds of 30-40 kilometres per hour, and […] waves with a greater wavelength will travel faster than those with a shorter wavelength. […] With increasing wind speed and duration over which the wind blows, the wave height, period, and length all increase. The distance over which the wind blows is known as fetch, and is critical in influencing the growth of waves — the greater the area of ocean over which a storm blows, then the larger and more powerful the waves generated. The three stages in wave development are known as sea, swell, and surf. […] The ocean is highly efficient at transmitting energy. Water offers so little resistance to the small orbital motion of water particles in waves that individual wave trains may continue for thousands of kilometres. […] When the wave train encounters shallow water — say 50 metres for a 100-metre wavelength — the waves first feel the bottom and begin to slow down in response to frictional resistance. Wavelength decreases, the crests bunch closer together, and wave height increases until the wave becomes unstable and topples forwards as surf. […] Very often, waves approach obliquely to the coast and set up a significant transfer of water and sediment along the shoreline. The long-shore currents so developed can be very powerful, removing beach sand and building out spits and bars across the mouths of estuaries.” (People who’re interested in knowing more about these topics will probably enjoy Fredric Raichlen’s book on these topics – I did, US.)
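
The book does not spell out the formula, but the claims above (longer waves travel faster; a 100-metre wave starts to 'feel the bottom' at a depth of about 50 metres) are consistent with the standard deep-water dispersion relation of linear wave theory, c = sqrt(gλ/2π), combined with the usual rule of thumb that the orbital motion becomes negligible below about half a wavelength; a quick numerical check:

```python
import math

g = 9.81  # m/s^2

def deep_water_speed(wavelength_m):
    """Phase speed of a deep-water surface wave, c = sqrt(g * L / (2 * pi))."""
    return math.sqrt(g * wavelength_m / (2 * math.pi))

for wavelength in (50, 100, 200):
    c = deep_water_speed(wavelength)
    print(f"wavelength {wavelength:3d} m: c ≈ {c:4.1f} m/s ≈ {c * 3.6:5.1f} km/h, "
          f"'feels the bottom' below roughly {wavelength / 2:.0f} m")
```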

“Wind is the principal force that drives surface currents, but the pattern of circulation results from a more complex interaction of wind drag, pressure gradients, and Coriolis deflection. Wind drag is a very inefficient process by which the momentum of moving air molecules is transmitted to water molecules at the ocean surface setting them in motion. The speed of water molecules (the current), initially in the direction of the wind, is only about 3–4 per cent of the wind speed. This means that a wind blowing constantly over a period of time at 50 kilometres per hour will produce a water current of about 1 knot (2 kilometres per hour). […] Although the movement of wind may seem random, changing from one day to the next, surface winds actually blow in a very regular pattern on a planetary scale. The subtropics are known for the trade winds with their strong easterly component, and the mid-latitudes for persistent westerlies. Wind drag by such large-scale wind systems sets the ocean waters in motion. The trade winds produce a pair of equatorial currents moving to the west in each ocean, while the westerlies drive a belt of currents that flow to the east at mid-latitudes in both hemispheres. […] Deflection by the Coriolis force and ultimately by the position of the continents creates very large oval-shaped gyres in each ocean.”

“The control exerted by the oceans is an integral and essential part of the global climate system. […] The oceans are one of the principal long-term stores on Earth for carbon and carbon dioxide […] The oceans are like a gigantic sponge holding fifty times more carbon dioxide than the atmosphere […] the sea surface acts as a two-way control valve for gas transfer, which opens and closes in response to two key properties – gas concentration and ocean stirring. First, the difference in gas concentration between the air and sea controls the direction and rate of gas exchange. Gas concentration in water depends on temperature—cold water dissolves more carbon dioxide than warm water, and on biological processes—such as photosynthesis and respiration by microscopic plants, animals, and bacteria that make up the plankton. These transfer processes affect all gases […]. Second, the strength of the ocean-stirring process, caused by wind and foaming waves, affects the ease with which gases are absorbed at the surface. More gas is absorbed during stormy weather and, once dissolved, is quickly mixed downwards by water turbulence. […] The transfer of heat, moisture, and other gases between the ocean and atmosphere drives small-scale oscillations in climate. The El Niño Southern Oscillation (ENSO) is the best known, causing 3–7-year climate cycles driven by the interaction of sea-surface temperature and trade winds along the equatorial Pacific. The effects are worldwide in their impact through a process of atmospheric teleconnection — causing floods in Europe and North America, monsoon failure and severe drought in India, South East Asia, and Australia, as well as decimation of the anchovy fishing industry off Peru.”

“Earth’s climate has not always been as it is today […] About 100 million years ago, for example, palm trees and crocodiles lived as far north as 80°N – the equivalent of Arctic Canada or northern Greenland today. […] Most of the geological past has enjoyed warm conditions. These have been interrupted at irregular intervals by cold and glacial climates of altogether shorter duration […][,] the last [of them] beginning around 3 million years ago. We are still in the grip of this last icehouse state, although in one of its relatively brief interglacial phases. […] Sea level has varied in the past in close consort with climate change […]. Around twenty-five thousand years ago, at the height of the last Ice Age, the global sea level was 120 metres lower than today. Huge tracts of the continental shelves that rim today’s landmasses were exposed. […] Further back in time, 80 million years ago, the sea level was around 250–350 metres higher than today, so that 82 per cent of the planet was ocean and only 18 per cent remained as dry land. Such changes have been the norm throughout geological history and entirely the result of natural causes.”

“Most of the solar energy absorbed by seawater is converted directly to heat, and water temperature is vital for the distribution and activity of life in the oceans. Whereas mean temperature ranges from 0 to 40 degrees Celsius, 90 per cent of the oceans are permanently below 5°C. Most marine animals are ectotherms (cold-blooded), which means that they obtain their body heat from their surroundings. They generally have narrow tolerance limits and are restricted to particular latitudinal belts or water depths. Marine mammals and birds are endotherms (warm-blooded), which means that their metabolism generates heat internally thereby allowing the organism to maintain constant body temperature. They can tolerate a much wider range of external conditions. Coping with the extreme (hydrostatic) pressure exerted at depth within the ocean is a challenge. For every 30 metres of water, the pressure increases by 3 atmospheres – roughly equivalent to the weight of an elephant.”
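
The '3 atmospheres per 30 metres' figure is just hydrostatics, ΔP = ρgh; a quick check using a typical seawater density (the density value is my assumption, not a number from the book):

```python
rho = 1025.0     # typical seawater density in kg/m^3 (assumed value)
g = 9.81         # m/s^2
atm = 101_325.0  # pascals per standard atmosphere

for depth in (30, 100, 1000, 4000):
    delta_p = rho * g * depth                 # hydrostatic pressure increase
    print(f"{depth:5d} m: about {delta_p / atm:6.0f} atm above surface pressure")
```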

“There are at least 6000 different species of diatom. […] An average litre of surface water from the ocean contains over half a million diatoms and other unicellular phytoplankton and many thousands of zooplankton.”

“Several different styles of movement are used by marine organisms. These include floating, swimming, jet propulsion, creeping, crawling, and burrowing. […] The particular physical properties of water that most affect movement are density, viscosity, and buoyancy. Seawater is about 800 times denser than air and nearly 100 times more viscous. Consequently there is much more resistance on movement than on land […] Most large marine animals, including all fishes and mammals, have adopted some form of active swimming […]. Swimming efficiency in fishes has been achieved by minimizing the three types of drag resistance created by friction, turbulence, and body form. To reduce surface friction, the body must be smooth and rounded like a sphere. The scales of most fish are also covered with slime as further lubrication. To reduce form drag, the cross-sectional area of the body should be minimal — a pencil shape is ideal. To reduce the turbulent drag as water flows around the moving body, a rounded front end and tapered rear is required. […] Fins play a versatile role in the movement of a fish. There are several types including dorsal fins along the back, caudal or tail fins, and anal fins on the belly just behind the anus. Operating together, the beating fins provide stability and steering, forwards and reverse propulsion, and braking. They also help determine whether the motion is up or down, forwards or backwards.”

Links:

Rip current.
Rogue wave. Agulhas Current. Kuroshio Current.
Tsunami.
Tide. Tidal range.
Geostrophic current.
Ekman Spiral. Ekman transport. Upwelling.
Global thermohaline circulation system. Antarctic bottom water. North Atlantic Deep Water.
Rio Grande Rise.
Denmark Strait. Denmark Strait cataract (/waterfall?).
Atmospheric circulation. Jet streams.
Monsoon.
Cyclone. Tropical cyclone.
Ozone layer. Ozone depletion.
Milankovitch cycles.
Little Ice Age.
Oxygen Isotope Stratigraphy of the Oceans.
Contourite.
Earliest known life forms. Cyanobacteria. Prokaryote. Eukaryote. Multicellular organism. Microbial mat. Ediacaran. Cambrian explosion. Pikaia. Vertebrate. Major extinction events. Permian–Triassic extinction event. (The author seems to disagree with the authors of this article about potential causes, in particular in so far as they relate to the formation of Pangaea – as I felt uncertain about the accuracy of the claims made in the book I decided against covering this topic in this post, even though I find it interesting).
Tethys Ocean.
Plesiosauria. Pliosauroidea. Ichthyosaur. Ammonoidea. Belemnites. Pachyaena. Cetacea.
Pelagic zone. Nekton. Benthic zone. Neritic zone. Oceanic zone. Bathyal zone. Hadal zone.
Phytoplankton. Silicoflagellates. Coccolithophore. Dinoflagellate. Zooplankton. Protozoa. Tintinnid. Radiolaria. Copepods. Krill. Bivalves.
Elasmobranchii.
Ampullae of Lorenzini. Lateral line.
Baleen whale. Humpback whale.
Coral reef.
Box jellyfish. Stonefish.
Horseshoe crab.
Greenland shark. Giant squid.
Hydrothermal vent. Pompeii worms.
Atlantis II Deep. Aragonite. Phosphorite. Deep sea mining. Oil platform. Methane clathrate.
Ocean thermal energy conversion. Tidal barrage.
Mariculture.
Exxon Valdez oil spill.
Bottom trawling.

June 24, 2018 Posted by | Biology, Books, Engineering, Geology, Paleontology, Physics | Leave a comment