Econstudentlog

Supermassive BH Mergers

This is the first post I’ve published in a while; as mentioned earlier, the blogging hiatus was due to internet connectivity issues related to my recent move. Those issues should now be resolved, and I hope to get back to blogging regularly soon.

Some links related to the lecture’s coverage:

Supermassive black hole.
Binary black hole. Final parsec problem.
LIGO (Laser Interferometer Gravitational-Wave Observatory). Laser Interferometer Space Antenna (LISA).
Dynamical friction.
Science with the space-based interferometer eLISA: Supermassive black hole binaries (Klein et al., 2016).
Off the Beaten Path: A New Approach to Realistically Model The Orbital Decay of Supermassive Black Holes in Galaxy Formation Simulations (Tremmel et al., 2015).
Dancing to ChaNGa: A Self-Consistent Prediction For Close SMBH Pair Formation Timescales Following Galaxy Mergers (Tremmel et al., 2017).
Growth and activity of black holes in galaxy mergers with varying mass ratios (Capelo et al., 2015).
Tidal heating. Tidal stripping.
Nuclear coups: dynamics of black holes in galaxy mergers (Wassenhove et al., 2013).
The birth of a supermassive black hole binary (Pfister et al., 2017).
Massive black holes and gravitational waves (I assume this is the lecturer’s own notes for a similar talk held at another point in time – there’s a lot of overlap between these notes and stuff covered in the lecture, so if you’re curious you could go have a look. As far as I could see all figures in the second half of the link, as well as a few of the earlier ones, are figures which were also included in this lecture).

September 18, 2018 Posted by | Astronomy, Lectures, Physics | Leave a comment

Brief update

I recently moved, and it’s taking a lot longer than I’d have liked to get a new internet connection set up. I probably won’t blog much, if at all, in the next couple of weeks.

September 5, 2018 Posted by | Personal | Leave a comment

A few diabetes papers of interest

i. Islet Long Noncoding RNAs: A Playbook for Discovery and Characterization.

“This review will 1) highlight what is known about lncRNAs in the context of diabetes, 2) summarize the strategies used in lncRNA discovery pipelines, and 3) discuss future directions and the potential impact of studying the role of lncRNAs in diabetes.”

“Decades of mouse research and advances in genome-wide association studies have identified several genetic drivers of monogenic syndromes of β-cell dysfunction, as well as 113 distinct type 2 diabetes (T2D) susceptibility loci (1) and ∼60 loci associated with an increased risk of developing type 1 diabetes (T1D) (2). Interestingly, these studies discovered that most T1D and T2D susceptibility loci fall outside of coding regions, which suggests a role for noncoding elements in the development of disease (3,4). Several studies have demonstrated that many causal variants of diabetes are significantly enriched in regions containing islet enhancers, promoters, and transcription factor binding sites (5,6); however, not all diabetes susceptibility loci can be explained by associations with these regulatory regions. […] Advances in RNA sequencing (RNA-seq) technologies have revealed that mammalian genomes encode tens of thousands of RNA transcripts that have similar features to mRNAs, yet are not translated into proteins (7). […] detailed characterization of many of these transcripts has challenged the idea that the central role for RNA in a cell is to give rise to proteins. Instead, these RNA transcripts make up a class of molecules called noncoding RNAs (ncRNAs) that function either as “housekeeping” ncRNAs, such as transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs), that are expressed ubiquitously and are required for protein synthesis or as “regulatory” ncRNAs that control gene expression. While the functional mechanisms of short regulatory ncRNAs, such as microRNAs (miRNAs), small interfering RNAs (siRNAs), and Piwi-interacting RNAs (piRNAs), have been described in detail (8–10), the most abundant and functionally enigmatic regulatory ncRNAs are called long noncoding RNAs (lncRNAs) that are loosely defined as RNAs larger than 200 nucleotides (nt) that do not encode for protein (11–13). Although using a definition based strictly on size is somewhat arbitrary, this definition is useful both bioinformatically […] and technically […]. While the 200-nt size cutoff has simplified identification of lncRNAs, this rather broad classification means several features of lncRNAs, including abundance, cellular localization, stability, conservation, and function, are inherently heterogeneous (15–17). Although this represents one of the major challenges of lncRNA biology, it also highlights the untapped potential of lncRNAs to provide a novel layer of gene regulation that influences islet physiology and pathophysiology.”

“Although the role of miRNAs in diabetes has been well established (9), analyses of lncRNAs in islets have lagged behind their short ncRNA counterparts. However, several recent studies provide evidence that lncRNAs are crucial components of the islet regulome and may have a role in diabetes (27). […] misexpression of several lncRNAs has been correlated with diabetes complications, such as diabetic nephropathy and retinopathy (29–31). There are also preliminary studies suggesting that circulating lncRNAs, such as Gas5, MIAT1, and SENCR, may represent effective molecular biomarkers of diabetes and diabetes-related complications (32,33). Finally, several recent studies have explored the role of lncRNAs in the peripheral metabolic tissues that contribute to energy homeostasis […]. In addition to their potential as genetic drivers and/or biomarkers of diabetes and diabetes complications, lncRNAs can be exploited for the treatment of diabetes. For example, although tremendous efforts have been dedicated to generating replacement β-cells for individuals with diabetes (35,36), human pluripotent stem cell–based β-cell differentiation protocols remain inefficient, and the end product is still functionally and transcriptionally immature compared with primary human β-cells […]. This is largely due to our incomplete knowledge of in vivo differentiation regulatory pathways, which likely include a role for lncRNAs. […] Inherent characteristics of lncRNAs have also made them attractive candidates for drug targeting, which could be exploited for developing new diabetes therapies.”

“With the advancement of high-throughput sequencing techniques, the list of islet-specific lncRNAs is growing exponentially; however, functional characterization is missing for the majority of these lncRNAs. […] Tens of thousands of lncRNAs have been identified in different cell types and model organisms; however, their functions largely remain unknown. Although the tools for determining lncRNA function are technically restrictive, uncovering novel regulatory mechanisms will have the greatest impact on understanding islet function and identifying novel therapeutics for diabetes. To date, no biochemical assay has been used to directly determine the molecular mechanisms by which islet lncRNAs function, which highlights both the infancy of the field and the difficulty in implementing these techniques. […] Due to the infancy of the lncRNA field, most of the biochemical and genetic tools used to interrogate lncRNA function have only recently been developed or are adapted from techniques used to study protein-coding genes and we are only beginning to appreciate the limits and challenges of borrowing strategies from the protein-coding world.”

“The discovery of lncRNAs as a novel class of tissue-specific regulatory molecules has spawned an exciting new field of biology that will significantly impact our understanding of pancreas physiology and pathophysiology. As the field continues to grow, there is growing appreciation that lncRNAs will provide many of the missing components to existing molecular pathways that regulate islet biology and contribute to diabetes when they become dysfunctional. However, to date, most of the experimental emphasis on lncRNAs has focused on large-scale discovery using genome-wide approaches, and there remains a paucity of functional analysis.”

ii. Diabetes and Trajectories of Estimated Glomerular Filtration Rate: A Prospective Cohort Analysis of the Atherosclerosis Risk in Communities Study.

“Diabetes is among the strongest common risk factors for end-stage renal disease, and in industrialized countries, diabetes contributes to ∼50% of cases (3). Less is known about the pattern of kidney function decline associated with diabetes that precedes end-stage renal disease. Identifying patterns of estimated glomerular filtration rate (eGFR) decline could inform monitoring practices for people at high risk of chronic kidney disease (CKD) progression. A better understanding of when and in whom eGFR decline occurs would be useful for the design of clinical trials because eGFR decline >30% is now often used as a surrogate end point for CKD progression (4). Trajectories among persons with diabetes are of particular interest because of the possibility for early intervention and the prevention of CKD development. However, eGFR trajectories among persons with new diabetes may be complex due to the hypothesized period of hyperfiltration by which GFR increases, followed by progressive, rapid decline (5). Using data from the Atherosclerosis Risk in Communities (ARIC) study, an ongoing prospective community-based cohort of >15,000 participants initiated in 1987 with serial measurements of creatinine over 26 years, our aim was to characterize patterns of eGFR decline associated with diabetes, identify demographic, genetic, and modifiable risk factors within the population with diabetes that were associated with steeper eGFR decline, and assess for evidence of early hyperfiltration.”

“We categorized people into groups of no diabetes, undiagnosed diabetes, and diagnosed diabetes at baseline (visit 1) and compared baseline clinical characteristics using ANOVA for continuous variables and Pearson χ2 tests for categorical variables. […] To estimate individual eGFR slopes over time, we used linear mixed-effects models with random intercepts and random slopes. These models were fit on diabetes status at baseline as a nominal variable to adjust the baseline level of eGFR and included an interaction term between diabetes status at baseline and time to estimate annual decline in eGFR by diabetes categories. Linear mixed models were run unadjusted and adjusted, with the latter model including the following diabetes and kidney disease–related risk factors: age, sex, race–center, BMI, systolic blood pressure, hypertension medication use, HDL, prevalent coronary heart disease, annual family income, education status, and smoking status, as well as each variable interacted with time. Continuous covariates were centered at the analytic population mean. We tested model assumptions and considered different covariance structures, comparing nested models using Akaike information criteria. We identified the unstructured covariance model as the most optimal and conservative approach. From the mixed models, we described the overall mean annual decline by diabetes status at baseline and used the random effects to estimate best linear unbiased predictions to describe the distributions of yearly slopes in eGFR by diabetes status at baseline and displayed them using kernel density plots.”
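
For readers who want to see what such a model looks like in practice, here is a minimal sketch in Python (statsmodels) with synthetic data standing in for the ARIC structure; the variable names are hypothetical and the specification is simplified, so this illustrates the modelling strategy rather than the authors’ actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for the ARIC structure: one row per
# person-visit, a baseline diabetes category, and repeated eGFR measurements.
rng = np.random.default_rng(0)
n_people, n_visits = 300, 5
ids = np.repeat(np.arange(n_people), n_visits)
time = np.tile(np.arange(n_visits) * 5.0, n_people)          # years since baseline
group = rng.choice(["none", "undiagnosed", "diagnosed"], n_people, p=[0.88, 0.04, 0.08])
diabetes = np.repeat(group, n_visits)
true_slope = np.array([{"none": -1.5, "undiagnosed": -2.0, "diagnosed": -2.9}[g] for g in diabetes])
egfr = (100
        + np.repeat(rng.normal(0, 10, n_people), n_visits)    # person-specific intercept
        + true_slope * time
        + rng.normal(0, 4, n_people * n_visits))              # measurement noise
df = pd.DataFrame({"id": ids, "time": time, "diabetes": diabetes, "egfr": egfr})

# Random-intercept, random-slope model with a diabetes-by-time interaction:
# the coefficient on 'time' is the mean annual eGFR change in the reference group,
# and the 'C(diabetes)[...]:time' terms shift that slope for the other groups.
model = smf.mixedlm("egfr ~ C(diabetes) * time", df,
                    groups=df["id"], re_formula="~time")
result = model.fit()
print(result.summary())

# Person-specific deviations from the mean slope (the basis for kernel density
# plots of individual eGFR slopes) come from the estimated random effects.
blups = result.random_effects   # dict: participant id -> random intercept and slope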

“Because of substantial variation in annual eGFR slope among people with diagnosed diabetes, we sought to identify risk factors that were associated with faster decline. Among those with diagnosed diabetes, we compared unadjusted and adjusted mean annual decline in eGFR by race–APOL1 risk status (white, black– APOL1 low risk, and black–APOL1 high risk) [here’s a relevant link, US], systolic blood pressure […], smoking status […], prevalent coronary heart disease […], diabetes medication use […], HbA1c […], and 1,5-anhydroglucitol (≥10 and <10 μg/mL) [relevant link, US]. Because some of these variables were only available at visit 2, we required that participants included in this subgroup analysis attend both visits 1 and 2 and not be missing information on APOL1 or the variables assessed at visit 2 to ensure a consistent sample size. In addition to diabetes and kidney disease–related risk factors in the adjusted model, we also included diabetes medication use and HbA1c to account for diabetes severity in these analyses. […] to explore potential hyperfiltration, we used a linear spline model to allow the slope to change for each diabetes category between the first 3 years of follow-up (visit 1 to visit 2) and the subsequent time period (visit 2 to visit 5).”
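
The linear-spline idea, i.e. letting the slope change at roughly year 3 (visit 1 to visit 2 versus visit 2 to visit 5), amounts to adding a ‘hinge’ term to the model above. Continuing the synthetic example from the previous sketch (again a simplified illustration, not the authors’ exact specification):

```python
# Hinge term: 0 before year 3, then grows one-for-one with time thereafter.
df["time_post3"] = np.clip(df["time"] - 3.0, 0.0, None)

# Early slope = coefficient on 'time' (the window where hyperfiltration could
# show up); late slope = coefficient on 'time' plus coefficient on 'time_post3'.
# Interacting both terms with diabetes status lets the slopes differ by group.
spline_model = smf.mixedlm("egfr ~ C(diabetes) * (time + time_post3)", df,
                           groups=df["id"], re_formula="~time")
spline_result = spline_model.fit()
print(spline_result.params.filter(like="time"))
```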

“There were 15,517 participants included in the analysis: 13,698 (88%) without diabetes, 634 (4%) with undiagnosed diabetes, and 1,185 (8%) with diagnosed diabetes at baseline. […] At baseline, participants with undiagnosed and diagnosed diabetes were older, more likely to be black or have hypertension and coronary heart disease, and had higher mean BMI and lower mean HDL compared with those without diabetes […]. Income and education levels were also lower among those with undiagnosed and diagnosed diabetes compared with those without diabetes. […] Overall, there was a nearly linear association between eGFR and age over time, regardless of diabetes status […]. The crude mean annual decline in eGFR was slowest among those without diabetes at baseline (decline of −1.6 mL/min/1.73 m2/year [95% CI −1.6 to −1.5]), faster among those with undiagnosed diabetes compared with those without diabetes (decline of −2.1 mL/min/1.73 m2/year [95% CI −2.2 to −2.0][…]), and nearly twice as rapid among those with diagnosed diabetes compared with those without diabetes (decline of −2.9 mL/min/1.73 m2/year [95% CI −3.0 to −2.8][…]). Adjustment for diabetes and kidney disease–related risk factors attenuated the results slightly, but those with undiagnosed and diagnosed diabetes still had statistically significantly steeper declines than those without diabetes (decline among no diabetes −1.4 mL/min/1.73 m2/year [95% CI −1.5 to −1.4] and decline among undiagnosed diabetes −1.8 mL/min/1.73 m2/year [95% CI −2.0 to −1.7], difference vs. no diabetes of −0.4 mL/min/1.73 m2/year [95% CI −0.5 to −0.3; P < 0.001]; decline among diagnosed diabetes −2.5 mL/min/1.73 m2/year [95% CI −2.6 to −2.4], difference vs. no diabetes of −1.1 mL/min/1.73 m2/ year [95% CI −1.2 to −1.0; P < 0.001]). […] The decline in eGFR per year varied greatly across individuals, particularly among those with diabetes at baseline […] Among participants with diagnosed diabetes at baseline, those who were black, had systolic blood pressure ≥140 mmHg, used diabetes medications, had an HbA1c ≥7% [≥53 mmol/mol], or had 1,5-anhydroglucitol <10 μg/mL were at risk for steeper annual declines than their counterparts […]. Smoking status and prevalent coronary heart disease were not associated with significantly steeper eGFR decline in unadjusted analyses. Adjustment for risk factors, diabetes medication use, and HbA1c attenuated the differences in decline for all subgroups with the exception of smoking status, leaving black race along with APOL1-susceptible genotype, systolic blood pressure ≥140 mmHg, current smoking, insulin use, and HbA1c ≥9% [≥75 mmol/mol] as the risk factors indicative of steeper decline.”

CONCLUSIONS Diabetes is an important risk factor for kidney function decline. Those with diagnosed diabetes declined almost twice as rapidly as those without diabetes. Among people with diagnosed diabetes, steeper declines were seen in those with modifiable risk factors, including hypertension and glycemic control, suggesting areas for continued targeting in kidney disease prevention. […] Few other community-based studies have evaluated differences in kidney function decline by diabetes status over a long period through mid- and late life. One study of 10,184 Canadians aged ≥66 years with creatinine measured during outpatient visits showed results largely consistent with our findings but with much shorter follow-up (median of 2 years) (19). Other studies of eGFR change in a general population have found smaller declines than our results (20,21). A study conducted in Japanese participants aged 40–79 years found a decline of only −0.4 mL/min/1.73 m2/year over the course of two assessments 10 years apart (compared with our estimate among those without diabetes: −1.6 mL/min/1.73 m2/year). This is particularly interesting, as Japan is known to have a higher prevalence of CKD and end-stage renal disease than the U.S. (20). However, this study evaluated participants over a shorter time frame and required attendance at both assessments, which may have decreased the likelihood of capturing severe cases and resulted in underestimation of decline.”

“The Baltimore Longitudinal Study of Aging also assessed kidney function over time in a general population of 446 men, ranging in age from 22 to 97 years at baseline, each with up to 14 measurements of creatinine clearance assessed between 1958 and 1981 (21). They also found a smaller decline than we did (−0.8 mL/min/year), although this study also had notable differences. Their main analysis excluded participants with hypertension and history of renal disease or urinary tract infection and those treated with diuretics and/or antihypertensive medications. Without those exclusions, their overall estimate was −1.1 mL/min/year, which better reflects a community-based population and our results. […] In our evaluation of risk factors that might explain the variation in decline seen among those with diagnosed diabetes, we observed that black race, systolic blood pressure ≥140 mmHg, insulin use, and HbA1c ≥9% (≥75 mmol/mol) were particularly important. Although the APOL1 high-risk genotype is a known risk factor for eGFR decline, African Americans with low-risk APOL1 status continued to be at higher risk than whites even after adjustment for traditional risk factors, diabetes medication use, and HbA1c.”

“Our results are relevant to the design and conduct of clinical trials. Hard clinical outcomes like end-stage renal disease are relatively rare, and a 30–40% decline in eGFR is now accepted as a surrogate end point for CKD progression (4). We provide data on patient subgroups that may experience accelerated trajectories of kidney function decline, which has implications for estimating sample size and ensuring adequate power in future clinical trials. Our results also suggest that end points of eGFR decline might not be appropriate for patients with new-onset diabetes, in whom declines may actually be slower than among persons without diabetes. Slower eGFR decline among those with undiagnosed diabetes, who are likely early in the course of diabetes, is consistent with the hypothesis of hyperfiltration. Similar to other studies, we found that persons with undiagnosed diabetes had higher GFR at the outset, but this was a transient phenomenon, as they ultimately experienced larger declines in kidney function than those without diabetes over the course of follow-up (23–25). Whether hyperfiltration is a universal aspect of early disease and, if not, whether it portends worse long-term outcomes is uncertain. Existing studies investigating hyperfiltration as a precursor to adverse kidney outcomes are inconsistent (24,26,27) and often confounded by diabetes severity factors like duration (27). We extended this literature by separating undiagnosed and diagnosed diabetes to help address that confounding.”

iii. Saturated Fat Is More Metabolically Harmful for the Human Liver Than Unsaturated Fat or Simple Sugars.

“OBJECTIVE Nonalcoholic fatty liver disease (i.e., increased intrahepatic triglyceride [IHTG] content) predisposes to type 2 diabetes and cardiovascular disease. Adipose tissue lipolysis and hepatic de novo lipogenesis (DNL) are the main pathways contributing to IHTG. We hypothesized that dietary macronutrient composition influences the pathways, mediators, and magnitude of weight gain-induced changes in IHTG.

RESEARCH DESIGN AND METHODS We overfed 38 overweight subjects (age 48 ± 2 years, BMI 31 ± 1 kg/m2, liver fat 4.7 ± 0.9%) 1,000 extra kcal/day of saturated (SAT) or unsaturated (UNSAT) fat or simple sugars (CARB) for 3 weeks. We measured IHTG (¹H-MRS), pathways contributing to IHTG (lipolysis ([²H₅]glycerol) and DNL (²H₂O) basally and during euglycemic hyperinsulinemia), insulin resistance, endotoxemia, plasma ceramides, and adipose tissue gene expression at 0 and 3 weeks.

RESULTS Overfeeding SAT increased IHTG more (+55%) than UNSAT (+15%, P < 0.05). CARB increased IHTG (+33%) by stimulating DNL (+98%). SAT significantly increased while UNSAT decreased lipolysis. SAT induced insulin resistance and endotoxemia and significantly increased multiple plasma ceramides. The diets had distinct effects on adipose tissue gene expression.”

“CONCLUSIONS NAFLD has been shown to predict type 2 diabetes and cardiovascular disease in multiple studies, even independent of obesity (1), and also to increase the risk of progressive liver disease (17). It is therefore interesting to compare effects of different diets on liver fat content and understand the underlying mechanisms. We examined whether provision of excess calories as saturated (SAT) or unsaturated (UNSAT) fats or simple sugars (CARB) influences the metabolic response to overfeeding in overweight subjects. All overfeeding diets increased IHTGs. The SAT diet induced a greater increase in IHTGs than the UNSAT diet. The composition of the diet altered sources of excess IHTGs. The SAT diet increased lipolysis, whereas the CARB diet stimulated DNL. The SAT but not the other diets increased multiple plasma ceramides, which increase the risk of cardiovascular disease independent of LDL cholesterol (18). […] Consistent with current dietary recommendations (36–38), the current study shows that saturated fat is the most harmful dietary constituent regarding IHTG accumulation.”

iv. Primum Non Nocere: Refocusing Our Attention on Severe Hypoglycemia Prevention.

“Severe hypoglycemia, defined as low blood glucose requiring assistance for recovery, is arguably the most dangerous complication of type 1 diabetes as it can result in permanent cognitive impairment, seizure, coma, accidents, and death (1,2). Since the Diabetes Control and Complications Trial (DCCT) demonstrated that intensive intervention to normalize glucose prevents long-term complications but at the price of a threefold increase in the rate of severe hypoglycemia (3), hypoglycemia has been recognized as the major limitation to achieving tight glycemic control. Severe hypoglycemia remains prevalent among adults with type 1 diabetes, ranging from ∼1.4% per year in the DCCT/EDIC (Epidemiology of Diabetes Interventions and Complications) follow-up cohort (4) to ∼8% in the T1D Exchange clinic registry (5).

One of the greatest risk factors for severe hypoglycemia is impaired awareness of hypoglycemia (6), which increases risk up to sixfold (7,8). Hypoglycemia unawareness results from deficient counterregulation (9), where falling glucose fails to activate the autonomic nervous system to produce neuroglycopenic symptoms that normally help patients identify and respond to episodes (i.e., sweating, palpitations, hunger) (2). An estimated 20–25% of adults with type 1 diabetes have impaired hypoglycemia awareness (8), which increases to more than 50% after 25 years of disease duration (10).

Screening for hypoglycemia unawareness to identify patients at increased risk of severe hypoglycemic events should be part of routine diabetes care. Self-identified impairment in awareness tends to agree with clinical evaluation (11). Therefore, hypoglycemia unawareness can be easily and effectively screened […] Interventions for hypoglycemia unawareness include a range of behavioral and medical options. Avoiding hypoglycemia for at least several weeks may partially reverse hypoglycemia unawareness and reduce risk of future episodes (1). Therefore, patients with hypoglycemia and unawareness may be advised to raise their glycemic and HbA1c targets (1,2). Diabetes technology can play a role, including continuous subcutaneous insulin infusion (CSII) to optimize insulin delivery, continuous glucose monitoring (CGM) to give technological awareness in the absence of symptoms (14), or the combination of the two […] Aside from medical management, structured or hypoglycemia-specific education programs that aim to prevent hypoglycemia are recommended for all patients with severe hypoglycemia or hypoglycemia unawareness (14). In randomized trials, psychoeducational programs that incorporate increased education, identification of personal risk factors, and behavior change support have improved hypoglycemia unawareness and reduced the incidence of both nonsevere and severe hypoglycemia over short periods of follow-up (17,18) and extending up to 1 year (19).”

“Given that the presence of hypoglycemia unawareness increases the risk of severe hypoglycemia, which is the strongest predictor of a future episode (2,4), the implication that intervention can break the life-threatening and traumatizing cycle of hypoglycemia unawareness and severe hypoglycemia cannot be overstated. […] new evidence of durability of effect across treatment regimen without increasing the risk for long-term complications creates an imperative for action. In combination with existing screening tools and a body of literature investigating novel interventions for hypoglycemia unawareness, these results make the approach of screening, recognition, and intervention very compelling as not only a best practice but something that should be incorporated in universal guidelines on diabetes care, particularly for individuals with type 1 diabetes […] Hyperglycemia is […] only part of the puzzle in diabetes management. Long-term complications are decreasing across the population with improved interventions and their implementation (24). […] it is essential to shift our historical obsession with hyperglycemia and its long-term complications to equally emphasize the disabling, distressing, and potentially fatal near-term complication of our treatments, namely severe hypoglycemia. […] The health care providers’ first dictum is primum non nocere — above all, do no harm. ADA must refocus our attention on severe hypoglycemia as an iatrogenic and preventable complication of our interventions.”

v. Anti‐vascular endothelial growth factor combined with intravitreal steroids for diabetic macular oedema.

“Background

The combination of steroid and anti‐vascular endothelial growth factor (VEGF) intravitreal therapeutic agents could potentially have synergistic effects for treating diabetic macular oedema (DMO). On the one hand, if combined treatment is more effective than monotherapy, there would be significant implications for improving patient outcomes. Conversely, if there is no added benefit of combination therapy, then people could be potentially exposed to unnecessary local or systemic side effects.

Objectives

To assess the effects of intravitreal agents that block vascular endothelial growth factor activity (anti‐VEGF agents) plus intravitreal steroids versus monotherapy with macular laser, intravitreal steroids or intravitreal anti‐VEGF agents for managing DMO.”

“There were eight RCTs (703 participants, 817 eyes) that met our inclusion criteria with only three studies reporting outcomes at one year. The studies took place in Iran (3), USA (2), Brazil (1), Czech Republic (1) and South Korea (1). […] When comparing anti‐VEGF/steroid with anti‐VEGF monotherapy as primary therapy for DMO, we found no meaningful clinical difference in change in BCVA [best corrected visual acuity] […] or change in CMT [central macular thickness] […] at one year. […] There was very low‐certainty evidence on intraocular inflammation from 8 studies, with one event in the anti‐VEGF/steroid group (313 eyes) and two events in the anti‐VEGF group (322 eyes). There was a greater risk of raised IOP (Peto odds ratio (OR) 8.13, 95% CI 4.67 to 14.16; 635 eyes; 8 RCTs; moderate‐certainty evidence) and development of cataract (Peto OR 7.49, 95% CI 2.87 to 19.60; 635 eyes; 8 RCTs; moderate‐certainty evidence) in eyes receiving anti‐VEGF/steroid compared with anti‐VEGF monotherapy. There was low‐certainty evidence from one study of an increased risk of systemic adverse events in the anti‐VEGF/steroid group compared with the anti‐VEGF alone group (Peto OR 1.32, 95% CI 0.61 to 2.86; 103 eyes).”

“One study compared anti‐VEGF/steroid versus macular laser therapy. At one year investigators did not report a meaningful difference between the groups in change in BCVA […] or change in CMT […]. There was very low‐certainty evidence suggesting an increased risk of cataract in the anti‐VEGF/steroid group compared with the macular laser group (Peto OR 4.58, 95% CI 0.99 to 21.10, 100 eyes) and an increased risk of elevated IOP in the anti‐VEGF/steroid group compared with the macular laser group (Peto OR 9.49, 95% CI 2.86 to 31.51; 100 eyes).”

“Authors’ conclusions

Combination of intravitreal anti‐VEGF plus intravitreal steroids does not appear to offer additional visual benefit compared with monotherapy for DMO; at present the evidence for this is of low‐certainty. There was an increased rate of cataract development and raised intraocular pressure in eyes treated with anti‐VEGF plus steroid versus anti‐VEGF alone. Patients were exposed to potential side effects of both these agents without reported additional benefit.”

vi. Association between diabetic foot ulcer and diabetic retinopathy.

“More than 25 million people in the United States are estimated to have diabetes mellitus (DM), and 15–25% will develop a diabetic foot ulcer (DFU) during their lifetime [1]. DFU is one of the most serious and disabling complications of DM, resulting in significantly elevated morbidity and mortality. Vascular insufficiency and associated neuropathy are important predisposing factors for DFU, and DFU is the most common cause of non-traumatic foot amputation worldwide. Up to 70% of all lower leg amputations are performed on patients with DM, and up to 85% of all amputations are preceded by a DFU [2, 3]. Every year, approximately 2–3% of all diabetic patients develop a foot ulcer, and many require prolonged hospitalization for the treatment of ensuing complications such as infection and gangrene [4, 5].

Meanwhile, a number of studies have noted that diabetic retinopathy (DR) is associated with diabetic neuropathy and microvascular complications [6–10]. Despite the magnitude of the impact of DFUs and their consequences, little research has been performed to investigate the characteristics of patients with a DFU and DR. […] the aim of this study was to investigate the prevalence of DR in patients with a DFU and to elucidate the potential association between DR and DFUs.”

“A retrospective review was conducted on DFU patients who underwent ophthalmic and vascular examinations within 6 months; 100 type 2 diabetic patients with DFU were included. The medical records of 2496 type 2 diabetic patients without DFU served as control data. DR prevalence and severity were assessed in DFU patients. DFU patients were compared with the control group regarding each clinical variable. Additionally, DFU patients were divided into two groups according to DR severity and compared. […] Out of 100 DFU patients, 90 patients (90%) had DR and 55 (55%) had proliferative DR (PDR). There was no significant association between DR and DFU severities (R = 0.034, p = 0.734). A multivariable analysis comparing type 2 diabetic patients with and without DFUs showed that the presence of DR [OR, 226.12; 95% confidence interval (CI), 58.07–880.49; p < 0.001] and proliferative DR [OR, 306.27; 95% CI, 64.35–1457.80; p < 0.001], higher HbA1c (%, OR, 1.97, 95% CI, 1.46–2.67; p < 0.001), higher serum creatinine (mg/dL, OR, 1.62, 95% CI, 1.06–2.50; p = 0.027), older age (years, OR, 1.12; 95% CI, 1.06–1.17; p < 0.001), higher pulse pressure (mmHg, OR, 1.03; 95% CI, 1.00–1.06; p = 0.025), lower cholesterol (mg/dL, OR, 0.94; 95% CI, 0.92–0.97; p < 0.001), lower BMI (kg/m2, OR, 0.87, 95% CI, 0.75–1.00; p = 0.044) and lower hematocrit (%, OR, 0.80, 95% CI, 0.74–0.87; p < 0.001) were associated with DFUs. In a subgroup analysis of DFU patients, the PDR group had a longer duration of diabetes mellitus, higher serum BUN, and higher serum creatinine than the non-PDR group. In the multivariable analysis, only higher serum creatinine was associated with PDR in DFU patients (OR, 1.37; 95% CI, 1.05–1.78; p = 0.021).

Conclusions

Diabetic retinopathy is prevalent in patients with DFU and about half of DFU patients had PDR. No significant association was found in terms of the severity of these two diabetic complications. To prevent blindness, patients with DFU, and especially those with high serum creatinine, should undergo retinal examinations for timely PDR diagnosis and management.”
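
As a brief methodological aside (mine, not the paper’s): odds ratios and confidence intervals of the kind reported above typically come from a multivariable logistic regression in which the exponentiated coefficients are reported as ORs. A generic sketch with made-up variable names and random data, purely to show the mechanics:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data: 'dfu' is the outcome (1 = diabetic foot ulcer), the rest are covariates.
rng = np.random.default_rng(1)
m = 2000
data = pd.DataFrame({
    "dfu": rng.integers(0, 2, m),
    "dr": rng.integers(0, 2, m),            # diabetic retinopathy present (0/1)
    "hba1c": rng.normal(8.0, 1.5, m),
    "creatinine": rng.normal(1.1, 0.4, m),
    "age": rng.normal(60.0, 10.0, m),
})

fit = smf.logit("dfu ~ dr + hba1c + creatinine + age", data=data).fit(disp=False)

# Exponentiated coefficients are the adjusted odds ratios; exponentiating the
# confidence limits of the coefficients gives the 95% CIs reported in such tables.
ci = fit.conf_int()
table = pd.DataFrame({"OR": np.exp(fit.params),
                      "CI_low": np.exp(ci[0]),
                      "CI_high": np.exp(ci[1]),
                      "p": fit.pvalues})
print(table.round(3))
```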

August 29, 2018 Posted by | Diabetes, Epidemiology, Genetics, Medicine, Molecular biology, Nephrology, Ophthalmology, Statistics, Studies | Leave a comment

Nephrology Board Review

Some links related to the lecture’s coverage:

Diabetic nephropathy.
Henoch–Schönlein purpura.
Leukocytoclastic Vasculitis.
Glomerulonephritis. Rapidly progressive glomerulonephritis.
Nephrosis.
Analgesic nephropathy.
Azotemia.
Allergic Interstitial Nephritis: Clinical Features and Pathogenesis.
Nonsteroidal anti-inflammatory drugs: effects on kidney function (Whelton & Hamilton, J Clin Pharmacol. 1991 Jul;31(7):588-98).
Goodpasture syndrome.
Creatinine. Limitations of serum creatinine as a marker of renal function.
Hyperkalemia.
U wave.
Nephrolithiasis. Calcium oxalate.
Calcium gluconate.
Bicarbonate.
Effect of various therapeutic approaches on plasma potassium and major regulating factors in terminal renal failure (Blumberg et al., 1988).
Effect of prolonged bicarbonate administration on plasma potassium in terminal renal failure (Blumberg et al., 1992).
Renal tubular acidosis.
Urine anion gap.
Metabolic acidosis.
Contrast-induced nephropathy.
Rhabdomyolysis.
Lipiduria. Urinary cast.
Membranous glomerulonephritis.
Postinfectious glomerulonephritis.

August 28, 2018 Posted by | Cardiology, Chemistry, Diabetes, Lectures, Medicine, Nephrology, Pharmacology, Studies | Leave a comment

Circadian Rhythms (I)

“Circadian rhythms are found in nearly every living thing on earth. They help organisms time their daily and seasonal activities so that they are synchronized to the external world and the predictable changes in the environment. These biological clocks provide a cross-cutting theme in biology and they are incredibly important. They influence everything, from the way growing sunflowers track the sun from east to west, to the migration timing of monarch butterflies, to the morning peaks in cardiac arrest in humans. […] Years of work underlie most scientific discoveries. Explaining these discoveries in a way that can be understood is not always easy. We have tried to keep the general reader in mind but in places perseverance on the part of the reader may be required. In the end we were guided by one of our reviewers, who said: ‘If you want to understand calculus you have to show the equations.’”

The above quote is from the book’s foreword. I really liked this book and I was close to giving it five stars on goodreads. Below I have added some observations and links related to the first few chapters of the book’s coverage (as noted in my review on goodreads the second half of the book is somewhat technical, and I’ve not yet decided if I’ll be blogging that part of the book in much detail, if at all).

“There have been over a trillion dawns and dusks since life began some 3.8 billion years ago. […] This predictable daily solar cycle results in regular and profound changes in environmental light, temperature, and food availability as day follows night. Almost all life on earth, including humans, employs an internal biological timer to anticipate these daily changes. The possession of some form of clock permits organisms to optimize physiology and behaviour in advance of the varied demands of the day/night cycle. Organisms effectively ‘know’ the time of day. Such internally generated daily rhythms are called ‘circadian rhythms’ […] Circadian rhythms are embedded within the genomes of just about every plant, animal, fungus, algae, and even cyanobacteria […] Organisms that use circadian rhythms to anticipate the rotation of the earth are thought to have a major advantage over both their competitors and predators. For example, it takes about 20–30 minutes for the eyes of fish living among coral reefs to switch vision from the night to daytime state. A fish whose eyes are prepared in advance for the coming dawn can exploit the new environment immediately. The alternative would be to wait for the visual system to adapt and miss out on valuable activity time, or emerge into a world where it would be more difficult to avoid predators or catch prey until the eyes have adapted. Efficient use of time to maximize survival almost certainly provides a large selective advantage, and consequently all organisms seem to be led by such anticipation. A circadian clock also stops everything happening within an organism at the same time, ensuring that biological processes occur in the appropriate sequence or ‘temporal framework’. For cells to function properly they need the right materials in the right place at the right time. Thousands of genes have to be switched on and off in order and in harmony. […] All of these processes, and many others, take energy and all have to be timed to best effect by the millisecond, second, minute, day, and time of year. Without this internal temporal compartmentalization and its synchronization to the external environment our biology would be in chaos. […] However, to be biologically useful, these rhythms must be synchronized or entrained to the external environment, predominantly by the patterns of light produced by the earth’s rotation, but also by other rhythmic changes within the environment such as temperature, food availability, rainfall, and even predation. These entraining signals, or time-givers, are known as zeitgebers. The key point is that circadian rhythms are not driven by an external cycle but are generated internally, and then entrained so that they are synchronized to the external cycle.”

“It is worth emphasizing that the concept of an internal clock, as developed by Richter and Bünning, has been enormously powerful in furthering our understanding of biological processes in general, providing a link between our physiological understanding of homeostatic mechanisms, which try to maintain a constant internal environment despite unpredictable fluctuations in the external environment […], versus the circadian system which enables organisms to anticipate periodic changes in the external environment. The circadian system provides a predictive 24-hour baseline in physiological parameters, which is then either defended or temporarily overridden by homeostatic mechanisms that accommodate an acute environmental challenge. […] Zeitgebers and the entrainment pathway synchronize the internal day to the astronomical day, usually via the light/dark cycle, and multiple output rhythms in physiology and behaviour allow appropriately timed activity. The multitude of clocks within a multicellular organism can all potentially tick with a different phase angle […], but usually they are synchronized to each other and by a central pacemaker which is in turn entrained to the external world via appropriate zeitgebers. […] Most biological reactions vary greatly with temperature and show a Q10 temperature coefficient of about 2 […]. This means that the biological process or reaction rate doubles as a consequence of increasing the temperature by 10°C up to a maximum temperature at which the biological reaction stops. […] a 10°C temperature increase doubles muscle performance. By contrast, circadian rhythms exhibit a Q10 close to 1 […] Clocks without temperature compensation are useless. […] Although we know that circadian clocks show temperature compensation, and that this phenomenon is a conserved feature across all circadian rhythms, we have little idea how this is achieved.”
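
To make the Q10 idea concrete (my own worked example, not from the book): the coefficient is computed from two rates R1 and R2 measured at temperatures T1 and T2 as Q10 = (R2/R1)^(10/(T2 − T1)), so a value of 2 means the rate doubles per 10°C, while a temperature-compensated clock stays close to 1.

```python
def q10(rate1, rate2, temp1, temp2):
    """Temperature coefficient: Q10 = (R2 / R1) ** (10 / (T2 - T1))."""
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

# A typical biochemical reaction: the rate doubles between 20°C and 30°C.
print(q10(1.0, 2.0, 20.0, 30.0))            # 2.0

# A temperature-compensated circadian clock: using 1/period as the 'rate',
# a free-running period of 24.5 h at 20°C and 24.0 h at 30°C gives Q10 of about 1.02.
print(q10(1 / 24.5, 1 / 24.0, 20.0, 30.0))  # ~1.02
```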

“The systematic study of circadian rhythms only really started in the 1950s, and the pioneering studies of Colin Pittendrigh brought coherence to this emerging new discipline. […] From [a] mass of emerging data, Pittendrigh had key insights and defined the essential properties of circadian rhythms across all life. Namely that: all circadian rhythms are endogenous and show near 24-hour rhythms in a biological process (biochemistry, physiology, or behaviour); they persist under constant conditions for several cycles; they are entrained to the astronomical day via synchronizing zeitgebers; and they show temperature compensation such that the period of the oscillation does not alter appreciably with changes in environmental temperature. Much of the research since the 1950s has been the translation of these formalisms into biological structures and processes, addressing such questions as: What is the clock and where is it located within the intracellular processes of the cell? How can a set of biochemical reactions produce a regular self-sustaining rhythm that persists under constant conditions and has a period of about 24 hours? How is this internal oscillation synchronized by zeitgebers such as light to the astronomical day? Why is the clock not altered by temperature, speeding up when the environment gets hotter and slowing down in the cold? How is the information of the near 24-hour rhythm communicated to the rest of the organism?”

“There have been hundreds of studies showing that a broad range of activities, both physical and cognitive, vary across the 24-hour day: tooth pain is lowest in the morning; proofreading is best performed in the evening; labour pains usually begin at night and most natural births occur in the early morning hours. The accuracy of short and long badminton serves is higher in the afternoon than in the morning and evening. Accuracy of first serves in tennis is better in the morning and afternoon than in the evening, although speed is higher in the evening than in the morning. Swimming velocity over 50 metres is higher in the evening than in the morning and afternoon. […] The majority of studies report that performance increases from morning to afternoon or evening. […] Typical ‘optimal’ times of day for physical or cognitive activity are gathered routinely from population studies […]. However, there is considerable individual variation. Peak performance will depend upon age, chronotype, time zone, and for behavioural tasks how many hours the participant has been awake when conducting the task, and even the nature of the task itself. As a general rule, the circadian modulation of cognitive functioning results in an improved performance over the day for younger adults, while in older subjects it deteriorates. […] On average the circadian rhythms of an individual in their late teens will be delayed by around two hours compared with an individual in their fifties. As a result the average teenager experiences considerable social jet lag, and asking a teenager to get up at 07.00 in the morning is the equivalent of asking a 50-year-old to get up at 05.00 in the morning.”

“Day versus night variations in blood pressure and heart rate are among the best-known circadian rhythms of physiology. In humans, there is a 24-hour variation in blood pressure with a sharp rise before awakening […]. Many cardiovascular events, such as sudden cardiac death, myocardial infarction, and stroke, display diurnal variations with an increased incidence between 06.00 and 12.00 in the morning. Both atrial and ventricular arrhythmias appear to exhibit circadian patterning as well, with a higher frequency during the day than at night. […] Myocardial infarction (MI) is two to three times more frequent in the morning than at night. In the early morning, the increased systolic blood pressure and heart rate results in an increased energy and oxygen demand by the heart, while the vascular tone of the coronary artery rises in the morning, resulting in a decreased coronary blood flow and oxygen supply. This mismatch between supply and demand underpins the high frequency of onset of MI. Plaque blockages are more likely to occur in the morning as platelet surface activation markers have a circadian pattern producing a peak of thrombus formation and platelet aggregation. The resulting hypercoagulability partially underlies the morning onset of MI.”

“A critical area where time of day matters to the individual is the optimum time to take medication, a branch of medicine that has been termed ‘chronotherapy’. Statins are a family of cholesterol-lowering drugs which inhibit HMG-CoA reductase (HMGCR) […] HMGCR is under circadian control and is highest at night. Hence those statins with a short half-life, such as simvastatin and lovastatin, are most effective when taken before bedtime. In another clinical domain entirely, recent studies have shown that anti-flu vaccinations given in the morning provoke a stronger immune response than those given in the afternoon. The idea of using chronotherapy to improve the efficacy of anti-cancer drugs has been around for the best part of 30 years. […] In experimental models more than thirty anti-cancer drugs have been found to vary in toxicity and efficacy by as much as 50 per cent as a function of time of administration. Although Lévi and others have shown the advantages to treating individual patients by different timing regimes, few hospitals have taken it up. One reason is that the best time to apply many of these treatments is late in the day or during the night, precisely when most hospitals lack the infrastructure and personnel to deliver such treatments.”

“Flying across multiple time zones and shift work has significant economic benefits, but the costs in terms of ill health are only now becoming clear. Sleep and circadian rhythm disruption (SCRD) is almost always associated with poor health. […] The impact of jet lag has long been known by elite athletes […] even when superbly fit individuals fly across time zones there is a very prolonged disturbance of circadian-driven rhythmic physiology. […] Horses also suffer from jet lag. […] Even bees can get jet lag. […] The misalignments that occur as a result of the occasional transmeridian flight are transient. Shift working represents a chronic misalignment. […] Nurses are one of the best-studied groups of night shift workers. Years of shift work in these individuals has been associated with a broad range of health problems including type II diabetes, gastrointestinal disorders, and even breast and colorectal cancers. Cancer risk increases with the number of years of shift work, the frequency of rotating work schedules, and the number of hours per week working at night [For people who are interested to know more about this, I previously covered a text devoted exclusively to these topics here and here.]. The correlations are so strong that shift work is now officially classified as ‘probably carcinogenic [Group 2A]’ by the World Health Organization. […] the partners and families of night shift workers need to be aware that mood swings, loss of empathy, and irritability are common features of working at night.”

“There are some seventy sleep disorders recognized by the medical community, of which four have been labelled as ‘circadian rhythm sleep disorders’ […] (1) Advanced sleep phase disorder (ASPD) […] is characterized by difficulty staying awake in the evening and difficulty staying asleep in the morning. Typically individuals go to bed and rise about three or more hours earlier than the societal norm. […] (2) Delayed sleep phase disorder (DSPD) is a far more frequent condition and is characterized by a 3-hour delay or more in sleep onset and offset and is a sleep pattern often found in some adolescents and young adults. […] ASPD and DSPD can be considered as pathological extremes of morning or evening preferences […] (3) Freerunning or non-24-hour sleep/wake rhythms occur in blind individuals who have either had their eyes completely removed or who have no neural connection from the retina to the brain. These people are not only visually blind but are also circadian blind. Because they have no means of detecting the synchronizing light signals they cannot reset their circadian rhythms, which freerun with a period of about 24 hours and 10 minutes. So, after six days, internal time is on average 1 hour behind environmental time. (4) Irregular sleep timing has been observed in individuals who lack a circadian clock as a result of a tumour in their anterior hypothalamus […]. Irregular sleep timing is [also] commonly found in older people suffering from dementia. It is an extremely important condition because one of the major factors in caring for those with dementia is the exhaustion of the carers which is often a consequence of the poor sleep patterns of those for whom they are caring. Various protocols have been attempted in nursing homes using increased light in the day areas and darkness in the bedrooms to try and consolidate sleep. Such approaches have been very successful in some individuals […] Although insomnia is the commonly used term to describe sleep disruption, technically insomnia is not a ‘circadian rhythm sleep disorder’ but rather a general term used to describe irregular or disrupted sleep. […] Insomnia is described as a ‘psychophysiological’ condition, in which mental and behavioural factors play predisposing, precipitating, and perpetuating roles. The factors include anxiety about sleep, maladaptive sleep habits, and the possibility of an underlying vulnerability in the sleep-regulating mechanism. […] Even normal ‘healthy ageing’ is associated with both circadian rhythm sleep disorders and insomnia. Both the generation and regulation of circadian rhythms have been shown to become less robust with age, with blunted amplitudes and abnormal phasing of key physiological processes such as core body temperature, metabolic processes, and hormone release. Part of the explanation may relate to a reduced light signal to the clock […]. In the elderly, the photoreceptors of the eye are often exposed to less light because of the development of cataracts and other age-related eye disease. Both these factors have been correlated with increased SCRD.”

“Circadian rhythm research has mushroomed in the past twenty years, and has provided a much greater understanding of the impact of both imposed and illness-related SCRD. We now appreciate that our increasingly 24/7 society and social disregard for biological time is having a major impact upon our health. Understanding has also been gained about the relationship between SCRD and a spectrum of different illnesses. SCRD in illness is not simply the inconvenience of being unable to sleep at an appropriate time but is an agent that exacerbates or causes serious health problems.”

Links:

Circadian rhythm.
Acrophase.
Phase (waves). Phase angle.
Jean-Jacques d’Ortous de Mairan.
Heliotropism.
Kymograph.
John Harrison.
Munich Chronotype Questionnaire.
Chronotype.
Seasonal affective disorder. Light therapy.
Parkinson’s disease. Multiple sclerosis.
Melatonin.

August 25, 2018 Posted by | Biology, Books, Cancer/oncology, Cardiology, Medicine | Leave a comment

Combinatorics (II)

I really liked this book. Below I have added some links and quotes related to the second half of the book’s coverage.

“An n × n magic square, or a magic square of order n, is a square array of numbers — usually (but not necessarily) the numbers from 1 to n² — arranged in such a way that the sum of the numbers in each of the n rows, each of the n columns, or each of the two main diagonals is the same. A semi-magic square is a square array in which the sum of the numbers in each row or column, but not necessarily the diagonals, is the same. We note that if the entries are 1 to n², then the sum of the numbers in the whole array is
1 + 2 + 3 + … + n² = n²(n² + 1) / 2
on summing the arithmetic progression. Because the n rows and columns have the same ‘magic sum’, the numbers in each single row or column add up to (1/n)th of this, which is n(n² + 1) / 2 […] An n × n latin square, or a latin square of order n, is a square array with n symbols arranged so that each symbol appears just once in each row and column. […] Given a latin square, we can obtain others by rearranging the rows or the columns, or by permuting the symbols. For an n × n latin square with symbols 1, 2, … , n, we can thereby arrange that the numbers in the first row and the first column appear in order as 1, 2, … , n. Such a latin square is called normalized […] A familiar form of latin square is the sudoku puzzle […] How many n × n latin squares are there for a given order of n? The answer is known only for n ≤ 11. […] The number of normalized latin squares of order 11 has an impressive forty-eight digits.”
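
(As a quick check of the magic-sum formula: for n = 3 the numbers 1 to 9 sum to 9 × 10 / 2 = 45, so each of the three rows, and likewise each column and diagonal, must add up to 45 / 3 = 3(3² + 1)/2 = 15, the magic sum of the familiar 3 × 3 Lo Shu square.)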

“A particular type of latin square is the cyclic square, where the symbols appear in the same cyclic order, moving one place to the left in each successive row, so that the entry at the beginning of each line appears at the end of the next one […] An extension of this idea is where the symbols move more places to the left in each successive row […] We can construct a latin square row by row from its first row, always taking care that no symbol appears twice in any column. […] An important concept […] is that of a set of orthogonal latin squares […] two n × n latin squares are orthogonal if, when superimposed, each of the n² possible pairings of a symbol from each square appears exactly once. […] pairs of orthogonal latin squares are […] used in agricultural experiments. […] We can extend the idea of orthogonality beyond pairs […] A set of mutually orthogonal latin squares (sometimes abbreviated to MOLS) is a set of latin squares, any two of which are orthogonal […] Note that there can be at most n − 1 MOLS of order n. […] A full set of MOLS is called a complete set […] We can ask the following question: For which values of n does there exist a complete set of n × n mutually orthogonal latin squares? As several authors have shown, a complete set exists whenever n is a prime number (other than 2) or a power of a prime […] In 1922, H. F. MacNeish generalized this result by observing that if n has prime factorization p1^a × p2^b × … × pk^z, then the number of MOLS is at least min(p1^a, p2^b, … , pk^z) − 1.”
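
The prime case mentioned at the end of the quote has a particularly tidy construction: for a prime p, the p − 1 squares defined by L_k[i][j] = (k·i + j) mod p, for k = 1, …, p − 1, are mutually orthogonal, and the k = 1 square is just the cyclic square described above. A small Python sketch of this standard construction (my own illustration, not code from the book):

```python
def mols_prime(p):
    """Complete set of p - 1 mutually orthogonal latin squares for a prime p,
    using the classical construction L_k[i][j] = (k*i + j) mod p."""
    return [[[(k * i + j) % p for j in range(p)] for i in range(p)]
            for k in range(1, p)]

def is_latin(square):
    n = len(square)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[i][j] for i in range(n)} == symbols for j in range(n))
    return rows_ok and cols_ok

def orthogonal(a, b):
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n   # every ordered pair of symbols appears exactly once

squares = mols_prime(5)
assert all(is_latin(s) for s in squares)
assert all(orthogonal(squares[i], squares[j])
           for i in range(len(squares)) for j in range(i + 1, len(squares)))
print(f"built {len(squares)} mutually orthogonal 5 x 5 latin squares")
```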

“Consider the following [problem] involving comparisons between a number of varieties of a commodity: A consumer organization wishes to compare seven brands of detergent and arranges a number of tests. But since it may be uneconomic or inconvenient for each tester to compare all seven brands it is decided that each tester should compare just three brands. How should the trials be organized if each brand is to be tested the same number of times and each pair of brands is to be compared directly? […] A block design consists of a set of v varieties arranged into b blocks. […] [if we] further assume that each block contains the same number k of varieties, and each variety appears in the same number r of blocks […] [the design is] called [an] equireplicate design […] for every block design we have v × r = b × k. […] It would clearly be preferable if all pairs of varieties in a design were compared the same number of times […]. Such a design is called balanced, or a balanced incomplete-block design (often abbreviated to BIBD). The number of times that any two varieties are compared is usually denoted by λ […] In a balanced block design the parameters v, b, k, r, and λ are not independent […] [Rather it is the case that:] r × (k − 1) = λ × (v − 1). […] The conditions v × r = b × k and r × (k − 1) = λ × (v − 1) are both necessary for a design to be balanced, but they’re not sufficient since there are designs satisfying both conditions which are not balanced. Another necessary condition for a design to be balanced is v ≤ b, a result known as Fisher’s inequality […] A balanced design for which v = b, and therefore k = r, is called a symmetric design.”
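
Plugging the detergent example into these identities: with v = 7 brands, block size k = 3, and λ = 1, the relation r × (k − 1) = λ × (v − 1) gives r = 6/2 = 3 tests per brand, and v × r = b × k then gives b = 21/3 = 7 blocks, so v = b and the design is symmetric. The short Python sketch below (my own illustration, not from the book) derives these numbers and verifies one explicit solution, a Steiner triple system on seven points:

```python
from itertools import combinations

# Derive the remaining parameters of a balanced design from v, k, lambda.
v, k, lam = 7, 3, 1
r = lam * (v - 1) // (k - 1)   # r(k - 1) = lambda(v - 1)  ->  r = 3
b = v * r // k                 # v*r = b*k                 ->  b = 7
print(f"v={v}, b={b}, k={k}, r={r}, lambda={lam}")  # symmetric, since v == b

# One explicit balanced design with these parameters (a Steiner triple system):
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
          {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# Check that every pair of the 7 'brands' occurs together in exactly one block.
pair_counts = {pair: sum(pair <= block for block in blocks)
               for pair in map(frozenset, combinations(range(1, 8), 2))}
assert all(count == lam for count in pair_counts.values())
# Check that every brand appears in exactly r blocks.
assert all(sum(x in block for block in blocks) == r for x in range(1, 8))
print("all 21 pairs compared exactly once; each brand tested 3 times")
```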

“A block design with v varieties is resolvable if its blocks can be rearranged into subdesigns, called replicates, each of which contains every variety just once. […] we define a finite projective plane to be an arrangement of a finite number of points and a finite number of lines with the properties that: [i] Any two points lie on exactly one line. [ii] Any two lines pass through exactly one point.
Note that this differs from our usual Euclidean geometry, where any two lines pass through exactly one point unless they’re parallel. Omitting these italicized words produces a completely different type of geometry from the one we’re used to, since there’s now a ‘duality’ or symmetry between points and lines, according to which any statement about points lying on lines gives rise to a statement about lines passing through points, and vice versa. […] We say that the finite projective plane has order n if each line contains n + 1 points. […] removing a single line from a projective plane of order n, and the n + 1 points on this line, gives a square pattern with n2 points and n2 + n lines where each line contains n points and each point lies on n + 1 lines. Such a diagram is called an affine plane of order n. […] This process is reversible. If we start with an affine plane of order n and add another line joined up appropriately, we get a projective plane of order n. […] Every finite projective plane gives rise to a symmetric balanced design. […] In general, a finite projective plane of order n, with n2 + n + 1 points and lines and with n + 1 points on each line and n + 1 lines through each point, gives rise to a balanced symmetric design with parameters v = b = n2 + n + 1, k = r = n + 1, and λ = 1. […] Every finite affine plane gives rise to a resolvable design. […] In general, an affine plane of order n, obtained by removing a line and n + 1 points from a projective plane of order n, gives rise to a resolvable design with parameters v = n2 , b = n2 + n , k = n , and r = n + 1. […] Every finite affine plane corresponds to a complete set of orthogonal latin squares.”
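
These finite geometries are also easy to verify by brute force. The sketch below (my own, and it assumes the order n is prime so that arithmetic modulo n behaves nicely) constructs the affine plane of order n = 5 and confirms the properties quoted above: n² points, n² + n lines, n points on each line, n + 1 lines through each point, resolvability into parallel classes, and the fact that any two points lie on exactly one line:

from itertools import combinations

n = 5  # assumed prime
points = [(x, y) for x in range(n) for y in range(n)]

lines, classes = [], []
for m in range(n):  # one parallel class ('replicate') per slope m: the lines y = m*x + c
    cls = [frozenset((x, (m * x + c) % n) for x in range(n)) for c in range(n)]
    classes.append(cls)
    lines += cls
vertical = [frozenset((c, y) for y in range(n)) for c in range(n)]  # the vertical lines x = c
classes.append(vertical)
lines += vertical

print(len(points), len(lines))                        # 25 30, i.e. n² points and n² + n lines
print({len(L) for L in lines})                        # {5}: each line contains n points
print({sum(p in L for L in lines) for p in points})   # {6}: each point lies on n + 1 lines
# resolvability: each parallel class partitions the whole set of points
print(all(sorted(q for L in cls for q in L) == sorted(points) for cls in classes))  # True
# any two points lie on exactly one common line
print({sum(p in L and q in L for L in lines) for p, q in combinations(points, 2)})  # {1}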

Links:

Regular polygon.
Polyhedron.
Internal and external angles.
Triangular tiling. Square tiling. Hexagonal tiling.
Semiregular tessellations.
Penrose tiling.
Platonic solid.
Euler’s polyhedron formula.
Prism (geometry). Antiprism.
Fullerene.
Geodesic dome.
Graph theory.
Complete graph. Complete bipartite graph. Cycle graph.
Degree (graph theory).
Handshaking lemma.
Ramsey theory.
Tree (graph theory).
Eulerian and Hamiltonian Graphs. Hamiltonian path.
Icosian game.
Knight’s tour problem.
Planar graph. Euler’s formula for plane graphs.
Kuratowski’s theorem.
Dual graph.
Lo Shu Square.
Melencolia I.
Euler’s Thirty-six officers problem.
Steiner triple system.
Partition (number theory).
Pentagonal number. Pentagonal number theorem.
Ramanujan’s congruences.

August 23, 2018 Posted by | Books, Mathematics, Statistics | Leave a comment

Personal Relationships… (III)

Some more observations from the book below:

Early research on team processes […] noted that for teams to be effective members must minimize “process losses” and maximize “process gains” — that is, identify ways the team can collectively perform at a level that exceeds the average potential of individual members. To do so, teams need to minimize interpersonal disruptions and maximize interpersonal facilitation among its members […] the prevailing view — backed by empirical findings […] — is that positive social exchanges lead to positive outcomes in teams, whereas negative social exchanges lead to negative outcomes in teams. However, this view may be challenged, in that positive exchanges can sometime lead to negative outcomes, whereas negative exchanges may sometime lead to positive outcomes. For example, research on groupthink (Janis, 1972) suggests that highly cohesive groups can make suboptimal decisions. That is, cohesion […] can lead to suboptimal group performance. As another example, under certain circumstances, negative behavior (e.g., verbal attacks or sabotage directed at another member) by one person in the team could lead to a series of positive exchanges in the team. Such subsequent positive exchanges may involve stronger bonding among other members in support of the targeted member, enforcement of more positive and cordial behavioral norms among members, or resolution of possible conflict between members that might have led to this particular negative exchange.”

“[T]here is […] clear merit in considering social exchanges in teams from a social network perspective. Doing so requires the integration of dyadic-level processes with team-level processes. Specifically, to capture the extent to which certain forms of social exchange networks in teams are formed (e.g., friendship, instrumental, or rather adversary ties), researchers must first consider the dyadic exchanges or ties between all members in the team. Doing so can help researchers identify the extent to which certain forms of ties or other social exchanges are dense in the team […] An important question […] is whether the level of social exchange density in the team might moderate the effects of social exchanges, much like social exchanges strength might strengthen the effects of social exchanges […]. For example, might teams with denser social support networks be able to better handle negative social exchanges in the team when such exchanges emerge? […] the effects of differences in centrality and subgroupings or fault lines may vary, depending on certain factors. Specifically, being more central within the team’s network of social exchange may mean that the more central member receives more support from more members, or, rather, that the more central member is engaged in more negative social exchanges with more members. Likewise, subgroupings or fault lines in the team may lead to negative consequences when they are associated with lack of critical communication among members but not when they reflect the correct form of communication network […] social exchange constructs are likely to exert stronger influences on individual team members when exchanges are more highly shared (and reflected in more dense networks). By the same token, individuals are more likely to react to social exchanges in their team when exchanges are directed at them from more team members”.

“[C]ustomer relationship management (CRM) has garnered growing interest from both research and practice communities in marketing. The purpose of CRM is “to efficiently and effectively increase the acquisition and retention of profitable customers by selectively initiating, building and maintaining appropriate relationships with them” […] Research has shown that successfully implemented CRM programs result in positive outcomes. In a recent meta-analysis, Palmatier, Dant, Grewal, and Evans (2006) found that investments in relationship marketing have a large, direct effect on seller objective performance. In addition, there has been ample research demonstrating that the effects of relationship marketing on outcomes are mediated by relational constructs that include trust […] and commitment […]. Combining these individual predictors by examining the effects of the global construct of relationship quality is also predictive of positive firm performance […] Meta-analytic findings suggest that CRM is more effective when relationships are built with an individual person rather than a selling firm […] Gutek (1995) proposed a typology of service delivery relationships with customers: encounters, pseudo-relationships, and relationships. […] service encounters usually consist of a solitary interaction between a customer and a service employee, with the expectation that they will not interact in the future. […] in a service encounter, customers do not identify with either the individual service employee with whom they interact or with the service organization. […] An alternate to the service encounter relationship is the pseudorelationship, which arises when a customer interacts with different individual service employees but usually (if not always) from the same service organization […] in pseudo-relationships, the customer identifies with the service of a particular service organization, not with an individual service employee. Finally, personal service relationships emerge when customers have repeated interactions with the same individual service provider […] We argue that the nature of these different types of service relationships […] will influence the types and levels of resources exchanged between the customer and the employee during the service interaction, which may further affect customer and employee outcomes from the service interaction.”

“According to social exchange theory, individuals form relationships and engage in social interactions as a means of obtaining needed resources […]. Within a social exchange relationship, individuals may exchange a variety of resources, both tangible and intangible. In the study of exchange relationships, the content of the exchange, or what resources are being exchanged, is often used as an indicator of the quality of the relationship. On the one hand, the greater the quality of resources exchanged, the better the quality of the relationship; on the other hand, the better the relationship, the more likely these resources are exchanged. Therefore, it is important to understand the specific resources exchanged between the service provider and the customer […] Ferris and colleagues (2009) proposed that several elements of a relationship develop because of social exchange: trust, respect, affect, and support. In an interaction between a service provider and a customer, most of the resources that are exchanged are non-economic in nature […]. Examples include smiling, making eye contact, and speaking in a rhythmic (non-monotone) vocal tone […]. Through these gestures, the service provider and the customer may demonstrate a positive affect toward each other. In addition, greeting courteously, listening attentively to customers, and providing assistance to address customer needs may show the service provider’s respect and support to the customer; likewise, providing necessary information, clarifying their needs and expectations, cooperating with the service provider by following proper instructions, and showing gratitude to the service provider may indicate customers’ respect and support to the service provider. Further, through placing confidence in the fairness and honesty of the customer and accuracy of the information the customer provides, the service provider offers the customer his or her trust; similarly, through placing confidence in the expertise and good intentions of the service provider, the customer offers his or her trust in the service provider’s competence and integrity. Some of the resources exchanged, particularly special treatment, between a service provider and a customer are of both economic and social value. For example, the customer may receive special discounts or priority service, which not only offers the customer economic benefits but also shows how much the service provider values and supports the customer. Similarly, a service provider who receives an extra big tip from a customer is not only better off economically but also gains a sense of recognition and esteem. The more these social resources of trust, respect, affect, and support, as well as special treatment, are mutually exchanged in the provider–customer interactions, the higher the quality of the service interaction for both parties involved. […] we argue that the potential for the exchange of resources […] depends on the nature of the service relationship. In other words, the quantity and quality of resources exchanged in discrete service encounters, pseudo-relationships, and personal service relationships are distinct.”

Though customer–employee exchanges can be highly rewarding for both parties, they can also “turn ugly,” […]. In fact, though negative interactions such as rudeness, verbal abuse, or harassment are rare, employees are more likely to report them from customers than from coworkers or supervisors […] customer–employee exchanges are more likely to involve negative treatment than exchanges with organizational insiders. […] Such negative exchanges result in emotional labor and employee burnout […], covert sabotage of services or goods, or, in atypical cases […] direct retaliation and withdrawal […] Employee–customer exchanges are characterized by a strong power differential […] customers can influence the employees’ desired resources, have more choice over whether to continue the relationship, and can act in negative ways with few consequences (Yagil, 2008) […] One common way to conceptualize the impact of negative customer–employee interactions is Hirschman’s (1970) Exit-Voice-loyalty model. Management can learn of customers’ dissatisfaction by their reduced loyalty, voice, or exit. […] Customers rarely, if ever, see themselves as the source of the problem; in contrast, employees are highly likely to see customers as the reason for a negative exchange […] when employees feel customers’ allocation of resources (e.g., tips, purchases) are not commensurate with the time or energy expended (i.e., distributive injustice) or interpersonal treatment of employees is unjustified or violates norms (i.e., interactional injustice), they feel anger and anxiety […] Given these strong emotional responses, emotional deviance is a possible outcome in the service exchange. Emotional deviance is when employees violate display rules by expressing their negative feelings […] To avoid emotional deviance, service providers engage in emotion regulation […]. In lab and field settings, perceived customer mistreatment is linked to “emotional labor,” specifically regulating emotions by faking or suppressing emotions […] Customer mistreatment — incivility as well as verbal abuse — is well linked to employee burnout, and this effect exists beyond other job stressors (e.g., time pressure, constraints) and beyond mistreatment from supervisors and coworkers”.

Though a customer may complain or yell at an employee in hopes of improving service, most evidence suggests the opposite occurs. First, service providers tend to withdraw from negative or deviant customers (e.g., avoiding eye contact or going to the back room[)] […] Engaging in withdrawal or other counterproductive work behaviors (CWBs) in response to mistreatment can actually reduce burnout […], but the behavior is likely to create another dissatisfied customer or two in the meantime. Second, mistreatment can also result in the employees reduced task performance in the service exchange. Stressful work events redirect attention toward sense making, even when mistreatment is fairly ambiguous or mild […] and thus reduce cognitive performance […]. Regulating those negative emotions also requires attentional resources, and both surface and deep acting reduce memory recall compared with expressing felt emotions […] Moreover, the more that service providers feel exhausted and burned out, the less positive their interpersonal performance […] Finally, perceived incivility or aggressive treatment from customers, and the resulting job dissatisfaction, is a key predictor of intentional customer-directed deviant behavior or service sabotage […] Dissatisfied employees engage in less extra-effort behavior than satisfied employees […]. More insidious, they may engage in intentionally deviant performance that is likely to be covert […] and thus difficult to detect and manage […] Examples of service sabotage include intentionally giving the customer faulty or damaged goods, slowing down service pace, or making “mistakes” in the service transaction, all of which are then linked to lower service performance from the customers’ perspective […]. This creates a feedback loop from employee behaviors to customer perceptions […] Typical human resource practices can help service management […], and practices such as good selection and providing training should reduce the likelihood of service failures and the resulting negative reactions from customers […]. Support from colleagues can help buffer the reactions to customer-instigated mistreatment. Individual perceptions of social support moderate the strain from emotional labor […], and formal interventions increasing individual or unit-level social support reduce strain from emotionally demanding interactions with the public (Le Blanc, Hox, Schaufeli, & Taris, 2007).”

August 19, 2018 Posted by | Books, Psychology | Leave a comment

Some observations on a cryptographic problem

It’s been a long time since I last posted one of these ‘rootless’ posts – posts not based on a specific book or a specific lecture or something along those lines – but a question on r/science made me think about these topics and start writing a bit about them, and I decided I might as well add my thoughts and ideas here.

The reddit question which motivated me to write this post was this one: “Is it difficult to determine the password for an encryption if you are given both the encrypted and unencrypted message?

By “difficult” I mean requiring an inordinate amount of computation. If given both an encrypted and unencrypted file/message, is it reasonable to be able to recover the password that was used to encrypt the file/message?”

Judging from the way the question is worded, the inquirer knows very little about these topics, but that was part of what motivated me to start writing; s/he quite obviously has a faulty model of how this kind of stuff actually works, and the very way the question is asked illustrates some of the ways in which s/he gets things wrong.

When I decided to move the discussion of these topics to the blog I also implicitly decided against using language that the original inquirer could be expected to find easily comprehensible, as s/he was no longer in the target group and there’s a cost to using that kind of language when discussing technical matters. I have tried to make this post both useful and readable to people not all that familiar with the related fields, but I find it difficult to evaluate the extent to which I’ve succeeded at that.

I decided against repeating points already commented on when I started out writing this, so I won’t e.g. restate noiwontfixyourpc’s reply in the thread. However, I have added some other observations that seem to me relevant and worth mentioning to people who might consider asking a question similar to the one the original inquirer asked there:

i. Finding a way to make plaintext turn into cipher text (…or cipher text into plaintext; and no, these two things are not actually always equivalent – see below…) is a very different, and in many contexts a much easier, problem than finding out the actual encryption scheme that is at work producing the text strings you observe. There can be many, many different ways to go from a specific sample of plaintext to a specific sample of cipher text, and most of those solutions won’t work if you’re faced with a new piece of cipher text; especially not if the original samples are small, so that only a small amount of (potential) information can be expected to be contained in the text strings.

If you only get a small amount of plaintext and corresponding cipher text you may decide that algorithm A is the one that was applied to the message, even if the algorithm actually applied was a more complex algorithm, B. To illustrate in a very simple way how this might happen, A might be a particular case of B: B might be a more general scheme of which A – along with a large number of other potential encryption algorithms – is just one special case (…or B might itself be a special case of a still more general scheme C, and so on). In such a context A might be an encryption scheme/approach that perhaps only applies in very specific contexts; for example (part of) the coding algorithm might have been to decide that ‘on next Tuesday, we’ll use this specific algorithm to translate plaintext into cipher text, and we’ll never use that specific translation-/mapping algorithm (which may be but one component of the encryption algorithm) again’. If such a situation applies then you’re faced with the problem that even if your rule ‘worked’ in that particular instance, in terms of translating your plaintext into cipher text and vice versa, it only ‘worked’ because you blindly fitted the two data sets in a way that looked right, even though you actually had no idea how the coding scheme really worked (you only guessed A, not B, and in this particular instance A’s never actually going to happen again).
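
To make the point concrete, here is a toy Python sketch (mine, using repeating-key XOR purely as a stand-in for ‘some unknown scheme’; the key and the messages are of course made up): a candidate rule which perfectly reproduces the observed plaintext/cipher text pair can still be the wrong algorithm, and it falls apart as soon as a new message shows up:

def xor_bytes(data, key):  # repeating-key XOR, used here as a stand-in cipher
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

true_key = b"WEDNESDAYONLY"  # the (unknown) scheme actually in use
sample_pt = b"hello"
sample_ct = xor_bytes(sample_pt, true_key)

# Candidate 'algorithm A': guess that the key is simply the five keystream bytes we can see, repeated.
guessed_key = bytes(p ^ c for p, c in zip(sample_pt, sample_ct))
print(xor_bytes(sample_pt, guessed_key) == sample_ct)  # True - A 'works' on the observed sample

# But on a new, longer message the guess diverges from the real scheme almost immediately.
new_pt = b"see you next wednesday"
print(xor_bytes(new_pt, guessed_key) == xor_bytes(new_pt, true_key))  # False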

On a more general level, some of the above comments in my view quite obviously link to results from classical statistics; there are many ways to link random variables through data-fitting methods, but reliably identifying proper causal linkages through the application of such approaches is, well, difficult (and, according to some, often ill-advised)…

ii. In my view, it does not seem possible in general to prove that any specific proposed encryption/decryption algorithm is ‘the correct one’. This is because the proposed algorithm will never be a unique solution to the problem you’re evaluating. How are you going to convince me that The True Algorithm is not a more general/complex one (or perhaps a completely different one – see iii. below) than the one you propose, and that your solution is not missing relevant variables? The only way to truly test whether the proposed algorithm is a valid algorithm is to test it on new data and compare its performance on this new data set with the performances of competing candidate solutions which also managed to correctly link cipher text and plaintext. If the algorithm doesn’t work on the new data, you got it wrong. If it does work on the new data, well, you might still just have been lucky. You might get more confident with more correctly-assessed (…guessed?) data, but you never become certain. In similar contexts a not uncommon approach for trying to get around these sorts of problems is to limit the analysis to a subset of the available data in order to obtain the algorithm, and then use the rest of the data for validation purposes (here’s a relevant link), but here even with highly efficient estimation approaches you will almost certainly run out of information (/degrees of freedom) long before you get anywhere if the encryption algorithm is at all non-trivial. In these settings information is likely to be a limiting resource.
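
Here is a cartoonish illustration of the train/validate idea (my own sketch, with a deliberately trivial ‘cipher’): both candidate rules below reproduce the training pair perfectly, and only the held-out pair reveals that one of them was just blindly fitted to the sample:

def true_scheme(pt):  # the (unknown) real algorithm: shift every byte by 3
    return bytes((b + 3) % 256 for b in pt)

train_pt, train_ct = b"ab", true_scheme(b"ab")
valid_pt, valid_ct = b"cab", true_scheme(b"cab")

def candidate_shift(pt):  # hypothesis 1: a constant shift, fitted from the training sample
    shift = (train_ct[0] - train_pt[0]) % 256
    return bytes((b + shift) % 256 for b in pt)

lookup = dict(zip(train_pt, train_ct))  # hypothesis 2: a lookup table of the bytes seen so far
def candidate_table(pt):
    return bytes(lookup.get(b, b) for b in pt)

for cand in (candidate_shift, candidate_table):
    print(cand.__name__, cand(train_pt) == train_ct, cand(valid_pt) == valid_ct)
# candidate_shift True True
# candidate_table True False  - only the held-out data separates the two hypotheses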

iii. There are many different types of encryption schemes, and people who ask questions like the one above tend, I believe, to have a quite limited view of which methods and approaches are truly available to one who desires secrecy when exchanging information with others. Imagine a situation where the plaintext is ‘See you next Wednesday’ and the encrypted text is an English translation of Tolstoy’s book War and Peace (or, to make it even more fun, all pages published on the English version of Wikipedia, say on November the 5th, 2017 at midnight GMT). That’s an available encryption approach that might be applied. It might be a part (‘A’) of a more general (‘B’) encryption approach of linking specific messages from a preconceived list of messages, which had been considered worth sending in the future when the algorithm was chosen, to specific book titles decided on in advance. So if you want to say ‘good Sunday!’, Eve gets to read the Bible and see where that gets her. You could also decide that in half of all cases the book cipher text links to specific messages from a list but in the other half of the cases what you actually mean to communicate is on page 21 of the book; this might throw a hacker who saw a combined cipher text and plaintext combination resulting from that part of the algorithm off in terms of the other half, and vice versa – and it illustrates well one of the key problems you’re faced with as an attacker when working on cryptographic schemes about which you have limited knowledge; the opponent can always add new layers on top of the ones that already exist/apply to make the problem harder to solve. And so you could also link the specific list message with some really complicated cipher-encrypted version of the Bible. There’s a lot more to encryption schemes than just exchanging a few letters here and there. On related topics, see this link. On a different if related topic, people who desire secrecy when exchanging information may also attempt to try to hide the fact that any secrets are exchanged in the first place. See also this.

iv. The specific usage of the word ‘password’ in the original query calls for comment for multiple reasons, some of which have been touched upon above, perhaps mainly because it implicitly betrays a lack of knowledge about how modern cryptographic systems actually work. The thing is, even if you might consider an encryption scheme to just be an advanced sort of ‘password’, finding the password (singular) is not always the task you’re faced with today. In symmetric-key algorithm settings you might sort-of-kind-of argue that it sort-of is – in such settings you might say that you have one single (collection of) key(s) which you use to encrypt messages and also use to decrypt the messages. So you can both encrypt and decrypt the message using the same key(s), and so you only have one ‘password’. That’s however not how asymmetric-key encryption works. As wiki puts it: “In an asymmetric key encryption scheme, anyone can encrypt messages using the public key, but only the holder of the paired private key can decrypt.”

This of course relates to what you actually want to do/achieve when you get your samples of cipher text and plaintext. In some cryptographic contexts, by design, the route you need to go to get from cipher text to plaintext is conceptually different from the route you need to go to get from plaintext to cipher text. And some of the ‘passwords’ that relate to how the schemes work are public knowledge by design.
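
Here is a deliberately toy sketch (mine, using textbook-RSA numbers far too small for any real use) of the asymmetric-key point: the key used to encrypt is not the key used to decrypt, and knowing a plaintext, its cipher text, and the public key still leaves you with a factoring problem if what you want is the private key:

p, q = 61, 53
n = p * q                          # 3233 - the public modulus
e = 17                             # public exponent: (n, e) is the public key, known to everyone
d = pow(e, -1, (p - 1) * (q - 1))  # 2753 - private exponent, derived from the secret factors (Python 3.8+)

m = 65                             # a 'message' (just a number below n, as in textbook RSA)
c = pow(m, e, n)                   # anyone can encrypt with the public key
print(pow(c, d, n) == m)           # True - but only the holder of d can decrypt

# Eve may know m, c and the public key (n, e); recovering d still requires factoring n,
# which is only feasible here because n is tiny.
factor = next(k for k in range(2, n) if n % k == 0)
print(factor, n // factor)         # 53 61 - hopeless for a 2048-bit modulus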

v. I have already touched a bit upon the existence of an information constraint, but I realized I probably need to spell this out in a bit more detail. The original inquirer seems to me to be implicitly under the misapprehension that computational complexity is the only limiting constraint here (“By “difficult” I mean requiring an inordinate amount of computation.”). Given the setting he or she proposes, I don’t think that’s true, and the reason why is sort of interesting.

If you think about what kind of problem you’re facing, what you have here in this setting is really a very limited amount of data which relates in an unknown manner to an unknown data-generating process (‘algorithm’). There are, as has been touched upon, in general many ways to obtain linkage between two data sets (the cipher text and the plaintext) using an algorithm – too many ways for comfort, actually. The search space is large, there are too many algorithms to consider; or equivalently, the amount of information supplied by the data will often be too small for us to properly evaluate the algorithms under consideration. An important observation is that more complex algorithms will both take longer to calculate (‘identify’ …at least as candidates) and be expected to require more data to evaluate, at least to the extent that algorithmic complexity constrains the data (/relates to changes in data structure/composition that needs to be modeled in order to evaluate/identify the goal algorithm). If the algorithm says a different encryption rule is at work on Wednesdays, you’re going to have trouble figuring that out if you only got hold of a cipher text/plaintext combination derived from an exchange which took place on a Saturday. There are methods from statistics that might conceivably help you deal with problems like these, but they have their own issues and trade-offs. You might limit yourself to considering only settings where you have access to all known plaintext and cipher text combinations, so you got both Wednesday and Saturday, but even here you can’t be safe – next (metaphorical, I probably at this point need to add) Friday might be different from last (metaphorical) Friday, and this could even be baked into the algorithm in very non-obvious ways.

The above remarks might give you the idea that I’m just coming up with these kinds of suggestions to try to foil your approaches to figuring out the algorithm ‘by cheating’ (…it shouldn’t matter whether or not it was ‘sent on a Saturday’), but the main point is that a complex encryption algorithm is complex, and even if you see it applied multiple times you might not get enough information about how it works from the data suggested to be able to evaluate if you guessed right. In fact, given a combination of a sparse data set (one message, or just a few messages, in plaintext and cipher text) and a complex algorithm involving a very non-obvious mapping function, the odds are strongly against you.

vi. I had the thought that one reason why the inquirer might be confused about some of these things is that s/he might well be aware of the existence of modern cryptographic techniques which do rely to a significant extent on computational complexity aspects. I.e., here you do have settings where you’re asked to provide ‘the right answer’ (‘the password’), but it’s hard to calculate the right answer in a reasonable amount of time unless you have the relevant (private) information at hand – see e.g. these links for more. One way to think about how such a problem relates to the other problem at hand (you have been presented with samples of cipher text and plaintext and you want to guess all the details about how the encryption and decryption schemes which were applied work) is that this kind of algorithm/approach may be applied in combination with other algorithmic approaches to encrypt/decrypt the text you’re analyzing. A really tough prime factorization problem might for all we know be an embedded component of the cryptographic process that is applied to our text. We could call it A.

In such a situation we would definitely be in trouble because stuff like prime factorization is really hard and computationally complex, and to make matters worse just looking at the plaintext and the cipher text would not make it obvious to us that a prime factorization scheme had even been applied to the data. But a really important point is that even if such a tough problem was not present and even if only relatively less computationally demanding problems were involved, we almost certainly still just wouldn’t have enough information to break any semi-decent encryption algorithm based on a small sample of plaintext and cipher text. It might help a little bit, but in the setting contemplated by the inquirer a ‘faster computer’ (/…’more efficient decision algorithm’, etc.) can only help so much.

vii. Shannon and Kerckhoffs may have a point in a general setting, but in specific settings like this particular one I think it is well worth taking into account the implications of not having a (publicly) known algorithm to attack. As wiki notes (see the previous link), ‘Many ciphers are actually based on publicly known algorithms or are open source and so it is only the difficulty of obtaining the key that determines security of the system’. The above remarks were of course all based on the assumption that Eve does not here have the sort of knowledge about the encryption scheme applied that she in many cases today actually might have. There are obvious and well-known weaknesses associated with having the security of a cryptographic scheme depend on components other than the key, but in this particular setting I do not see how keeping the algorithm secret would fail to cause a search-space blow-up which makes the decision problem (did we actually guess right?) intractable in many cases. A key feature of the problem considered by the inquirer is that here – unlike in many ‘guess the password’ settings where for example a correct password will give you access to an application or a document or whatever – you do not get any feedback, either when you guess right or when you guess wrong; it’s a decision problem, not a calculation problem. (However it is perhaps worth noting on the other hand that in a ‘standard’ guess-the-password problem you may also sometimes implicitly face a similar decision problem, due to e.g. the potential for a combination of cryptographic security and complementary steganographic strategies like e.g. these having been applied.)

August 14, 2018 Posted by | Computer science, Cryptography, Data, rambling nonsense, Statistics | Leave a comment

Personal Relationships… (II)

Some more observations from the book below:

Coworker support, or the processes by which coworkers provide assistance with tasks, information, or empathy, has long been considered an important construct in the stress and strain literature […] Social support fits the conservation of resources theory definition of a resource, and it is commonly viewed in that light […]. Support from coworkers helps employees meet the demands of their job, thus making strain less likely […]. In a sense, social support is the currency upon which social exchanges are based. […] The personality of coworkers can play an important role in the development of positive coworker relationships. For example, there is ample evidence that suggests that those higher in conscientiousness and agreeableness are more likely to help coworkers […] Further, similarity in personality between coworkers (e.g., coworkers who are similar in their conscientiousness) draws coworkers together into closer relationships […] cross-sex relationships appear to be managed in a different manner than same-sex relationships. […] members of cross-sex friendships fear the misinterpretation of their relationship by those outside the relationship as a sexual relationship rather than platonic […] a key goal of partners in a cross-sex workplace friendship becomes convincing “third parties that the friendship is authentic.” As a result, cross-sex workplace friends will intentionally limit the intimacy of their communication or limit their non-work-related communication to situations perceived to demonstrate a nonsexual relationship, such as socializing with a cross-sex friend only in the presence of his or her spouse […] demographic dissimilarity in age and race can reduce the likelihood of positive coworker relationships. Chattopadhyay (1999) found that greater dissimilarity among group members on age and race were associated with less collegial relationships among coworkers, which was subsequently associated with less altruistic behavior […] Sias and Cahill (1998) found that a variety of situational characteristics, both inside and outside the workplace setting, helps to predict the development of workplace friendship. For example, they found that factors outside the workplace, such as shared outside interests (e.g., similar hobbies), life events (e.g., having a child), and the simple passing of time can lead to a greater likelihood of a friendship developing. Moreover, internal workplace characteristics, including working together on tasks, physical proximity within the office, a common problem or enemy, and significant amounts of “downtime” that allow for greater socialization, also support friendship development in the workplace (see also Fine, 1986).”

“To build knowledge, employees need to be willing to learn and try new things. Positive relationships are associated with a higher willingness to engage in learning and experimentation […] and, importantly, sharing of that new knowledge to benefit others […] Knowledge sharing is dependent on high-quality communication between relational partners […] Positive relationships are characterized by less defensive communication when relational partners provide feedback (e.g., a suggestion for a better way to accomplish a task; Roberts, 2007). In a coworker context, this would involve accepting help from coworkers without putting up barriers to that help (e.g., nonverbal cues that the help is not appreciated or welcome). […] A recent meta-analysis by Chiaburu and Harrison (2008) found that coworker support was associated with higher performance and higher organizational citizenship behavior (both directed at individuals and directed at the organization broadly). These relationships held whether performance was self- or supervisor related […] Chiaburu and Harrison (2008) also found that coworker support was associated with higher satisfaction and organizational commitment […] Positive coworker exchanges are also associated with lower levels of employee withdrawal, including absenteeism, intention to turnover, and actual turnover […]. To some extent, these relationships may result from norms within the workplace, as coworkers help to set standards for behavior and not “being there” for other coworkers, particularly in situations where the work is highly interdependent, may be considered a significant violation of social norms within a positive working environment […] Perhaps not surprisingly, given the proximity and the amount of time spent with coworkers, workplace friendships will occasionally develop into romances and, potentially, marriages. While still small, the literature on married coworkers suggests that they experience a number of benefits, including lower emotional exhaustion […] and more effective coping strategies […] Married coworkers are an interesting population to examine, largely because their work and family roles are so highly integrated […]. As a result, both resources and demands are more likely to spill over between the work and family role for married coworkers […] Janning and Neely (2006) found that married coworkers were more likely to talk about work-related issues while at home than married couples that had no work-related link.”

Negative exchanges [between coworkers] are characterized by behaviors that are generally undesirable, disrespectful, and harmful to the focal employee or employees. Scholars have found that these negative exchanges influence the same outcomes as positive, supporting exchanges, but in opposite directions. For instance, in their recent meta-analysis of 161 independent studies, Chiaburu and Harrison (2008) found that antagonistic coworker exchanges are negatively related to job satisfaction, organizational commitment, and task performance and positively related to absenteeism, intent to quit, turnover, and counterproductive work behaviors. Unfortunately, despite the recent popularity of the negative exchange research, this literature still lacks construct clarity and definitional precision. […] Because these behaviors have generally referred to acts that impact both coworkers and the organization as a whole, much of this work fails to distinguish social interactions targeting specific individuals within the organization from the nonsocial behaviors explicitly targeting the overall organization. This is unfortunate given that coworker-focused actions and organization-focused actions represent unique dimensions of organizational behavior […] negative exchanges are likely to be preceded by certain antecedents. […] Antecedents may stem from characteristics of the enactor, of the target, or of the context in which the behaviors occur. For example, to the extent that enactors are low on socially relevant personality traits such as agreeableness, emotional stability, or extraversion […], they may be more prone to initiate a negative exchange. Likewise, an enactor who is a high Machiavellian may initiate a negative exchange with the goal of gaining power or establishing control over the target. Antagonistic behaviors may also occur as reciprocation for a previous attack (real or imagined) or as a proactive deterrent against a potential future negative behavior from the target. Similarly, enactors may initiate antagonism based on their perceptions of a coworker’s behavioral characteristics such as suboptimal productivity or weak work ethic. […] The reward system can also play a role as an antecedent condition for antagonism. When coworkers are highly interdependent and receive rewards based on the performance of the group as opposed to each individual, the incidence of antagonism may increase when there is substantial variance in performance among coworkers.”

“[E]mpirical evidence suggests that some people have certain traits that make them more vulnerable to coworker attacks. For example, employees with low self-esteem, low emotional stability, high introversion, or high submissiveness are more inclined to be the recipients of negative coworker behaviors […]. Furthermore, research also shows that people who engage in negative behaviors are likely to also become the targets of these behaviors […] Two of the most commonly studied workplace attitudes are employee job satisfaction […] and affective organizational commitment […] Chiaburu and Harrison (2008) linked general coworker antagonism with both attitudes. Further, the specific behaviors of bullying and incivility have also been found to adversely affect both job satisfaction and organizational commitment […]. A variety of behavioral outcomes have also been identified as outcomes of coworker antagonism. Withdrawal behaviors such as absenteeism, intention to quit, turnover, effort reduction […] are typical responses […] those who have been targeted by aggression are more likely to engage in aggression. […] Feelings of anger, fear, and negative mood have also been shown to mediate the effects of interpersonal mistreatment on behaviors such as withdrawal and turnover […] [T]he combination of enactor and target characteristics is likely to play an antecedent role to these exchanges. For instance, research in the diversity area suggests that people tend to be more comfortable around those with whom they are similar and less comfortable around people with whom they are dissimilar […] there may be a greater incidence of coworker antagonism in more highly diverse settings than in settings characterized by less diversity. […] research has suggested that antagonistic behaviors, while harmful to the target or focal employee, may actually be beneficial to the enactor of the exchange. […] Krischer, Penney, and Hunter (2010) recently found that certain types of counterproductive work behaviors targeting the organization may actually provide employees with a coping mechanism that ultimately reduces their level of emotional exhaustion.”

CWB [counterproductive work behaviors] toward others is composed of volitional acts that harm people at work; in our discussion this would refer to coworkers. […] person-oriented organizational citizenship behaviors (OCB; Organ, 1988) consist of behaviors that help others in the workplace. This might include sharing job knowledge with a coworker or helping a coworker who had too much to do […] Social support is often divided into the two forms of emotional support that helps people deal with negative feelings in response to demanding situations versus instrumental support that provides tangible aid in directly dealing with work demands […] one might expect that instrumental social support would be more strongly related to positive exchanges and positive relationships. […] coworker social support […] has [however] been shown to relate to strains (burnout) in a meta-analysis (Halbesleben, 2006). […] Griffin et al. suggested that low levels of the Five Factor Model […] dimensions of agreeableness, emotional stability, and extraversion might all contribute to negative behaviors. Support can be found for the connection between two of these personality characteristics and CWB. […] Berry, Ones, and Sackett (2007) showed in their meta-analysis that person-focused CWB (they used the term deviance) had significant mean correlations of –.20 with emotional stability and –.36 with agreeableness […] there was a significant relationship with conscientiousness (r = –.19). Thus, agreeable, conscientious, and emotionally stable individuals are less likely to engage in CWB directed toward people and would be expected to have fewer negative exchanges and better relationships with coworkers. […] Halbesleben […] suggests that individuals high on the Five Factor Model […] dimensions of agreeableness and conscientiousness would have more positive exchanges because they are more likely to engage in helping behavior. […] a meta-analysis has shown that both of these personality variables relate to the altruism factor of OCB in the direction expected […]. Specifically, the mean correlations of OCB were .13 for agreeableness and .22 for conscientiousness. Thus, individuals high on these two personality dimensions should have more positive coworker exchanges.”

There is a long history of research in social psychology supporting the idea that people tend to be attracted to, bond, and form friendships with others they believe to be similar […], and this is true whether the similarity is rooted in demographics that are fairly easy to observe […] or in attitudes, beliefs, and values that are more difficult to observe […] Social network scholars refer to this phenomenon as homophily, or the notion that “similarity breeds connection” […] although evidence of homophily has been found to exist in many different types of relationships, including marriage, frequency of communication, and career support, it is perhaps most evident in the formation of friendships […] We extend this line of research and propose that, in a team context that provides opportunities for tie formation, greater levels of perceived similarity among team members will be positively associated with the number of friendship ties among team members. […] A chief function of friendship ties is to provide an outlet for individuals to disclose and manage emotions. […] friendship is understood as a form of support that is not related to work tasks directly; rather, it is a “backstage resource” that allows employees to cope with demands by creating distance between them and their work roles […]. Thus, we propose that friendship network ties will be especially important in providing the type of coping resources that should foster team member well-being. Unfortunately, however, friendship network ties negatively impact team members’ ability to focus on their work tasks, and, in turn, this detracts from taskwork. […] When friends discuss nonwork topics, these individuals will be distracted from work tasks and will be exposed to off-task information exchanged in informal relationships that is irrelevant for performing one’s job. Additionally, distractions can hinder individuals’ ability to become completely engaged in their work (Jett & George).”

“Although teams are designed to meet important goals for both companies and their employees, not all team members work together well.
Teams are frequently “cruel to their members” […] through a variety of negative team member exchanges (NTMEs) including mobbing, bullying, incivility, social undermining, and sexual harassment. […] Team membership offers identity […], stability, and security — positive feelings that often elevate work teams to powerful positions in employees’ lives […], so that members are acutely aware of how their teammates treat them. […] NTMEs may evoke stronger emotional, attitudinal, and behavioral consequences than negative encounters with nonteam members. In brief, team members who are targeted for NTMEs are likely to experience profound threats to personal identity, security, and stability […] when a team member targets another for negative interpersonal treatment, the target is likely to perceive that the entire group is behind the attack rather than the specific instigator alone […] Studies have found that NTMEs […] are associated with poor psychological outcomes such as depression; undesirable work attitudes such as low affective commitment, job dissatisfaction, and low organization-based self-esteem; and counterproductive behaviors such as deviance, job withdrawal, and unethical behavior […] Some initial evidence has also indicated that perceptions of rejection mediate the effects of NTMEs on target outcomes […] Perceptions of the comparative treatment of other team members are an important factor in reactions to NTMEs […]. When targets perceive they are “singled out,” NTMEs will cause more pronounced effects […] A significant body of literature has suggested that individuals guide their own behaviors through environmental social cues that they glean from observing the norms and values of others. Thus, the negative effects of NTMEs may extend beyond the specific targets; NTMEs can spread contagiously to other team members […]. The more interdependent the social actors in the team setting, the stronger and more salient will be the social cues […] [There] is evidence that as team members see others enacting NTMEs, their inhibitions against such behaviors are lowered.”

August 13, 2018 Posted by | Books, Psychology | Leave a comment

Promoting the unknown…

i.

ii.

iii.

iv.

v.

August 10, 2018 Posted by | Music | Leave a comment

Personal Relationships… (I)

“Across subdisciplines of psychology, research finds that positive, fulfilling, and satisfying relationships contribute to life satisfaction, psychological health, and physical well-being whereas negative, destructive, and unsatisfying relationships have a whole host of detrimental psychological and physical effects. This is because humans possess a fundamental “need to belong” […], characterized by the motivation to form and maintain lasting, positive, and significant relationships with others. The need to belong is fueled by frequent and pleasant relational exchanges with others and thwarted when one feels excluded, rejected, and hurt by others. […] This book uses research and theory on the need to belong as a foundation to explore how five different types of relationships influence employee attitudes, behaviors, and well-being. They include relationships with supervisors, coworkers, team members, customers, and individuals in one’s nonwork life. […] This book is written for a scientist–practitioner audience and targeted to both researchers and human resource management professionals. The contributors highlight both theoretical and practical implications in their respective chapters, with a common emphasis on how to create and sustain an organizational climate that values positive relationships and deters negative interpersonal experiences. Due to the breadth of topics covered in this edited volume, the book is also appropriate for advanced specialty undergraduate or graduate courses on I/O psychology, human resource management, and organizational behavior.”

The kind of stuff covered in books like this one relates closely to social stuff I lack knowledge about and/or am just not very good at handling. I don’t think too highly of this book’s coverage so far, but that’s at least partly due to the kinds of topics covered – it is what it is.

Below I have added some quotes from the first few chapters of the book.

“Work relationships are important to study in that they can exert a strong influence on employees’ attitudes and behaviors […].The research evidence is robust and consistent; positive relational interactions at work are associated with more favorable work attitudes, less work-related strain, and greater well-being (for reviews see Dutton & Ragins, 2007; Grant & Parker, 2009). On the other side of the social ledger, negative relational interactions at work induce greater strain reactions, create negative affective reactions, and reduce well-being […]. The relationship science literature is clear, social connection has a causal effect on individual health and well-being”.

“[One] way to view relationships is to consider the different dimensions by which relationships vary. An array of dimensions that underlie relationships has been proposed […] Affective tone reflects the degree of positive and negative feelings and emotions within the relationship […] Relationships and groups marked by greater positive affective tone convey more enthusiasm, excitement, and elation for each other, while relationships consisting of more negative affective tone express more fear, distress, and scorn. […] Emotional carrying capacity refers to the extent that the relationship can handle the expression of a full range of negative and position emotions as well as the quantity of emotion expressed […]. High-quality relationships have the ability to withstand the expression of more emotion and a greater variety of emotion […] Interdependence involves ongoing chains of mutual influence between two people […]. Degree of relationship interdependency is reflected through frequency, strength, and span of influence. […] A high degree of interdependence is commonly thought to be one of the hallmarks of a close relationship […] Intimacy is composed of two fundamental components: self-disclosure and partner responsiveness […]. Responsiveness involves the extent that relationship partners understand, validate, and care for one another. Disclosure refers to verbal communications of personally relevant information, thoughts, and feelings. Divulging more emotionally charged information of a highly personal nature is associated with greater intimacy […]. Disclosure tends to proceed from the superficial to the more intimate and expands in breadth over time […] Power refers to the degree that dominance shapes the relationship […] relationships marked by a power differential are more likely to involve unidirectional interactions. Equivalent power tends to facilitate bidirectional exchanges […] Tensility is the extent that the relationship can bend and endure strain in the face of challenges and setbacks […]. Relationship tensility contributes to psychological safety within the relationship. […] Trust is the belief that relationship partners can be depended upon and care about their partner’s needs and interests […] Relationships that include a great deal of trust are stronger and more resilient. A breach of trust can be one of the most difficult relationships challenges to overcome (Pratt & dirks, 2007).”

“Relationships are separate entities from the individuals involved in the relationships. The relationship unit (typically a dyad) operates at a different level of analysis from the individual unit. […] For those who conduct research on groups or organizations, it is clear that operations at a group level […] operate at a different level than individual psychology, and it is not merely the aggregate of the individuals involved in the relationship. […] operations at one level (e.g., relationships) can influence behavior at the other level (e.g., individual). […] relationships are best thought of as existing at their own level of analysis, but one that interacts with other levels of analysis, such as individual and group or cultural levels. Relationships cannot be reduced to the actions of the individuals in them or the social structures where they reside but instead interact with the individual and group processes in interesting ways to produce behaviors. […] it is challenging to assess causality via experimental procedures when studying relationships. […] Experimental procedures are crucial for making inferences of causation but are particularly difficult in the case of relationships because it is tough to manipulate many important relationships (e.g., love, marriage, sibling relationships). […] relationships are difficult to observe at the very beginning and at the end, so methods have been developed to facilitate this.”

“[T]he organizational research could […] benefit from the use of theoretical models from the broader relationships literature. […] Interdependence theory is hardly ever seen in organizations. There was some fascinating work in this area a few decades ago, especially in interdependence theory with the investment model […]. This work focused on the precursors of commitment in the workplace and found that, like romantic relationships, the variables of satisfaction, investments, and alternatives played key roles in this process. The result is that when satisfaction and investments are high and alternative opportunities are low, commitment is high. However, it also means that if investments are sufficiently high and alternatives are sufficiently low, then satisfaction can by lowered and commitment will remain high — hence, the investment model is useful for understanding exploitation (Rusbult, Campbell, & Price, 1990).”

“Because they cross formal levels in the organizational hierarchy, supervisory relationships necessarily involve an imbalance in formal power. […] A review by Keltner, Gruenfeld, and Anderson (2003) suggests that power affects how people experience emotions, whether they attend more to rewards or threats, how they process information, and the extent to which they inhibit their behavior around others. The literature clearly suggests that power influences affect, cognition, and behavior in ways that might tend to constrain the formation of positive relationships between individuals with varying degrees of power. […] The power literature is clear in showing that more powerful individuals attend less to their social context, including the people in it, than do less powerful individuals, and the literature suggests that supervisors (compared with subordinates) might tend to place less value on the relationship and be less attuned to their partner’s needs. Yet the formal power accorded to supervisors by the organization — via the supervisory role — is accompanied by the role prescribed responsibility for the performance, motivation, and well-being of subordinates. Thus, the accountability for the formation of a positive supervisory relationship lies more heavily with the supervisor. […] As we examine the qualities of positive supervisory relationships, we make a clear distinction between effective supervisory behaviors and positive supervisory relationships. This is an important distinction […] a large body of leadership research has focused on traits or behaviors of supervisors […] and the affective, motivational, and behavioral responses of employees to those behaviors, with little attention paid to the interactions between the two. There are two practical implications of moving the focus from individuals to relationships: (1) supervisors who use “effective” leadership behaviors may or may not have positive relationships with employees; and (2) supervisors who have a positive relationship with one employee may not have equally positive relationships with other employees, even if they use the same “effective” behaviors.”

“There is a large and well-developed stream of research that focuses explicitly on exchanges between supervisors and the employees who report directly to them. Leader–member exchange theory addresses the various types of functional relationships that can be formed between supervisors and subordinates. A core assumption of LMX theory is that supervisors do not have the time or resources to develop equally positive relationships with all subordinates. Thus, to minimize their investment and yield the greatest results for the organization, supervisors would develop close relationships with only a few subordinates […] These few high-quality relationships are marked by high levels of trust, loyalty, and support, whereas the balance of supervisory relationships are contractual in nature and depend on timely rewards allotted by supervisors in direct exchange for desirable behaviors […] There has been considerable confusion and debate in the literature about LMX theory and the construct validity of LMX measures […] Despite shortcomings in LMX research, it is [however] clear that supervisors form relationships of varying quality with subordinates […] Among factors associated with high LMX are the supervisor’s level of agreeableness […] and the employee’s level of extraversion […], feedback seeking […], and (negatively) negative affectivity […]. Those who perceived similarity in terms of family, money, career strategies, goals in life, education […], and gender […] also reported high LMX. […] Employee LMX is strongly related to attitudes, such as job satisfaction […] Supporting the notion that a positive supervisory relationship is good for employees, the LMX literature is replete with studies linking high LMX with thriving and autonomous motivation. […] The premise of the LMX research is that supervisory resources are limited and high-quality relationships are demanding. Thus, supervisors will be most effective when they allocate their resources efficiently and effectively, forming some high-quality and some instrumental relationships. But the empirical research from the LMX literature provides little (if any) evidence that supervisors who differentiate are more effective”.

“The norm of negative reciprocity obligates targets of harm to reciprocate with actions that produce roughly equivalent levels of harm — if someone is unkind to me, I should be approximately as unkind to him or her. […] But the trajectory of negative reciprocity differs in important ways when there are power asymmetries between the parties involved in a negative exchange relationship. The workplace revenge literature suggests that low-power targets of hostility generally withhold retaliatory acts. […] In exchange relationships where one actor is more dependent on the other for valued resources, the dependent/less powerful actor’s ability to satisfy his or her self-interests will be constrained […]. Subordinate targets of supervisor hostility should therefore be less able (than supervisor targets of subordinate hostility) to return the injuries they sustain […] To the extent subordinate contributions to negative exchanges are likely to trigger disciplinary responses by the supervisor target (e.g., reprimands, demotion, transfer, or termination), we can expect that subordinates will withhold negative reciprocity.”

“In the last dozen years, much has been learned about the contributions that supervisors make to negative exchanges with subordinates. […] Several dozen studies have examined the consequences of supervisor contributions to negative exchanges. This work suggests that exposure to supervisor hostility is negatively related to subordinates’ satisfaction with the job […], affective commitment to the organization […], and both in-role and extra-role performance contributions […] and is positively related to subordinates’ psychological distress […], problem drinking […], and unit-level counterproductive work behavior […]. Exposure to supervisor hostility has also been linked with family undermining behavior — employees who are the targets of abusive supervision are more likely to be hostile toward their own family members […] Most studies of supervisor hostility have accounted for moderating factors — individual and situational factors that buffer or exacerbate the effects of exposure. For example, Tepper (2000) found that the injurious effects of supervisor hostility on employees’ attitudes and strain reactions were stronger when subordinates have less job mobility and therefore feel trapped in jobs that deplete their coping resources. […] Duffy, Ganster, Shaw, Johnson, and Pagon (2006) found that the effects of supervisor hostility are more pronounced when subordinates are singled out rather than targeted along with multiple coworkers. […] work suggests that the effects of abusive supervision on subordinates’ strain reactions are weaker when subordinates employ impression management strategies […] and more confrontational (as opposed to avoidant) communication tactics […]. It is clear that not all subordinates react the same way to supervisor hostility and characteristics of subordinates and the context influence the trajectory of subordinates’ responses. […] In a meta-analytic examination of studies of the correlates of supervisor-directed hostility, Herschovis et al. (2007) found support for the idea that subordinates who believe that they have been the target of mistreatment are more likely to lash out at their supervisors. […] perhaps just as interesting as the associations that have been uncovered are several hypothesized associations that have not emerged. Greenberg and Barling (1999) found that supervisor-directed aggression was unrelated to subordinates’ alcohol consumption, history of aggression, and job security. Other work has revealed mixed results for the prediction that subordinate self-esteem will negatively predict supervisor-directed hostility (Inness, Barling, & Turner, 2005). […] Negative exchanges between supervisors and subordinates do not play out in isolation — others observe them and are affected by them. Yet little is known about the affective, cognitive, and behavioral responses of third parties to negative exchanges with supervisors.”

August 8, 2018 Posted by | Books, Psychology | Leave a comment

Combinatorics (I)

This book is not a particularly easy read compared to the general format of the series in which it is published, but in my view this is a good thing, as it also means the author managed to go into enough detail in specific contexts to touch upon at least some properties/topics of interest. You don’t need any specific background knowledge to read and understand the book – at least not any sort of background knowledge one would not expect someone who might decide to read a book like this one to already have – but you do need, when reading it, the sort of mental surplus that enables you to think carefully about what’s going on and to devote a few mental resources to understanding the details.

Some quotes and links from the first half of the book below.

“The subject of combinatorial analysis or combinatorics […] [w]e may loosely describe [as] the branch of mathematics concerned with selecting, arranging, constructing, classifying, and counting or listing things. […] the subject involves finite sets or discrete elements that proceed in separate steps […] rather than continuous systems […] Mathematicians sometimes use the term ‘combinatorics’ to refer to a larger subset of discrete mathematics that includes graph theory. In that case, what is commonly called combinatorics is then referred to as ‘enumeration’. […] Combinatorics now includes a wide range of topics, some of which we cover in this book, such as the geometry of tilings and polyhedra […], the theory of graphs […], magic squares and latin squares […], block designs and finite projective planes […], and partitions of numbers […]. [The] chapters [of the book] are largely independent of each other and can be read in any order. Much of combinatorics originated in recreational pastimes […] in recent years the subject has developed in depth and variety and has increasingly become a part of mainstream mathematics. […] Undoubtedly part of the reason for the subject’s recent importance has arisen from the growth of computer science and the increasing use of algorithmic methods for solving real-world practical problems. These have led to combinatorial applications in a wide range of subject areas, both within and outside mathematics, including network analysis, coding theory, probability, virology, experimental design, scheduling, and operations research.”

“[C]ombinatorics is primarily concerned with four types of problem:
Existence problem: Does □□□ exist?
Construction problem: If □□□ exists, how can we construct it?
Enumeration problem: How many □□□ are there?
Optimization problem: Which □□□ is best? […]
[T]hese types of problems are not unrelated; for example, the easiest way to prove that something exists may be to construct it explicitly.”

“In this book we consider two types of enumeration problem – counting problems in which we simply wish to know the number of objects involved, and listing problems in which we want to list them all explicitly. […] It’s useful to have some basic counting rules […] In what follows, all the sets are finite. […] In general we have the following rule; here, subsets are disjoint if they have no objects in common: Addition rule: To find the number of objects in a set, split the set into disjoint subsets, count the objects in each subset, and add the results. […] Subtraction rule: If a set of objects can be split into two subsets A and B, then the number of objects in B is obtained by subtracting the number of objects in A from the number in the whole set. […] The subtraction rule extends easily to sets that are split into more than two subsets with no elements in common. […] the inclusion-exclusion principle […] extends this simple idea to the situation where the subsets may have objects in common. […] In general we have the following result: Multiplication rule: If a counting problem can be split into stages with several options at each stage, then the total number of possibilities is the product of options at each stage. […] Another useful principle in combinatorics is the following: Correspondence rule: We can solve a counting problem if we can put the objects to be counted in one-to-one correspondence with the objects of a set that we have already counted. […] We conclude this section with one more rule: Division rule: If a set of n elements can be split into m disjoint subsets, each of size k, then m = n / k.”
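
None of the code below is from the book; it is just my own quick Python sketch meant to make the counting rules above a bit more concrete. It checks the multiplication, addition, inclusion-exclusion, and division rules on the toy example of two-letter 'words' over a three-letter alphabet, an example I have invented for the purpose.

from itertools import product

# All two-letter 'words' over the alphabet {a, b, c}.
words = [''.join(p) for p in product('abc', repeat=2)]

# Multiplication rule: two stages (first letter, second letter) with three options each.
assert len(words) == 3 * 3

# Addition rule: split the set into disjoint subsets by first letter and add the counts.
by_first = {c: [w for w in words if w[0] == c] for c in 'abc'}
assert len(words) == sum(len(subset) for subset in by_first.values())

# Inclusion-exclusion principle: words starting with 'a' OR ending with 'a'.
starts_a = {w for w in words if w[0] == 'a'}
ends_a = {w for w in words if w[-1] == 'a'}
assert len(starts_a | ends_a) == len(starts_a) + len(ends_a) - len(starts_a & ends_a)

# Division rule: 9 words split into 3 disjoint subsets of size 3 each, so m = n / k = 3.
assert len(by_first) == len(words) // 3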

“Every algorithm has a running time […] this may be the time that a computer needs to carry out all the necessary calculations, or the actual number of such calculations. Each problem [also] has an input size […] the running time T usually depends on the input size n. Particularly important, because they’re the most efficient, are the polynomial-time algorithms, where the maximum running time is proportional to a power of the input size […] The collection of all polynomial-time algorithms is called P. […] In contrast, there are inefficient algorithms that don’t take polynomial time, such as the exponential-time algorithms […] At this point we introduce NP, the set of ‘non-deterministic polynomial-time problems’. These are problems for which a solution, when given, can be checked in polynomial time. Clearly P is contained in NP, since if a problem can be solved in polynomial time then a solution can certainly be checked in polynomial time – checking solutions is far easier than finding them in the first place. But are they the same? […] Few people believe that the answer is ‘yes’, but no one has been able to prove that P ≠ NP. […] a problem is NP-complete if its solution in polynomial time means that every NP problem can be solved in polynomial time. […] If there were a polynomial algorithm for just one of them, then polynomial algorithms would exist for the whole lot and P would equal NP. On the other hand, if just one of them has no polynomial algorithm, then none of the others could have a polynomial algorithm either, and P would be different from NP.”
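
The distinction between solving a problem and checking a proposed solution is perhaps easier to appreciate with a small illustration. The book contains no code, so the Python sketch below is my own; it uses the subset-sum problem, a standard NP-complete example, to contrast a brute-force solver that tries every subset with a verification step that only needs a single pass over the candidate solution.

from itertools import combinations

def solve_subset_sum(numbers, target):
    """Brute force: try every subset - exponential in len(numbers)."""
    for k in range(len(numbers) + 1):
        for subset in combinations(numbers, k):
            if sum(subset) == target:
                return subset
    return None

def check_subset_sum(numbers, target, candidate):
    """Checking a proposed solution only takes polynomial time."""
    return all(x in numbers for x in candidate) and sum(candidate) == target

numbers = [3, 34, 4, 12, 5, 2]
target = 9
solution = solve_subset_sum(numbers, target)                  # slow in general
print(solution, check_subset_sum(numbers, target, solution))  # fast to verify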

“In how many different ways can n objects be arranged? […] generally, we have the following result: Arrangements: The number of arrangements of n objects is n x (n - 1) x (n - 2) x … x 3 x 2 x 1. This number is called n factorial and is denoted by n!. […] The word permutation is used in different ways. We’ll use it to mean an ordered selection without repetition, while others may use it to mean an arrangement […] generally, we have the following rule: Ordered selections without repetition (permutations): If we select k items from a set of n objects, and if the selections are ordered and repetition is not allowed, then the number of possible selections is n x (n - 1) x (n - 2) x … x (n - k + 1). We denote this expression by P(n,k). […] Since P(n,n) = n x (n - 1) x (n - 2) x … x 3 x 2 x 1 = n!, an arrangement is a permutation for which k = n. […] generally, we have the following result: P(n,k) = n!/(n - k)!. […] unordered selections without repetition are called combinations, giving rise to the words combinatorial and combinatorics. […] generally, we have the following result: Unordered selections without repetition (combinations): If we select k items from a set of n objects, and if the selections are unordered and repetition is not allowed, then the number of possible selections is P(n,k)/k! = n x (n - 1) x (n - 2) x … x (n - k + 1)/k!. We denote this expression by C(n,k) […] Unordered selections with repetition: If we select k items from a set of n objects, and if the selections are unordered and repetition is allowed, then the number of possible selections is C(n + k - 1, k). […] Combination rule 1: For any numbers k and n with k ≤ n, C(n,k) = C(n,n-k) […] Combination rule 2: For any numbers n and k with k ≤ n, C(n,n-k) = n!/(n-k)!(n-(n-k))! = n!/(n-k)!k! = C(n,k). […] Combination rule 3: For any number n, C(n,0) + C(n,1) + C(n,2) + … + C(n,n-1) + C(n,n) = 2^n”
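
Python's standard library makes it easy to check these identities for small numbers. The sketch below is my own illustration, not the author's; it assumes Python 3.8 or later, where math.perm and math.comb are available.

from math import factorial, perm, comb

n, k = 5, 2

# P(n,k) = n!/(n-k)!  - ordered selections without repetition
assert perm(n, k) == factorial(n) // factorial(n - k) == 20

# C(n,k) = P(n,k)/k!  - unordered selections without repetition
assert comb(n, k) == perm(n, k) // factorial(k) == 10

# Combination rules 1 and 2: C(n,k) = C(n,n-k)
assert comb(n, k) == comb(n, n - k)

# Combination rule 3: C(n,0) + C(n,1) + ... + C(n,n) = 2^n
assert sum(comb(n, i) for i in range(n + 1)) == 2 ** n

# Unordered selections with repetition: C(n + k - 1, k)
assert comb(n + k - 1, k) == 15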

Links:

Tilings/Tessellation.
Knight’s tour.
Seven Bridges of Königsberg problem.
Three utilities problem.
Four color theorem.
Tarry’s algorithm (p.7) (formulated slightly differently in the book, but it’s the same algorithm).
Polyomino.
Arthur Cayley.
Combinatorial principles.
Minimum connector problem.
Travelling salesman problem.
Algorithmic efficiency. Running time/time complexity.
Boolean satisfiability problem. Cook–Levin theorem.
Combination.
Mersenne primes.
Permutation. Factorial. Stirling’s formula.
Birthday problem.
Varāhamihira.
Manhattan distance.
Fibonacci number.
Pascal’s triangle. Binomial coefficient. Binomial theorem.
Pigeonhole principle.
Venn diagram.
Derangement (combinatorial mathematics).
Tower of Hanoi.
Stable marriage problem. Transversal (combinatorics). Hall’s marriage theorem.
Generating function (the topic covered in the book more specifically is related to a symbolic generator of the subsets of a set, but a brief search yielded no good links to this particular topic – US).
Group theory.
Ferdinand Frobenius. Burnside’s lemma.

August 4, 2018 Posted by | Books, Computer science, Mathematics | 1 Comment

Words

The words below are mostly words which I encountered while reading the books Pocket oncology, Djinn Rummy, Open Sesame, and The Far Side of the World.

Hematochezia. Neuromyotonia. Anoproctitis. Travelator. Brassica. Physiatry. Clivus. Curettage. Colposcopy. Trachelectomy. Photopheresis. Myelophthisis. Apheresis. Vexilloid. Gonfalon. Eutectic. Clerisy. Frippery. Scrip. Bludge.

Illude. Empyrean. Bonzer. Vol-au-vent. Curule. Entrechat. Winceyette. Attar. Woodbine. Corolla. Rennet. Gusset. Jacquard. Antipodean. Chaplet. Thrush. Coloratura. Biryani. Caff. Scrummy.

Beatific. Forecourt. Hurtle. Freemartin. Coleoptera. Hemipode. Bespeak. Dickey. Bilbo. Hale. Grampus. Calenture. Reeve. Cribbing. Fleam. Totipalmate. Bonito. Blackstrake/Black strake. Shank. Caiman.

Chancery. Acullico. Thole. Aorist. Westing. Scorbutic. Voyol. Fribble. Terraqueous. Oviparous. Specktioneer. Aprication. Phalarope. Lough. Hoy. Reel. Trachyte. Woulding. Anthropophagy. Risorgimento.

 

August 2, 2018 Posted by | Books, Language | Leave a comment

Quotes

i. “Progress in science is often built on wrong theories that are later corrected. It is better to be wrong than to be vague.” (Freeman Dyson)

ii. “The teacher’s equipment gives him an everlasting job. His work is never done. His getting ready for this work is never quite complete.” (George Trumbull Ladd)

iii. “The crust of our earth is a great cemetery, where the rocks are tombstones on which the buried dead have written their own epitaphs.” (Louis Agassiz)

iv. “Fortunately science, like that nature to which it belongs, is neither limited by time nor by space. It belongs to the world, and is of no country and of no age. The more we know, the more we feel our ignorance […] there are always new worlds to conquer.” (Humphry Davy)

v. “Nothing is so fatal to the progress of the human mind as to suppose that our views of science are ultimate; that there are no mysteries in nature; that our triumphs are complete, and that there are no new worlds to conquer.” (-ll-)

vi. “The best way to learn Japanese is to be born as a Japanese baby, in Japan, raised by a Japanese family.” (Dave Barry)

vii. “What makes a date so dreadful is the weight of expectation attached to it. There is every chance that you may meet your soulmate, get married, have children and be buried side by side. There is an equal chance that the person you meet will look as if they’ve already been buried for some time.” (Guy Browning)

viii. “Always judge your fellow passengers to be the opposite of what they strive to appear to be. […] men never affect to be what they are, but what they are not.” (Thomas Chandler Haliburton)

ix. “Some folks can look so busy doin’ nothin’ that they seem indispensable.” (Kin Hubbard)

x. “Men are not punished for their sins, but by them.” (-ll-)

xi. “Do what we will, we always, more or less, construct our own universe. The history of science may be described as the history of the attempts, and the failures, of men “to see things as they are.”” (Matthew Moncrieff Pattison Muir)

xii. “You simply cannot invent any conspiracy theory so ridiculous and obviously satirical that some people somewhere don’t already believe it.” (Robert Anton Wilson)

xiii. “You know you are getting old when work is a lot less fun and fun is a lot more work.” (Joan Rivers)

xiv. “When I was a little boy, I used to pray every night for a new bicycle. Then I realised, the Lord, in his wisdom, doesn’t work that way. So I just stole one and asked Him to forgive me.” (Emo Philips)

xv. “I was walking down Fifth Avenue today and I found a wallet, and I was gonna keep it, rather than return it, but I thought: “Well, if I lost a hundred and fifty dollars, how would I feel?” And I realized I would want to be taught a lesson.” (-ll-)

xvi. “When I said I was going to become a comedian, they all laughed. Well, they’re not laughing now, are they?” (Robert Monkhouse)

xvii. “Things said in embarrassment and anger are seldom the truth, but are said to hurt and wound the other person. Once said, they can never be taken back.” (Lucille Ball)

xviii. “The beginning of wisdom for a programmer is to recognize the difference between getting his program to work and getting it right. A program which does not work is undoubtedly wrong; but a program which does work is not necessarily right. It may still be wrong because it is hard to understand; or because it is hard to maintain as the problem requirements change; or because its structure is different from the structure of the problem; or because we cannot be sure that it does indeed work.” (Michael Anthony Jackson)

xix. “One of the difficulties in thinking about software is its huge variety. A function definition in a spreadsheet cell is software. A smartphone app is software. The flight management system for an Airbus A380 is software. A word processor is software. We shouldn’t expect a single discipline of software engineering to cover all of these, any more than we expect a single discipline of manufacturing to cover everything from the Airbus A380 to the production of chocolate bars, or a single discipline of social organization to cover everything from the United Nations to a kindergarten. Improvement in software engineering must come bottom-up, from intense specialized attention to particular products.” (-ll-)

xx. “Let the world know you as you are, not as you think you should be, because sooner or later, if you are posing, you will forget the pose, and then where are you?” (Fanny Brice)

July 30, 2018 Posted by | Quotes/aphorisms | Leave a comment

Lyapunov Arguments in Optimization

I’d say that if you’re interested in the intersection of mathematical optimization methods/-algorithms and dynamical systems analysis it’s probably a talk well worth watching. The lecture is reasonably high-level and covers a fairly satisfactory amount of ground in a relatively short amount of time, and it is not particularly hard to follow if you have at least some passing familiarity with the fields involved (dynamical systems analysis, statistics, mathematical optimization, computer science/machine learning).

Some links:

Dynamical system.
Euler–Lagrange equation.
Continuous optimization problem.
Gradient descent algorithm.
Lyapunov stability.
Condition number.
Fast (/accelerated-) gradient descent methods.
The Mirror Descent Algorithm.
Cubic regularization of Newton method and its global performance (Nesterov & Polyak).
A Differential Equation for Modeling Nesterov’s Accelerated Gradient Method: Theory and Insights (Su, Boyd & Candès).
A Variational Perspective on Accelerated Methods in Optimization (Wibisono, Wilson & Jordan).
Breaking Locality Accelerates Block Gauss-Seidel (Tu, Venkataraman, Wilson, Gittens, Jordan & Recht).
A Lyapunov Analysis of Momentum Methods in Optimization (Wilson, Recht & Jordan).
Bregman divergence.
Estimate sequence methods.
Variance reduction techniques.
Stochastic gradient descent.
Langevin dynamics.

 

July 22, 2018 Posted by | Computer science, Lectures, Mathematics, Physics, Statistics | Leave a comment

Big Data (II)

Below I have added a few observations from the last half of the book, as well as some coverage-related links to topics of interest.

“With big data, using correlation creates […] problems. If we consider a massive dataset, algorithms can be written that, when applied, return a large number of spurious correlations that are totally independent of the views, opinions, or hypotheses of any human being. Problems arise with false correlations — for example, divorce rate and margarine consumption […]. [W]hen the number of variables becomes large, the number of spurious correlations also increases. This is one of the main problems associated with trying to extract useful information from big data, because in doing so, as with mining big data, we are usually looking for patterns and correlations. […] one of the reasons Google Flu Trends failed in its predictions was because of these problems. […] The Google Flu Trends project hinged on the known result that there is a high correlation between the number of flu-related online searches and visits to the doctor’s surgery. If a lot of people in a particular area are searching for flu-related information online, it might then be possible to predict the spread of flu cases to adjoining areas. Since the interest is in finding trends, the data can be anonymized and hence no consent from individuals is required. Using their five-year accumulation of data, which they limited to the same time-frame as the CDC data, and so collected only during the flu season, Google counted the weekly occurrence of each of the fifty million most common search queries covering all subjects. These search query counts were then compared with the CDC flu data, and those with the highest correlation were used in the flu trends model. […] The historical data provided a baseline from which to assess current flu activity on the chosen search terms and by comparing the new real-time data against this, a classification on a scale from 1 to 5, where 5 signified the most severe, was established. Used in the 2011–12 and 2012–13 US flu seasons, Google’s big data algorithm famously failed to deliver. After the flu season ended, its predictions were checked against the CDC’s actual data. […] the Google Flu Trends algorithm over-predicted the number of flu cases by at least 50 per cent during the years it was used.” [For more details on why blind/mindless hypothesis testing/p-value hunting on big data sets is usually a terrible idea, see e.g. Burnham & Anderson, US]
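
As an aside, the point about the number of spurious correlations growing with the number of variables is easy to demonstrate with a small simulation. The Python sketch below is my own illustration, not from the book: it generates variables that are pure independent noise and counts how many pairs nevertheless show a sample correlation above an arbitrary threshold; the threshold, sample size, and variable counts are all assumptions chosen purely for the demonstration.

import numpy as np

rng = np.random.default_rng(0)

def count_spurious(n_vars, n_obs=50, threshold=0.3):
    """Count variable pairs whose sample correlation exceeds the threshold,
    even though every variable is independent noise."""
    data = rng.standard_normal((n_vars, n_obs))
    corr = np.corrcoef(data)                 # rows are treated as variables
    upper = np.triu_indices(n_vars, k=1)     # each pair counted once
    return int(np.sum(np.abs(corr[upper]) > threshold))

for n_vars in (10, 100, 1000):
    print(n_vars, count_spurious(n_vars))    # the count grows rapidly with n_vars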

“The data Google used [in the Google Flu Trends algorithm], collected selectively from search engine queries, produced results [with] obvious bias […] for example by eliminating everyone who does not use a computer and everyone using other search engines. Another issue that may have led to poor results was that customers searching Google on ‘flu symptoms’ would probably have explored a number of flu-related websites, resulting in their being counted several times and thus inflating the numbers. In addition, search behaviour changes over time, especially during an epidemic, and this should be taken into account by updating the model regularly. Once errors in prediction start to occur, they tend to cascade, which is what happened with the Google Flu Trends predictions: one week’s errors were passed along to the next week. […] [Similarly,] the Ebola prediction figures published by WHO [during the West African Ebola virus epidemic] were over 50 per cent higher than the cases actually recorded. The problems with both the Google Flu Trends and Ebola analyses were similar in that the prediction algorithms used were based only on initial data and did not take into account changing conditions. Essentially, each of these models assumed that the number of cases would continue to grow at the same rate in the future as they had before the medical intervention began. Clearly, medical and public health measures could be expected to have positive effects and these had not been integrated into the model.”

“Every time a patient visits a doctor’s office or hospital, electronic data is routinely collected. Electronic health records constitute legal documentation of a patient’s healthcare contacts: details such as patient history, medications prescribed, and test results are recorded. Electronic health records may also include sensor data such as Magnetic Resonance Imaging (MRI) scans. The data may be anonymized and pooled for research purposes. It is estimated that in 2015, an average hospital in the USA will store over 600 Tb of data, most of which is unstructured. […] Typically, the human genome contains about 20,000 genes and mapping such a genome requires about 100 Gb of data. […] The interdisciplinary field of bioinformatics has flourished as a consequence of the need to manage and analyze the big data generated by genomics. […] Cloud-based systems give authorized users access to data anywhere in the world. To take just one example, the NHS plans to make patient records available via smartphone by 2018. These developments will inevitably generate more attacks on the data they employ, and considerable effort will need to be expended in the development of effective security methods to ensure the safety of that data. […] There is no absolute certainty on the Web. Since e-documents can be modified and updated without the author’s knowledge, they can easily be manipulated. This situation could be extremely damaging in many different situations, such as the possibility of someone tampering with electronic medical records. […] [S]ome of the problems facing big data systems [include] ensuring they actually work as intended, [that they] can be fixed when they break down, and [that they] are tamper-proof and accessible only to those with the correct authorization.”

“With transactions being made through sales and auction bids, eBay generates approximately 50 Tb of data a day, collected from every search, sale, and bid made on their website by a claimed 160 million active users in 190 countries. […] Amazon collects vast amounts of data including addresses, payment information, and details of everything an individual has ever looked at or bought from them. Amazon uses its data in order to encourage the customer to spend more money with them by trying to do as much of the customer’s market research as possible. In the case of books, for example, Amazon needs to provide not only a huge selection but to focus recommendations on the individual customer. […] Many customers use smartphones with GPS capability, allowing Amazon to collect data showing time and location. This substantial amount of data is used to construct customer profiles allowing similar individuals and their recommendations to be matched. Since 2013, Amazon has been selling customer metadata to advertisers in order to promote their Web services operation […] Netflix collects and uses huge amounts of data to improve customer service, such as offering recommendations to individual customers while endeavouring to provide reliable streaming of its movies. Recommendation is at the heart of the Netflix business model and most of its business is driven by the data-based recommendations it is able to offer customers. Netflix now tracks what you watch, what you browse, what you search for, and the day and time you do all these things. It also records whether you are using an iPad, TV, or something else. […] As well as collecting search data and star ratings, Netflix can now keep records on how often users pause or fast forward, and whether or not they finish watching each programme they start. They also monitor how, when, and where they watched the programme, and a host of other variables too numerous to mention.”

“Data science is becoming a popular study option in universities but graduates so far have been unable to meet the demands of commerce and industry, where positions in data science offer high salaries to experienced applicants. Big data for commercial enterprises is concerned with profit, and disillusionment will set in quickly if an over-burdened data analyst with insufficient experience fails to deliver the expected positive results. All too often, firms are asking for a one-size-fits-all model of data scientist who is expected to be competent in everything from statistical analysis to data storage and data security.”

“In December 2016, Yahoo! announced that a data breach involving over one billion user accounts had occurred in August 2013. Dubbed the biggest ever cyber theft of personal data, or at least the biggest ever divulged by any company, thieves apparently used forged cookies, which allowed them access to accounts without the need for passwords. This followed the disclosure of an attack on Yahoo! in 2014, when 500 million accounts were compromised. […] The list of big data security breaches increases almost daily. Data theft, data ransom, and data sabotage are major concerns in a data-centric world. There have been many scares regarding the security and ownership of personal digital data. Before the digital age we used to keep photos in albums and negatives were our backup. After that, we stored our photos electronically on a hard-drive in our computer. This could possibly fail and we were wise to have back-ups but at least the files were not publicly accessible. Many of us now store data in the Cloud. […] If you store all your photos in the Cloud, it’s highly unlikely with today’s sophisticated systems that you would lose them. On the other hand, if you want to delete something, maybe a photo or video, it becomes difficult to ensure all copies have been deleted. Essentially you have to rely on your provider to do this. Another important issue is controlling who has access to the photos and other data you have uploaded to the Cloud. […] although the Internet and Cloud-based computing are generally thought of as wireless, they are anything but; data is transmitted through fibre-optic cables laid under the oceans. Nearly all digital communication between continents is transmitted in this way. My email will be sent via transatlantic fibre-optic cables, even if I am using a Cloud computing service. The Cloud, an attractive buzz word, conjures up images of satellites sending data across the world, but in reality Cloud services are firmly rooted in a distributed network of data centres providing Internet access, largely through cables. Fibre-optic cables provide the fastest means of data transmission and so are generally preferable to satellites.”

Links:

Health care informatics.
Electronic health records.
European influenza surveillance network.
Overfitting.
Public Health Emergency of International Concern.
Virtual Physiological Human project.
Watson (computer).
Natural language processing.
Anthem medical data breach.
Electronic delay storage automatic calculator (EDSAC). LEO (computer). ICL (International Computers Limited).
E-commerce. Online shopping.
Pay-per-click advertising model. Google AdWords. Click fraud. Targeted advertising.
Recommender system. Collaborative filtering.
Anticipatory shipping.
BlackPOS Malware.
Data Encryption Standard algorithm. EFF DES cracker.
Advanced Encryption Standard.
Tempora. PRISM (surveillance program). Edward Snowden. WikiLeaks. Tor (anonymity network). Silk Road (marketplace). Deep web. Internet of Things.
Songdo International Business District. Smart City.
United Nations Global Pulse.

July 19, 2018 Posted by | Books, Computer science, Cryptography, Data, Engineering, Epidemiology, Statistics | Leave a comment

Developmental Biology (II)

Below I have included some quotes from the middle chapters of the book and some links related to the topic coverage. As I already pointed out earlier, this is an excellent book on these topics.

Germ cells have three key functions: the preservation of the genetic integrity of the germline; the generation of genetic diversity; and the transmission of genetic information to the next generation. In all but the simplest animals, the cells of the germline are the only cells that can give rise to a new organism. So, unlike body cells, which eventually all die, germ cells in a sense outlive the bodies that produced them. They are, therefore, very special cells […] In order that the number of chromosomes is kept constant from generation to generation, germ cells are produced by a specialized type of cell division, called meiosis, which halves the chromosome number. Unless this reduction by meiosis occurred, the number of chromosomes would double each time the egg was fertilized. Germ cells thus contain a single copy of each chromosome and are called haploid, whereas germ-cell precursor cells and the other somatic cells of the body contain two copies and are called diploid. The halving of chromosome number at meiosis means that when egg and sperm come together at fertilization, the diploid number of chromosomes is restored. […] An important property of germ cells is that they remain pluripotent—able to give rise to all the different types of cells in the body. Nevertheless, eggs and sperm in mammals have certain genes differentially switched off during germ-cell development by a process known as genomic imprinting […] Certain genes in eggs and sperm are imprinted, so that the activity of the same gene is different depending on whether it is of maternal or paternal origin. Improper imprinting can lead to developmental abnormalities in humans. At least 80 imprinted genes have been identified in mammals, and some are involved in growth control. […] A number of developmental disorders in humans are associated with imprinted genes. Infants with Prader-Willi syndrome fail to thrive and later can become extremely obese; they also show mental retardation and mental disturbances […] Angelman syndrome results in severe motor and mental retardation. Beckwith-Wiedemann syndrome is due to a generalized disruption of imprinting on a region of chromosome 7 and leads to excessive foetal overgrowth and an increased predisposition to cancer.”

“Sperm are motile cells, typically designed for activating the egg and delivering their nucleus into the egg cytoplasm. They essentially consist of a nucleus, mitochondria to provide an energy source, and a flagellum for movement. The sperm contributes virtually nothing to the organism other than its chromosomes. In mammals, sperm mitochondria are destroyed following fertilization, and so all mitochondria in the animal are of maternal origin. […] Different organisms have different ways of ensuring fertilization by only one sperm. […] Early development is similar in both male and female mammalian embryos, with sexual differences only appearing at later stages. The development of the individual as either male or female is genetically fixed at fertilization by the chromosomal content of the egg and sperm that fuse to form the fertilized egg. […] Each sperm carries either an X or Y chromosome, while the egg has an X. The genetic sex of a mammal is thus established at the moment of conception, when the sperm introduces either an X or a Y chromosome into the egg. […] In the absence of a Y chromosome, the default development of tissues is along the female pathway. […] Unlike animals, plants do not set aside germ cells in the embryo and germ cells are only specified when a flower develops. Any meristem cell can, in principle, give rise to a germ cell of either sex, and there are no sex chromosomes. The great majority of flowering plants give rise to flowers that contain both male and female sexual organs, in which meiosis occurs. The male sexual organs are the stamens; these produce pollen, which contains the male gamete nuclei corresponding to the sperm of animals. At the centre of the flower are the female sex organs, which consist of an ovary of two carpels, which contain the ovules. Each ovule contains an egg cell.”

“The character of specialized cells such as nerve, muscle, or skin is the result of a particular pattern of gene activity that determines which proteins are synthesized. There are more than 200 clearly recognizable differentiated cell types in mammals. How these particular patterns of gene activity develop is a central question in cell differentiation. Gene expression is under a complex set of controls that include the actions of transcription factors, and chemical modification of DNA. External signals play a key role in differentiation by triggering intracellular signalling pathways that affect gene expression. […] the central feature of cell differentiation is a change in gene expression, which brings about a change in the proteins in the cells. The genes expressed in a differentiated cell include not only those for a wide range of ‘housekeeping’ proteins, such as the enzymes involved in energy metabolism, but also genes encoding cell-specific proteins that characterize a fully differentiated cell: hemoglobin in red blood cells, keratin in skin epidermal cells, and muscle-specific actin and myosin protein filaments in muscle. […] several thousand different genes are active in any given cell in the embryo at any one time, though only a small number of these may be involved in specifying cell fate or differentiation. […] Cell differentiation is known to be controlled by a wide range of external signals but it is important to remember that, while these external signals are often referred to as being ‘instructive’, they are ‘selective’, in the sense that the number of developmental options open to a cell at any given time is limited. These options are set by the cell’s internal state which, in turn, reflects its developmental history. External signals cannot, for example, convert an endodermal cell into a muscle or nerve cell. Most of the molecules that act as developmentally important signals between cells during development are proteins or peptides, and their effect is usually to induce a change in gene expression. […] The same external signals can be used again and again with different effects because the cells’ histories are different. […] At least 1,000 different transcription factors are encoded in the genomes of the fly and the nematode, and as many as 3,000 in the human genome. On average, around five different transcription factors act together at a control region […] In general, it can be assumed that activation of each gene involves a unique combination of transcription factors.”

“Stem cells involve some special features in relation to differentiation. A single stem cell can divide to produce two daughter cells, one of which remains a stem cell while the other gives rise to a lineage of differentiating cells. This occurs in our skin and gut all the time and also in the production of blood cells. It also occurs in the embryo. […] Embryonic stem (ES) cells from the inner cell mass of the early mammalian embryo when the primitive streak forms, can, in culture, differentiate into a wide variety of cell types, and have potential uses in regenerative medicine. […] it is now possible to make adult body cells into stem cells, which has important implications for regenerative medicine. […] The goal of regenerative medicine is to restore the structure and function of damaged or diseased tissues. As stem cells can proliferate and differentiate into a wide range of cell types, they are strong candidates for use in cell-replacement therapy, the restoration of tissue function by the introduction of new healthy cells. […] The generation of insulin-producing pancreatic β cells from ES cells to replace those destroyed in type 1 diabetes is a prime medical target. Treatments that direct the differentiation of ES cells towards making endoderm derivatives such as pancreatic cells have been particularly difficult to find. […] The neurodegenerative Parkinson disease is another medical target. […] To generate […] stem cells of the patient’s own tissue type would be a great advantage, and the recent development of induced pluripotent stem cells (iPS cells) offers […] exciting new opportunities. […] There is [however] risk of tumour induction in patients undergoing cell-replacement therapy with ES cells or iPS cells; undifferentiated pluripotent cells introduced into the patient could cause tumours. Only stringent selection procedures that ensure no undifferentiated cells are present in the transplanted cell population will overcome this problem. And it is not yet clear how stable differentiated ES cells and iPS cells will be in the long term.”

“In general, the success rate of cloning by body-cell nuclear transfer in mammals is low, and the reasons for this are not yet well understood. […] Most cloned mammals derived from nuclear transplantation are usually abnormal in some way. The cause of failure is incomplete reprogramming of the donor nucleus to remove all the earlier modifications. A related cause of abnormality may be that the reprogrammed genes have not gone through the normal imprinting process that occurs during germ-cell development, where different genes are silenced in the male and female parents. The abnormalities in adults that do develop from cloned embryos include early death, limb deformities and hypertension in cattle, and immune impairment in mice. All these defects are thought to be due to abnormalities of gene expression that arise from the cloning process. Studies have shown that some 5% of the genes in cloned mice are not correctly expressed and that almost half of the imprinted genes are incorrectly expressed.”

“Organ development involves large numbers of genes and, because of this complexity, general principles can be quite difficult to distinguish. Nevertheless, many of the mechanisms used in organogenesis are similar to those of earlier development, and certain signals are used again and again. Pattern formation in development in a variety of organs can be specified by position information, which is specified by a gradient in some property. […] Not surprisingly, the vascular system, including blood vessels and blood cells, is among the first organ systems to develop in vertebrate embryos, so that oxygen and nutrients can be delivered to the rapidly developing tissues. The defining cell type of the vascular system is the endothelial cell, which forms the lining of the entire circulatory system, including the heart, veins, and arteries. Blood vessels are formed by endothelial cells and these vessels are then covered by connective tissue and smooth muscle cells. Arteries and veins are defined by the direction of blood flow as well as by structural and functional differences; the cells are specified as arterial or venous before they form blood vessels but they can switch identity. […] Differentiation of the vascular cells requires the growth factor VEGF (vascular endothelial growth factor) and its receptors, and VEGF stimulates their proliferation. Expression of the Vegf gene is induced by lack of oxygen and thus an active organ using up oxygen promotes its own vascularization. New blood capillaries are formed by sprouting from pre-existing blood vessels and proliferation of cells at the tip of the sprout. […] During their development, blood vessels navigate along specific paths towards their targets […]. Many solid tumours produce VEGF and other growth factors that stimulate vascular development and so promote the tumour’s growth, and blocking new vessel formation is thus a means of reducing tumour growth. […] In humans, about 1 in 100 live-born infants has some congenital heart malformation, while in utero, heart malformation leading to death of the embryo occurs in between 5 and 10% of conceptions.”

“Separation of the digits […] is due to the programmed cell death of the cells between these digits’ cartilaginous elements. The webbed feet of ducks and other waterfowl are simply the result of less cell death between the digits. […] the death of cells between the digits is essential for separating the digits. The development of the vertebrate nervous system also involves the death of large numbers of neurons.”

Links:

Budding.
Gonad.
Down Syndrome.
Fertilization. In vitro fertilisation. Preimplantation genetic diagnosis.
SRY gene.
X-inactivation. Dosage compensation.
Cellular differentiation.
MyoD.
Signal transduction. Enhancer (genetics).
Epigenetics.
Hematopoiesis. Hematopoietic stem cell transplantation. Hemoglobin. Sickle cell anemia.
Skin. Dermis. Fibroblast. Epidermis.
Skeletal muscle. Myogenesis. Myoblast.
Cloning. Dolly.
Organogenesis.
Limb development. Limb bud. Progress zone model. Apical ectodermal ridge. Polarizing region/Zone of polarizing activity. Sonic hedgehog.
Imaginal disc. Pax6. Aniridia. Neural tube.
Branching morphogenesis.
Pistil.
ABC model of flower development.

July 16, 2018 Posted by | Biology, Books, Botany, Cancer/oncology, Diabetes, Genetics, Medicine, Molecular biology, Ophthalmology | Leave a comment

Big Data (I?)

Below a few observations from the first half of the book, as well as some links related to the topic coverage.

“The data we derive from the Web can be classified as structured, unstructured, or semi-structured. […] Carefully structured and tabulated data is relatively easy to manage and is amenable to statistical analysis, indeed until recently statistical analysis methods could be applied only to structured data. In contrast, unstructured data is not so easily categorized, and includes photos, videos, tweets, and word-processing documents. Once the use of the World Wide Web became widespread, it transpired that many such potential sources of information remained inaccessible because they lacked the structure needed for existing analytical techniques to be applied. However, by identifying key features, data that appears at first sight to be unstructured may not be completely without structure. Emails, for example, contain structured metadata in the heading as well as the actual unstructured message […] and so may be classified as semi-structured data. Metadata tags, which are essentially descriptive references, can be used to add some structure to unstructured data. […] Dealing with unstructured data is challenging: since it cannot be stored in traditional databases or spreadsheets, special tools have had to be developed to extract useful information. […] Approximately 80 per cent of the world’s data is unstructured in the form of text, photos, and images, and so is not amenable to the traditional methods of structured data analysis. ‘Big data’ is now used to refer not just to the total amount of data generated and stored electronically, but also to specific datasets that are large in both size and complexity, with which new algorithmic techniques are required in order to extract useful information from them.”
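
To make the email example above a bit more concrete: the header fields are structured metadata that can be pulled out programmatically, while the message body is unstructured text. The small Python sketch below is my own illustration, using the standard library's email module on a made-up message.

from email import message_from_string

raw = """From: alice@example.com
To: bob@example.com
Subject: Meeting notes
Date: Mon, 16 Jul 2018 09:30:00 +0000

Hi Bob, here are the notes from today's meeting...
"""

msg = message_from_string(raw)
# The header fields are structured metadata...
metadata = {key: msg[key] for key in ('From', 'To', 'Subject', 'Date')}
# ...while the body is unstructured free text.
body = msg.get_payload()
print(metadata)
print(body)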

“In the digital age we are no longer entirely dependent on samples, since we can often collect all the data we need on entire populations. But the size of these increasingly large sets of data cannot alone provide a definition for the term ‘big data’ — we must include complexity in any definition. Instead of carefully constructed samples of ‘small data’ we are now dealing with huge amounts of data that has not been collected with any specific questions in mind and is often unstructured. In order to characterize the key features that make data big and move towards a definition of the term, Doug Laney, writing in 2001, proposed using the three ‘v’s: volume, variety, and velocity. […] ‘Volume’ refers to the amount of electronic data that is now collected and stored, which is growing at an ever-increasing rate. Big data is big, but how big? […] Generally, we can say the volume criterion is met if the dataset is such that we cannot collect, store, and analyse it using traditional computing and statistical methods. […] Although a great variety of data [exists], ultimately it can all be classified as structured, unstructured, or semi-structured. […] Velocity is necessarily connected with volume: the faster the data is generated, the more there is. […] Velocity also refers to the speed at which data is electronically processed. For example, sensor data, such as that generated by an autonomous car, is necessarily generated in real time. If the car is to work reliably, the data […] must be analysed very quickly […] Variability may be considered as an additional dimension of the velocity concept, referring to the changing rates in flow of data […] computer systems are more prone to failure [during peak flow periods]. […] As well as the original three ‘v’s suggested by Laney, we may add ‘veracity’ as a fourth. Veracity refers to the quality of the data being collected. […] Taken together, the four main characteristics of big data – volume, variety, velocity, and veracity – present a considerable challenge in data management.” [As regular readers of this blog might be aware, not everybody would agree with the author here about the inclusion of veracity as a defining feature of big data – “Many have suggested that there are more V’s that are important to the big data problem [than volume, variety & velocity] such as veracity and value (IEEE BigData 2013). Veracity refers to the trustworthiness of the data, and value refers to the value that the data adds to creating knowledge about a topic or situation. While we agree that these are important data characteristics, we do not see these as key features that distinguish big data from regular data. It is important to evaluate the veracity and value of all data, both big and small. (Knoth & Schmid)]

“Anyone who uses a personal computer, laptop, or smartphone accesses data stored in a database. Structured data, such as bank statements and electronic address books, are stored in a relational database. In order to manage all this structured data, a relational database management system (RDBMS) is used to create, maintain, access, and manipulate the data. […] Once […] the database [has been] constructed we can populate it with data and interrogate it using structured query language (SQL). […] An important aspect of relational database design involves a process called normalization which includes reducing data duplication to a minimum and hence reduces storage requirements. This allows speedier queries, but even so as the volume of data increases the performance of these traditional databases decreases. The problem is one of scalability. Since relational databases are essentially designed to run on just one server, as more and more data is added they become slow and unreliable. The only way to achieve scalability is to add more computing power, which has its limits. This is known as vertical scalability. So although structured data is usually stored and managed in an RDBMS, when the data is big, say in terabytes or petabytes and beyond, the RDBMS no longer works efficiently, even for structured data. An important feature of relational databases and a good reason for continuing to use them is that they conform to the following group of properties: atomicity, consistency, isolation, and durability, usually known as ACID. Atomicity ensures that incomplete transactions cannot update the database; consistency excludes invalid data; isolation ensures one transaction does not interfere with another transaction; and durability means that the database must update before the next transaction is carried out. All these are desirable properties but storing and accessing big data, which is mostly unstructured, requires a different approach. […] given the current data explosion there has been intensive research into new storage and management techniques. In order to store these massive datasets, data is distributed across servers. As the number of servers involved increases, the chance of failure at some point also increases, so it is important to have multiple, reliably identical copies of the same data, each stored on a different server. Indeed, with the massive amounts of data now being processed, systems failure is taken as inevitable and so ways of coping with this are built into the methods of storage.”
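
As a toy illustration of the relational setup described above (my own example, not the author's), the sketch below uses Python's built-in sqlite3 module to create a small table, populate it, and interrogate it with SQL; the 'customers' table and its contents are invented for the purpose.

import sqlite3

conn = sqlite3.connect(':memory:')        # a throwaway in-memory database
cur = conn.cursor()

# Structured data: fixed columns with fixed types.
cur.execute('CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)')
cur.executemany('INSERT INTO customers (name, city) VALUES (?, ?)',
                [('Alice', 'London'), ('Bob', 'Leeds'), ('Carol', 'London')])

# Interrogate the database using SQL.
cur.execute('SELECT name FROM customers WHERE city = ? ORDER BY name', ('London',))
print(cur.fetchall())                     # [('Alice',), ('Carol',)]

conn.close()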

“A distributed file system (DFS) provides effective and reliable storage for big data across many computers. […] Hadoop DFS [is] one of the most popular DFS […] When we use Hadoop DFS, the data is distributed across many nodes, often tens of thousands of them, physically situated in data centres around the world. […] The NameNode deals with all requests coming in from a client computer; it distributes storage space, and keeps track of storage availability and data location. It also manages all the basic file operations (e.g. opening and closing files) and controls data access by client computers. The DataNodes are responsible for actually storing the data and in order to do so, create, delete, and replicate blocks as necessary. Data replication is an essential feature of the Hadoop DFS. […] It is important that several copies of each block are stored so that if a DataNode fails, other nodes are able to take over and continue with processing tasks without loss of data. […] Data is written to a DataNode only once but will be read by an application many times. […] One of the functions of the NameNode is to determine the best DataNode to use given the current usage, ensuring fast data access and processing. The client computer then accesses the data block from the chosen node. DataNodes are added as and when required by the increased storage requirements, a feature known as horizontal scalability. One of the main advantages of Hadoop DFS over a relational database is that you can collect vast amounts of data, keep adding to it, and, at that time, not yet have any clear idea of what you want to use it for. […] structured data with identifiable rows and columns can be easily stored in a RDBMS while unstructured data can be stored cheaply and readily using a DFS.”
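
The basic idea of splitting a file into blocks, replicating each block across several data nodes, and keeping track of the placements in a name node can be sketched in a few lines of Python. The sketch below is a toy model of my own, not Hadoop's actual implementation, and every name in it (store_file, read_block, the node names, the block size) is invented for illustration.

import random

random.seed(0)
BLOCK_SIZE = 4           # bytes per block; real systems use e.g. 128 MB
REPLICATION = 3          # copies kept of each block

data_nodes = {f'node{i}': {} for i in range(6)}   # node name -> {block_id: bytes}
name_node = {}                                    # block_id -> list of node names

def store_file(name, content):
    """Split a file into blocks and place each block on REPLICATION nodes."""
    blocks = [content[i:i + BLOCK_SIZE] for i in range(0, len(content), BLOCK_SIZE)]
    for idx, block in enumerate(blocks):
        block_id = f'{name}#{idx}'
        chosen = random.sample(list(data_nodes), REPLICATION)
        for node in chosen:
            data_nodes[node][block_id] = block
        name_node[block_id] = chosen

def read_block(block_id, failed=()):
    """Read a block from any surviving replica."""
    for node in name_node[block_id]:
        if node not in failed:
            return data_nodes[node][block_id]
    raise IOError('all replicas lost')

store_file('notes.txt', b'big data is mostly plumbing')
print(read_block('notes.txt#0', failed={'node0', 'node1'}))   # still readable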

“NoSQL is the generic name used to refer to non-relational databases and stands for Not only SQL. […] The non-relational model has some features that are necessary in the management of big data, namely scalability, availability, and performance. With a relational database you cannot keep scaling vertically without loss of function, whereas with NoSQL you scale horizontally and this enables performance to be maintained. […] Within the context of a distributed database system, consistency refers to the requirement that all copies of data should be the same across nodes. […] Availability requires that if a node fails, other nodes still function […] Data, and hence DataNodes, are distributed across physically separate servers and communication between these machines will sometimes fail. When this occurs it is called a network partition. Partition tolerance requires that the system continues to operate even if this happens. In essence, what the CAP [Consistency, Availability, Partition Tolerance] Theorem states is that for any distributed computer system, where the data is shared, only two of these three criteria can be met. There are therefore three possibilities; the system must be: consistent and available, consistent and partition tolerant, or partition tolerant and available. Notice that since in an RDBMS the network is not partitioned, only consistency and availability would be of concern and the RDBMS model meets both of these criteria. In NoSQL, since we necessarily have partitioning, we have to choose between consistency and availability. By sacrificing availability, we are able to wait until consistency is achieved. If we choose instead to sacrifice consistency it follows that sometimes the data will differ from server to server. The somewhat contrived acronym BASE (Basically Available, Soft, and Eventually consistent) is used as a convenient way of describing this situation. BASE appears to have been chosen in contrast to the ACID properties of relational databases. ‘Soft’ in this context refers to the flexibility in the consistency requirement. The aim is not to abandon any one of these criteria but to find a way of optimizing all three, essentially a compromise. […] The name NoSQL derives from the fact that SQL cannot be used to query these databases. […] There are four main types of non-relational or NoSQL database: key-value, column-based, document, and graph – all useful for storing large amounts of structured and semi-structured data. […] Currently, an approach called NewSQL is finding a niche. […] the aim of this latent technology is to solve the scalability problems associated with the relational model, making it more useable for big data.”
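
The availability-versus-consistency trade-off can likewise be illustrated with a toy example of my own (again not from the book, and highly simplified): two replicas of a key-value store that accept writes immediately and only synchronize later are 'eventually consistent', so reads in the meantime may return stale values.

class Replica:
    """A toy key-value store node."""
    def __init__(self):
        self.data = {}
    def put(self, key, value):
        self.data[key] = value      # accept the write immediately (available)
    def get(self, key):
        return self.data.get(key)

a, b = Replica(), Replica()

a.put('user:42:city', 'London')     # write handled by replica a
print(b.get('user:42:city'))        # None - replica b has not heard about it yet

# Later, a synchronization step propagates the write:
b.data.update(a.data)
print(b.get('user:42:city'))        # 'London' - eventually consistent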

“A popular way of dealing with big data is to divide it up into small chunks and then process each of these individually, which is basically what MapReduce does by spreading the required calculations or queries over many, many computers. […] Bloom filters are particularly suited to applications where storage is an issue and where the data can be thought of as a list. The basic idea behind Bloom filters is that we want to build a system, based on a list of data elements, to answer the question ‘Is X in the list?’ With big datasets, searching through the entire set may be too slow to be useful, so we use a Bloom filter which, being a probabilistic method, is not 100 per cent accurate—the algorithm may decide that an element belongs to the list when actually it does not; but it is a fast, reliable, and storage efficient method of extracting useful knowledge from data. Bloom filters have many applications. For example, they can be used to check whether a particular Web address leads to a malicious website. In this case, the Bloom filter would act as a blacklist of known malicious URLs against which it is possible to check, quickly and accurately, whether it is likely that the one you have just clicked on is safe or not. Web addresses newly found to be malicious can be added to the blacklist. […] A related example is that of malicious email messages, which may be spam or may contain phishing attempts. A Bloom filter provides us with a quick way of checking each email address and hence we would be able to issue a timely warning if appropriate. […] they can [also] provide a very useful way of detecting fraudulent credit card transactions.”
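
Since the Bloom filter is easy to state in code, here is a short Python sketch of the idea described above: a bit array plus a handful of hash functions, answering ‘is X in the list?’ with possible false positives but never false negatives. The sizes and the use of salted SHA-256 digests as the hash functions are my own illustrative choices, not something prescribed by the book, and the URLs are made up.

```python
# Minimal Bloom filter: k hashed positions per item are set in a bit array;
# membership is "probably yes" if all k bits are set, "definitely no" otherwise.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [0] * size_bits

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # False means "definitely not in the list"; True means "probably in it".
        return all(self.bits[pos] for pos in self._positions(item))

# The blacklist example from the text (hypothetical URLs):
blacklist = BloomFilter()
blacklist.add("http://malicious.example/login")
print(blacklist.might_contain("http://malicious.example/login"))  # True
print(blacklist.might_contain("https://en.wikipedia.org"))        # almost certainly False
```

In practice the bit-array size and number of hash functions are chosen from the expected number of items and the false-positive rate one is willing to tolerate; the point is that the filter needs far less storage than the list itself.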

Links:

Data.
Punched card.
Clickstream log.
HTTP cookie.
Australian Square Kilometre Array Pathfinder.
The Millionaire Calculator.
Data mining.
Supervised machine learning.
Unsupervised machine learning.
Statistical classification.
Cluster analysis.
Moore’s Law.
Cloud storage. Cloud computing.
Data compression. Lossless data compression. Lossy data compression.
ASCII. Huffman algorithm. Variable-length encoding.
Data compression ratio.
Grayscale.
Discrete cosine transform.
JPEG.
Bit array. Hash function.
PageRank algorithm.
Common crawl.

July 14, 2018 Posted by | Books, Computer science, Data, Statistics | Leave a comment

Quotes

i. “I only study the things I like; I apply my mind only to matters that interest me. They’ll be useful — or useless — to me or to others in due course, I’ll be given — or not given — the opportunity of benefiting from what I’ve learned. In any case, I’ll have enjoyed the inestimable advantage of doing things I like doing and following my own inclinations.” (Nicolas Chamfort)

ii. “Every day I add to the list of things I refuse to discuss. The wiser the man, the longer the list.” (-ll-)

iii. “There are more fools than wise men, and even in a wise man there is more folly than wisdom.” (-ll-)

iv. “People are always annoyed by men of letters who retreat from the world; they expect them to continue to show interest in society even though they gain little benefit from it. They would like to force them to be present when lots are being drawn in a lottery for which they have no tickets.” (-ll-)

v. “Eminence without merit earns deference without esteem.” (-ll-)

vi. “Not everyone is worth listening to.” (Alain de Botton)

vii. “Innovation comes from those who see things that others don’t.” (Steve Blank)

viii. “Writing improves in direct ratio to the number of things we can keep out of it that shouldn’t be there.” (William Zinsser)

ix. “Good approximations often lead to better ones.” (George Pólya)

x. “Children have to be educated, but they have also to be left to educate themselves.” (Ernest Dimnet)

xi. “Intellectual brilliance is no guarantee against being dead wrong.” (David Fasold)

xii. “Doubt is the beginning of wisdom. It means caution, independence, honesty and veracity. […] The man who never doubts never thinks.” (George William Foote)

xiii. “The idea that all problems either have a solution or can be shown to be pseudo-problems is not one I share.” (Raymond Geuss)

xiv. “Asking what the question is, and why the question is asked, is always asking a pertinent question.” (-ll-)

xv. “In many of the cases of conceptual innovation, … creating the conceptual tools is a precondition to coming to a clear understanding of what the problem was in the first place. It is very difficult to describe the transition after it has taken place because it is difficult for us to put ourselves back into the situation of confusion, indeterminacy, and perplexity that existed before the new “tool” brought clarity and this means it is difficult for us to retain a vivid sense of what a difference having the concept made.” (-ll-)

xvi. “I’m not a mathematician, but I’ve been hanging around with some of them long enough to know how the game is played.” (Brian Hayes)

xvii. “None is so deaf as those that will not hear.” (Matthew Henry)

xviii. “People who habitually speak positively of others tend to do so in all circumstances. Those who tend to criticize others in your presence and recruit you to agree with their cutting remarks will probably criticize you when you are out of the room.” (John Hoover (consultant))

xix. “People don’t learn much about themselves or others while they’re succeeding in spite of poor practices. When the real outcomes reflect the real work being done, the real learning begins.” (-ll-)

xx. “Respect yourself, if you want others to respect you.” (Adolph Freiherr Knigge)

July 12, 2018 Posted by | Quotes/aphorisms | Leave a comment

American Naval History (II)

I have added some observations and links related to the second half of the book’s coverage below.

“The revival of the U.S. Navy in the last two decades of the nineteenth century resulted from a variety of circumstances. The most immediate was the simple fact that the several dozen ships retained from the Civil War were getting so old that they had become antiques. […] In 1883 therefore Congress authorized the construction of three new cruisers and one dispatch vessel, its first important naval appropriation since Appomattox. […] By 1896 […] five […] new battleships had been completed and launched, and a sixth (the Iowa) joined them a year later. None of these ships had been built to meet a perceived crisis or a national emergency. Instead the United States had finally embraced the navalist argument that a mature nation-state required a naval force of the first rank. Soon enough circumstances would offer an opportunity to test both the ships and the theory. […] the United States declared war against Spain on April 25, 1898. […] Active hostilities lasted barely six months and were punctuated by two entirely one-sided naval engagements […] With the peace treaty signed in Paris in December 1898, Spain granted Cuba its independence, though the United States assumed significant authority on the island and in 1903 negotiated a lease that gave the U.S. Navy control of Guantánamo Bay on Cuba’s south coast. Spain also ceded the Philippines, Puerto Rico, Guam, and Wake Island to the United States, which paid Spain $20 million for them. Separately but simultaneously the annexation of the Kingdom of Hawaii, along with the previous annexation of Midway, gave the United States a series of Pacific Ocean stepping stones, each a potential refueling stop, that led from Hawaii to Midway, to Wake, to Guam, and to the Philippines. It made the United States not merely a continental power but a global power. […] between 1906 and 1908, no fewer than thirteen new battleships joined the fleet.”

“At root submarine warfare in the twentieth century was simply a more technologically advanced form of commerce raiding. In its objective it resembled both privateering during the American Revolution and the voyages of the CSS Alabama and other raiders during the Civil War. Yet somehow striking unarmed merchant ships from the depths, often without warning, seemed particularly heinous. Just as the use of underwater mines in the Civil War had horrified contemporaries before their use became routine, the employment of submarines against merchant shipping shocked public sentiment in the early months of World War I. […] American submarines accounted for 55 percent of all Japanese ship losses in the Pacific theater of World War II”.

“By late 1942 the first products of the Two-Ocean Navy Act of 1940 began to join the fleet. Whereas in June 1942, the United States had been hard-pressed to assemble three aircraft carriers for the Battle of Midway, a year later twenty-four new Essex-class aircraft carriers joined the fleet, each of them displacing more than 30,000 tons and carrying ninety to one hundred aircraft. Soon afterward nine more Independence-class carriers joined the fleet. […] U.S. shipyards also turned out an unprecedented number of cruisers, destroyers, and destroyer escorts, plus more than 2,700 Liberty Ships—the essential transport and cargo vessels of the war—as well as thousands of specialized landing ships essential to amphibious operations. In 1943 alone American shipyards turned out more than eight hundred of the large LSTs and LCIs, plus more than eight thousand of the smaller landing craft known as Higgins boats […] In the three weeks after D-Day, Allied landing ships and transports put more than 300,000 men, fifty thousand vehicles, and 150,000 tons of supplies ashore on Omaha Beach alone. By the first week of July the Allies had more than a million fully equipped soldiers ashore ready to break out of their enclave in Normandy and Brittany […] Having entered World War II with eleven active battleships and seven aircraft carriers, the U.S. Navy ended the war with 120 battleships and cruisers and nearly one hundred aircraft carriers (including escort carriers). Counting the smaller landing craft, the U.S. Navy listed an astonishing sixty-five thousand vessels on its register of warships and had more than four million men and women in uniform. It was more than twice as large as all the rest of the navies of the world combined. […] In the eighteen months after the end of the war, the navy processed out 3.5 million officers and enlisted personnel who returned to civilian life and their families, going back to work or attending college on the new G.I. Bill. In addition thousands of ships were scrapped or mothballed, assigned to what was designated as the National Defense Reserve Fleet and tied up in long rows at navy yards from California to Virginia. Though the navy boasted only about a thousand ships on active service by the end of 1946, that was still more than twice as many as before the war.”

“The Korean War ended in a stalemate, yet American forces, supported by troops from South Korea and other United Nations members, succeeded in repelling the first cross-border invasion by communist forces during the Cold War. That encouraged American lawmakers to continue support of a robust peacetime navy, and of military forces generally. Whereas U.S. military spending in 1950 had totaled $141 billion, for the rest of the 1950s it averaged over $350 billion per year. […] The overall architecture of American and Soviet rivalry influenced, and even defined, virtually every aspect of American foreign and defense policy in the Cold War years. Even when the issue at hand had little to do with the Soviet Union, every political and military dispute from 1949 onward was likely to be viewed through the prism of how it affected the East-West balance of power. […] For forty years the United States and the U.S. Navy had centered all of its attention on the rivalry with the Soviet Union. All planning for defense budgets, for force structure, and for the design of weapons systems grew out of assessments of the Soviet threat. The dissolution of the Soviet Union therefore compelled navy planners to revisit almost all of their assumptions. It did not erase the need for a global U.S. Navy, for even as the Soviet Union was collapsing, events in the Middle East and elsewhere provoked serial crises that led to the dispatch of U.S. naval combat groups to a variety of hot spots around the world. On the other hand, these new threats were so different from those of the Cold War era that the sophisticated weaponry the United States had developed to deter and, if necessary, defeat the Soviet Union did not necessarily meet the needs of what President George H. W. Bush called “a new world order.”

“The official roster of U.S. Navy warships in 2014 listed 283 “battle force ships” on active service. While that is fewer than at any time since World War I, those ships possess more capability and firepower than the rest of the world’s navies combined. […] For the present, […] as well as for the foreseeable future, the U.S. Navy remains supreme on the oceans of the world.”

Links:

USS Ticonderoga (1862).
Virginius Affair.
ABCD ships.
Stephen Luce. Naval War College.
USS Maine. USS Texas. USS Indiana (BB-1). USS Massachusetts (BB-2). USS Oregon (BB-3). USS Iowa (BB-4).
Benjamin Franklin Tracy.
Alfred Thayer Mahan. The Influence of Sea Power upon History: 1660–1783.
George Dewey.
William T. Sampson.
Great White Fleet.
USS Maine (BB-10). USS Missouri (BB-11). USS New Hampshire (BB-25).
HMS Dreadnought (1906). Dreadnought. Pre-dreadnought battleship.
Hay–Herrán Treaty. United States construction of the Panama canal, 1904–1914.
Bradley A. Fiske.
William S. Benson. Chief of Naval Operations.
RMS Lusitania. Unrestricted submarine warfare.
Battle of Jutland. Naval Act of 1916 (‘Big Navy Act of 1916’).
William Sims.
Sacred Twenty. WAVES.
Washington Naval Treaty. ‘Treaty cruisers‘.
Aircraft carrier. USS Lexington (CV-2). USS Saratoga (CV-3).
War Plan Orange.
Carl Vinson. Naval Act of 1938.
Lend-Lease.
Battle of the Coral Sea. Battle of Midway.
Ironbottom Sound.
Battle of the Atlantic. Wolfpack (naval tactic).
Operation Torch.
Pacific Ocean theater of World War II. Battle of Leyte Gulf.
Operation Overlord. Operation Neptune. Alan Goodrich Kirk. Bertram Ramsay.
Battle of Iwo Jima. Battle of Okinawa.
Cold War. Revolt of the Admirals.
USS Nautilus. SSBN. USS George Washington.
Ohio-class submarine.
UGM-27 Polaris. UGM-73 Poseidon. UGM-96 Trident I.
Korean War. Battle of Inchon.
United States Sixth Fleet.
Cuban Missile Crisis.
Vietnam War. USS Maddox. Gulf of Tonkin Resolution. Operation Market Time. Patrol Craft Fast. Patrol Boat, River. Operation Game Warden.
Elmo Zumwalt. ‘Z-grams’.
USS Cole bombing.
Operation Praying Mantis.
Gulf War.
Combined Task Force 150.
United States Navy SEALs.
USS Zumwalt.

July 12, 2018 Posted by | Books, History, Wikipedia | Leave a comment