Sunday, May 19, 2019

Clinical and Translational Oncology

Correction to: SEOM Clinical Guideline for treatment of muscle-invasive and metastatic urothelial bladder cancer (2016)

Due to a technical issue, the family name of the author was rendered incorrectly in the original publication.



Omitting the lower neck and sparing the glottic larynx in node-negative nasopharyngeal carcinoma was safe and feasible, and improved patient-reported voice outcomes

Abstract

Background

Worsening voice and speech quality is frequently reported by head-and-neck cancer patients after radiotherapy to the neck; omitting the lower neck and sparing the glottic larynx in node-negative nasopharyngeal carcinoma (NPC) patients might be safe and feasible, and improve voice and speech outcomes.

Methods

From January 2009 to January 2013, 71 patients were analyzed. All patients received bilateral neck irradiation. In upper group (UG) patients the glottic larynx was spared, while in lower group (LG) patients it was not. Voice and speech quality were evaluated at two time-points (T1 and T2) using the Communication Domain of the Head and Neck Quality of Life (HNQOL) instrument and the Speech question of the University of Washington Quality of Life instrument.

Results

At a median follow-up time of 32 months (T1), 71.6% of patients reported worsened voice and speech quality. UG treatment resulted in significant decreases in glottic larynx dose. At a median follow-up time of 71 months (T2), no patient had experienced out-of-field nodal recurrence, and there was no difference in 5-year overall survival or nodal recurrence-free survival between the two groups (P = 0.235 and 0.750, respectively). At T1, among patients without concurrent chemotherapy (CCT), UG patients showed significantly better patient-reported voice quality (P = 0.022). UG patients without CCT also showed higher scores in the HNQOL communication and pain domains (P = 0.012 and P = 0.019).

Conclusions

For node-negative NPC patients, omitting the lower neck and sparing the glottic larynx was safe and feasible, and better voice outcomes were achieved in patients without CCT. Further prospective longitudinal studies to investigate whether this approach would be beneficial to node-negative patients are warranted.



Laparoscopy-assisted total colorectal resection for the treatment of familial adenomatous polyposis (FAP)

Abstract

Objective

To evaluate the safety and value of laparoscopy-assisted total colorectal resection for the treatment of familial adenomatous polyposis (FAP).

Methods

From March 2010 to June 2015, 38 cases were retrospectively analyzed and divided into two groups: 17 cases underwent laparoscopy-assisted total colorectal resection and 21 underwent conventional laparotomy. Clinical data were collected, and safety and prognosis were assessed.

Results

All 17 laparoscopy-assisted total colorectal resections were completed successfully, with no conversion to laparotomy and no intraoperative complications. There was no significant difference in operation time between the two groups. There were significant differences in blood loss, length of incision, postoperative recovery time of intestinal function, and postoperative hospital stay between the two groups (P < 0.05): trauma was less in the laparoscopy group and recovery was faster. There was no significant difference in complications between the two groups. In addition, there was no recurrence, distant metastasis, or death during the follow-up period of 6 to 56 months.

Conclusion

Laparoscopy-assisted total colorectal resection is safe and feasible, minimally invasive, and allows faster recovery.



Active surveillance as a successful management strategy for patients with clinical stage I germ cell testicular cancer

Abstract

Background

Cancer-specific survival for patients with clinical stage I (CSI) germ cell testicular cancer (GCTC) is outstanding after inguinal orchidectomy regardless of the treatment utilized. This study evaluated whether active surveillance (AS) of such patients yielded similar health outcomes to other therapeutic strategies such as adjuvant chemotherapy, radiotherapy, or primary retroperitoneal lymphadenectomy as described in the literature.

Patients and methods

Patients with CSI GCTC were screened between January 2012 and December 2016. All had previously undergone inguinal orchidectomy as the primary treatment and had chosen AS as their preferred management strategy after receiving information about all available strategies.

Results

Out of 91 patients screened, 82 patients selected AS as their preferred management strategy. Relapse rate in the overall population was 20% (95% CI 12–30) and median time to relapse was 11.5 months (range 1.0–35.0). In patients with seminomatous tumors, relapse rate decreased to 13% and median time to relapse was 13 months; whereas in patients with non-seminomatous tumors, relapse rate was 33% (IA) or 29% (IB) and median time to relapse was 12 months in stage IA and 4.5 months in stage IB patients. All relapses were rescued with three or four cycles of chemotherapy and two also required a retroperitoneal lymphadenectomy. All patients are currently alive and free of disease.

Conclusions

The clinical outcomes of patients with CSI GCTC managed by AS in this series were excellent. This strategy limited the administration of active treatments specifically to the minority of patients who relapsed, without compromising outcomes.



Improvement of appropriate pharmacological prophylaxis in hospitalised cancer patients with a multiscreen e-alert system: a single-centre experience

Abstract

Purpose

Thromboprophylaxis use among medical inpatients, including cancer patients, is suboptimal. We aimed to evaluate the impact of a novel multiscreen version (v2.0) of an e-alert system for venous thromboembolism (VTE) prevention in hospitalised cancer medical patients compared to the original software.

Methods

Prospective study including 989 consecutive adult cancer patients at high risk of VTE. Patients were followed up for 30 days post-discharge. Two study periods were defined according to the software version in operation.

Results

E-alert v2.0 was associated with an increase in the use of low-molecular-weight heparin (LMWH) prophylaxis (65.5% vs. 72.0%); risk difference 0.064 (95% CI 0.0043–0.12). Only 16% of patients in whom LMWH prophylaxis was not prescribed lacked a contraindication. No significant differences in the rates of VTE (2.9% vs. 3.2%) and major bleeding (2.7% vs. 4.0%) were observed.

Conclusions

E-alert v2.0 further increased the use of appropriate thromboprophylaxis in hospitalised cancer patients, although it was not associated with a reduction in VTE incidence.
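
As a rough check on the effect size reported above, the risk difference and its 95% CI can be approximately reproduced from the published proportions. Below is a minimal Python sketch using a Wald interval for the difference of two independent proportions; the abstract does not give per-period group sizes, so a roughly even split of the 989 patients is assumed here purely for illustration.

```python
from math import sqrt

def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions."""
    diff = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# 65.5% vs. 72.0% LMWH use; the group sizes are assumed, not reported above.
diff, lo, hi = risk_difference_ci(0.655, 494, 0.720, 495)
print(f"risk difference {diff:.3f} (95% CI {lo:.4f} to {hi:.2f})")
# Prints ~0.065 (0.0074 to 0.12), close to the reported 0.064 (0.0043-0.12).
```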



The dual effect of morphine on tumor development

Abstract

Morphine is a classic opioid drug used for reducing pain and is commonly prescribed as an effective drug to control cancer pain. Morphine acts directly on the central nervous system to relieve pain, but because of its peripheral functions it also has side effects, such as nausea, constipation, and addiction (Gupta et al. in Sci World J 2015:10, 2015). In addition to its analgesic effect, the role of morphine in tumor development is an important question that has been investigated for many years with conflicting results. Numerous studies suggest that morphine has a role in both promoting and inhibiting tumor growth. In this extensive review, we attempt to comprehensively assess the effects of morphine and summarize both its positive and negative influences on various aspects of tumors, including tumor growth, angiogenesis, metastasis, inflammation, and immunomodulation.



Cancer immunotherapy of patients with HIV infection

Abstract

Cancer immunotherapy with antibodies against immune checkpoints has made impressive advances in the last several years. The most relevant drugs target programmed cell death 1 (PD-1) expressed on T cells or its ligand, the programmed cell death ligand 1 (PD-L1), expressed on cancer cells, and cytotoxic T lymphocyte-associated protein 4 (CTLA-4). Unfortunately, cancer patients with HIV infection are usually excluded from cancer clinical trials, because there are concerns about the safety and the anti-tumoral activity of these novel therapies in patients with HIV infection. Several retrospective studies and some case reports now support the notion that antibodies against immune checkpoints are safe and active in cancer patients with HIV infection, but prospective data in these patients are lacking. In addition, signs of antiviral activity with increase in CD4 T cell counts, plasma viremia reduction or decrease in the viral reservoir have been reported in some of the patients treated, although no patient achieved a complete clearance of the viral reservoir. Here we briefly summarize all clinical cases reported in the literature, as well as ongoing clinical trials testing novel immunotherapy drugs in cancer patients with HIV infection.



The protective effects of melatonin on blood cell counts of rectal cancer patients following radio-chemotherapy: a randomized controlled trial

Abstract

Purpose

We aimed to examine the radioprotective effects of melatonin on the blood cell counts of patients with rectal cancer undergoing radiotherapy.

Materials and methods

This double-blind, placebo-controlled study was conducted on 60 rectal cancer patients referred to Rajaii Hospital of Babolsar, Iran. Patients were randomly assigned in equal numbers to a control group receiving placebo or a study group receiving 20 mg melatonin daily as the intervention. Melatonin was administered 5 days a week for 28 days. Blood samples were taken before the first melatonin dose on day 1 and again on day 28; to measure the changes in blood cell counts representing our primary outcomes, the samples were analyzed with a Sysmex K810i auto-analyzer.

Results

The reductions in platelet, white blood cell, lymphocyte, and neutrophil counts induced by radiotherapy were smaller, or even non-significant, in melatonin recipients compared with controls. However, the difference in red blood cell counts between the two groups was not significant.

Conclusion

Our results indicate that melatonin could prevent or minimize the unfavorable effects of radiotherapy on blood cell counts by attenuating the adverse influence of radiation, probably through stimulation of cellular antioxidant potential, as previously reported in animal models.

Iranian Registry of Clinical Trials (IRCT)

Registry No. IRCT2016021626586N1.



Oncologic outcomes of nephron-sparing surgery in patients with T1 multifocal renal cell carcinoma

Abstract

Objective

This study was performed to explore the pathological characteristics and oncologic outcomes of T1 multifocal renal cell carcinoma (RCC).

Methods

The clinical data of 600 patients (442 males and 158 females) aged 29 to 73 years and diagnosed with T1 RCC were collected from three hospitals in China; 421 underwent nephron-sparing surgery (NSS) and 179 underwent radical nephrectomy (RN) between December 2010 and January 2015.

Results

Multifocal tumors were identified in 32 patients (5.33%), of whom 21 underwent NSS and 11 underwent RN; histology comprised 21 clear cell tumors, 8 papillary tumors, 1 chromophobe tumor, and 2 Xp11.2 translocation RCCs. Among the 568 cases of monofocal tumors, 400 patients underwent NSS and the remaining 168 underwent RN. After a median follow-up of 5 years, 13 of the patients who had undergone NSS developed recurrent tumors: 11 with monofocal tumors and 2 with multifocal tumors containing satellite tumor nodules (p = 0.13). Of the 32 individuals with multifocal RCC, 4 died of cancer: 2 in the NSS group and 2 in the RN group, for an estimated cancer-specific survival of 90.48% for NSS and 81.82% for RN (p = 0.48).

Conclusion

The findings from the study suggested that there were pathological differences in multifocal renal tumors, and that papillary carcinoma may be more common than clear cell carcinoma. The recurrence rate and survival rate of multifocal RCC were similar to monofocal tumors. Tumor recurrence may be related to satellite tumor nodules, which can only be detected once surgery is performed.



Metronomic oral vinorelbine for the treatment of advanced non-small cell lung cancer: a multicenter international retrospective analysis

Abstract

Purpose

Metronomic oral vinorelbine (MOV) could be a treatment option for unfit patients with advanced non-small cell lung cancer (NSCLC) based on its safety profile and high patient compliance.

Methods

We retrospectively collected data on 270 patients [median age 76 (range 48–92) years, M/F 204/66, PS 0 (n = 27)/1 (n = 110)/≥ 2 (n = 133), median of 3 serious comorbidities] with stage IIIB–IV NSCLC treated with MOV as first- (T1) (67%), second- (T2) (19%), or subsequent-line (T3) (14%) therapy. Schedules consisted of vinorelbine 50 mg (n = 138), 40 mg (n = 68), or 30 mg (n = 64) three times a week continuously.

Results

Patients received a median of 6 (range 1–25) cycles, with a total of 1253 cycles delivered. The overall response rate was 17.8%, with 46 partial and 2 complete responses, and 119 patients (44.1%) experienced stable disease > 12 weeks, for an overall disease control rate of 61.9%. Median overall time to progression was 5 (range 1–21) months [T1 7 (1–21), T2 5.5 (1–19), and T3 4 (1–19) months] and median overall survival was 9 (range 1–36) months [T1 10 (1–31), T2 8 (1–36), and T3 6.5 (2–29) months]. Treatment was extremely well tolerated, with 2% (25/1253) of cycles showing G3/4 toxicity (mainly G3 fatigue and anemia) and no toxic deaths. The longest OS, 14 (range 7–36) months, was observed in a subset of squamous NSCLC patients receiving immunotherapy after metronomic oral vinorelbine.

Conclusion

We confirmed MOV as an extremely safe treatment in a large real-world population of advanced NSCLC, with interesting activity consisting mainly of long-term disease stabilization. We speculate that there may be a synergistic effect with subsequent immunotherapy.




Parasitology Research

First report of a Hypoderma diana infestation in alpaca (Vicugna pacos) in Germany

Abstract

A Hypoderma larva was removed from a painful swelling in the lumbar region of a 17-month-old male alpaca kept on a farm in the Brandenburg district, eastern Germany. Morphological analysis and sequencing of the 18S rRNA gene demonstrated that it was a second-instar larva of Hypoderma diana. The main host of H. diana is the roe deer (Capreolus capreolus). This is the first description of hypodermosis caused by H. diana in a camelid species.



Is species identification of Echinostoma revolutum using mitochondrial DNA barcoding feasible with high-resolution melting analysis?

Abstract

The taxonomic evaluation of Echinostoma species is controversial. Echinostoma species are recognized as a species complex, leading to problems associated with accurate identification of these species. The aim of this study was to test the feasibility of using DNA barcoding of cytochrome c oxidase subunit I (COI) and NADH dehydrogenase subunit 1 (ND1) conjugated with high-resolution melting (HRM) analysis to identify Echinostoma revolutum. HRM using COI and ND1 was unable to differentiate between species in the "revolutum complex" but did distinguish between two isolates of 37-collar-spined echinostome species, including E. revolutum (Asian lineage) and Echinostoma sp. A, and species from different genera, e.g., Hypoderaeum conoideum, Haplorchoides mehrai, Fasciola gigantica, and Thapariella anastomusa, based on the Tm values derived from HRM analysis. Through phylogenetic analysis, a new clade of the cryptic species known as Echinostoma sp. A was identified. In addition, we found that the E. revolutum clade of the ND1 phylogeny obtained from the Thailand strain was from a lineage different from the Eurasian lineage. These findings reveal the complexity of the clade, which is composed of 37-collar-spined echinostome species found in Southeast Asia. Taken together, the systematic aspects of the complex revolutum group are in need of extensive investigation by integrating morphological, biological, and molecular features in order to clarify them, particularly in Southeast Asia.



Protein extract from head-foot tissue of Oncomelania hupensis promotes the growth and development of mother sporocysts of Schistosoma japonicum via upregulation of parasite aldolase gene

Abstract

Previous studies showed that protein extract from the head-foot tissue of Oncomelania hupensis (PhfO), when cocultured with mother sporocysts of Schistosoma japonicum, was beneficial for the parasite's growth and development, but the underlying mechanisms remain unclear. One possible way for PhfO to promote the growth and development of mother sporocysts of S. japonicum is to upregulate the parasite's survival genes. Fructose-1,6-bisphosphate aldolase (ALD), an essential enzyme of glycometabolism in the energy metabolism process, plays an important role in the survival, growth, and development of schistosomes. Using an in vitro coculture system, in this study we analyzed the potential involvement of the ald gene in the growth and development of mother sporocysts of S. japonicum following coculture with PhfO. We found that coculture with PhfO promoted the growth, development, and survival of mother sporocysts and increased the parasites' ATP consumption. Mother sporocysts cocultured with PhfO showed significantly increased expression of the ald gene at both the RNA and protein levels. The ALD protein was mainly expressed in the cytoplasm of mother sporocysts. Knockdown of the ald gene decreased ALD protein expression and ATP consumption, suppressed growth and development, and attenuated the survival of mother sporocysts. In ald-knockdown mother sporocysts, the effects of PhfO on ALD expression, ATP consumption, growth and development, and survival were significantly abolished. Therefore, the data suggest that PhfO promotes the growth, development, and survival of mother sporocysts of S. japonicum via upregulation of the expression of the ald gene.



Novel data from Italian Vermamoeba vermiformis isolates from multiple sources add to genetic diversity within the genus

Abstract

Vermamoeba vermiformis represents one of the most common free-living amoebae identified in worldwide environmental surveys. We analyzed 56 water samples with varying characteristics, including temperature and the particular settings in which humans may be exposed to water, plus one corneal scraping from a keratitis patient, with the following aims: (i) to investigate the presence of V. vermiformis; (ii) to identify the isolate subtypes; (iii) to place the Italian isolates in the broader picture of the genetic diversity within V. vermiformis. Twenty-two isolates were identified upon culturing and sequencing of > 600 bp of the 18S ribosomal RNA (rRNA) gene sequence, bringing to 27 the number of sequences recovered from Italian sources. By adding deposited sequences, we assembled a dataset of 74 isolates. Three of our isolates were characterized by the allelic code 7-5-1-1, never reported before, and two showed 100% identity with an uncultured eukaryote and carried the 719T>C variant. We show that the variable segments E5, E3, F, and G convey most of the information on diversity, enabling the clustering of the isolates in a replicable fashion. The presence of different strains in natural thermal waters and in distribution systems indicated heterogeneity of the amoebic populations. Also, our isolate and the only other sequence from a human infection mapped to different clades. Overall, we enlarged the repertoire of single nucleotide and indel variants and the list of allelic codes, proceeding one step further in the description of the diversity within the genus.



Efficacy of silver nanoparticles against the adults and eggs of monogenean parasites of fish

Abstract

Monogeneans are a diverse group of parasites that are commonly found on fish. Some monogenean species are highly pathogenic to cultured fish. The present study aimed to determine the in vitro anthelmintic effect of silver nanoparticles (AgNPs) against adults and eggs of monogeneans in freshwater using Cichlidogyrus spp. as a model organism. We tested two types of AgNPs with different synthesis methodologies and size diameters: ARGOVIT (35 nm) and UTSA (1–3 nm) nanoparticles. Damage to the parasite tegument was observed by scanning electron microscopy. UTSA AgNPs were more effective than ARGOVIT; in both cases, there was a concentration-dependent effect. A concentration of 36 μg/L UTSA AgNPs for 1 h was 100% effective against eggs and adult parasites, causing swelling, loss of corrugations, and disruption of the parasite's tegument. This is an interesting result considering that monogenean eggs are typically tolerant to antiparasite drugs and chemical agents. To the best of our knowledge, no previous reports have assessed the effect of AgNPs on any metazoan parasites of fish. Therefore, the present work provides a basis for future research on the control of fish parasite diseases.



Cloning, expression, characterization, and immunological properties of citrate synthase from Echinococcus granulosus

Abstract

The larval stages of the tapeworm Echinococcus granulosus (Cestoda: Taeniidae) are the causative agent of cystic echinococcosis, one of the most important parasitic zoonoses worldwide. E. granulosus has a complete pathway for the tricarboxylic acid cycle (TCA), in which citrate synthase (CS) is the key enzyme. Here, we cloned and expressed CS from E. granulosus (Eg-CS) and report its molecular characterization. The localization of this protein during different developmental stages and mRNA expression patterns during H2O2 treatment were determined. We found that Eg-CS is a highly conserved protein, consisting of 466 amino acids. In western blotting assays, recombinant Eg-CS (rEg-CS) reacted with E. granulosus-positive sheep sera and anti-rEg-CS rabbit sera, indicating that Eg-CS has good antigenicity and immunoreactivity. Localization studies, performed using immunohistochemistry, showed that Eg-CS is ubiquitously expressed in the larva, germinal layer, and adult worm sections of E. granulosus. Eg-CS mRNA expression levels increased following H2O2 exposure. In conclusion, citrate synthase might be involved in the metabolic process in E. granulosus. An assessment of the serodiagnostic potential of rEg-CS based on indirect ELISA showed that, although sensitivity (93.55%) and specificity (80.49%) are high, cross-reactivity with other parasites precludes its use as a diagnostic antigen.



Detection and genotypic characterization of Toxoplasma gondii DNA within the milk of Mongolian livestock

Abstract

Toxoplasma gondii is a global, zoonotic parasite capable of infecting any warm-blooded host. Toxoplasmosis can cause a variety of illnesses including abortions and congenital defects in humans, sheep, and goats. Congenital toxoplasmosis is considered to have the highest global disease burden of any foodborne illness in humans. This study examined the potential role of milk as a route of T. gondii transmission between livestock and humans among Mongolian herders, a little-studied population that relies heavily on animals. Milk of Mongolian sheep, goats, and Bactrian camels was tested for the presence of T. gondii DNA, and a survey was conducted to ascertain what behavioral and environmental factors were present that might potentiate T. gondii infection within these Mongolian communities. T. gondii DNA was detected in samples from one sheep and five camels. Sequence analysis of the DNA from camel milk revealed that two samples carried potentially virulent T. gondii genotypes. This has implications for public health in the region, as milk is an extremely important source of nutrition and our survey results imply that some people believe consumption of raw camel milk carries health benefits. This is the first report of T. gondii DNA in Bactrian camel milk as well as the first genotypic characterization of T. gondii within Mongolia.



Synthesis and in vitro activity of new biguanide-containing dendrimers on pathogenic isolates of Acanthamoeba polyphaga and Acanthamoeba griffini

Abstract

The genus Acanthamoeba can cause Acanthamoeba keratitis (AK) and granulomatous amoebic encephalitis (GAE). The treatment of these illnesses is hampered by the existence of a resistant stage that often causes infection relapses. In an attempt to add new agents to our chemotherapeutic arsenal against acanthamebiasis, two Acanthamoeba isolates were treated in vitro with newly synthesized biguanide dendrimers. Trophozoite viability analysis and ultrastructural studies showed that the dendrimers prevent encystment by lysing the cellular membrane of the amoeba. Moreover, one of the dendrimers showed low toxicity when tested on mammalian cell cultures, which suggests that it might eventually be used as an amoebicidal drug or as a disinfection compound in contact lens solutions.



Molecular characterization of a new Trypanosoma (Megatrypanum) theileri isolate supports the two main phylogenetic lineages of this species in Japanese cattle

Abstract

Trypanosoma (Megatrypanum) theileri is a cosmopolitan, usually non-pathogenic, trypanosome of cattle transmitted by blood-sucking arthropods, mainly tabanid flies. Several T. theileri strains isolated from domestic and wild ruminants via co-culturing with mammalian feeder cells or blood cells have been characterized morphologically and genetically. Here, we cultured a new trypanosome isolate from a Holstein cow in Hokkaido, Japan, and performed morphological and molecular characterization studies. The new isolate (Obihiro strain) was co-cultivated with Madin–Darby bovine kidney (MDBK) cells in GIT medium supplemented with 10% fetal bovine serum. Trypomastigotes and epimastigotes, but not intracellular parasites, were identified in the culture. Analysis of the V7-V8 region of 18S rRNA sequences showed that the Obihiro strain is positioned within the subgenus Megatrypanum. A dendrogram based on whole internal transcribed spacer rDNA sequence showed that the Obihiro strain clustered in the lineage TthII together with the Japanese isolates of T. theileri, Esashi 9, and Esashi 12, and isolates from Zambia and the USA. T. theileri of the KM strain and a T. theileri-like trypanosome isolated from deer (TSD1 strain) clustered in the lineage TthI, separate from the Obihiro strain. Based on a partial cathepsin L-like protein gene analysis, the Obihiro strain clustered with isolates of the TthIIF genotype, which includes T. theileri from Vietnam, Sri Lanka, and Brazil. Our analyses of the T. theileri Obihiro strain provide relevant insights into its genetic diversity in Japanese cattle and corroborate the host specificity of cattle and deer trypanosomes of the subgenus Megatrypanum.



CpG enhances the immunogenicity of heterologous DNA-prime/protein-boost vaccination with the heavy chain myosin of Brugia malayi in BALB/c mice

Abstract

The recombinant heavy chain myosin of Brugia malayi (Bm-Myo) has earlier been reported as a potent vaccine candidate by our lab. Subsequently, we further enhanced its efficacy employing a heterologous DNA-prime/protein-boost (Myo-pcD+Bm-Myo) immunization approach that produced superior immune protection compared with protein or DNA vaccination alone. In the present study, we evaluated the efficacy of heterologous prime-boost vaccination in combination with a synthetic CpG oligodeoxynucleotide (ODN) adjuvant in BALB/c mice. The results showed that CpG/Myo-pcD+Bm-Myo conferred 84.5 ± 0.62% protection against B. malayi infective larval challenge, considerably higher than Myo-pcD+Bm-Myo (75.6 ± 1.10%). Both immunization formulations elicited robust production of specific IgG antibody and its isotypes (IgG1, IgG2a, IgG2b, and IgG3); however, CpG/Myo-pcD+Bm-Myo predominantly enhanced the level of IgG2a, suggesting a Th1-biased immune response in the presence of CpG. Furthermore, spleens isolated from mice immunized with CpG/Myo-pcD+Bm-Myo had greater accumulation of CD4+ and CD8+ T cells and CD19+ B cells, and there was augmented expression of the co-stimulatory molecules CD40 and CD86 on host dendritic cells (DCs). In contrast to the Myo-pcD+Bm-Myo group, the splenocytes of CpG/Myo-pcD+Bm-Myo-immunized mice produced comparatively higher levels of the pro-inflammatory cytokines IL-2 and IFN-γ, leaving anti-inflammatory cytokine levels unchanged. Moreover, the CpG formulation also upregulated the RNA expression of IL-12 and TNF-α in splenocytes. The current findings suggest that the use of CpG would be advantageous as an adjuvant, predominantly in DNA/protein prime-boost vaccination with Bm-Myo, and presumably also against filarial infection.




Human Ecology

Historical Ecologies of Pastoralist Overgrazing in Kenya: Long-Term Perspectives on Cause and Effect

Abstract

The spectre of 'overgrazing' looms large in historical and political narratives of ecological degradation in savannah ecosystems. While pastoral exploitation is a conspicuous driver of landscape variability and modification, assumptions that such change is inevitable or necessarily negative deserve to be continuously evaluated and challenged. With reference to three case studies from Kenya – the Laikipia Plateau, the Lake Baringo basin, and the Amboseli ecosystem – we argue that the impacts of pastoralism are contingent on the diachronic interactions of locally specific environmental, political, and cultural conditions. The impacts of the compression of rangelands and restrictions on herd mobility driven by misguided conservation and economic policies are emphasised over outdated notions of pastoralist inefficiency. We review the application of 'overgrazing' in interpretations of the archaeological record and assess its relevance for how we interpret past socio-environmental dynamics. Any discussion of overgrazing, or any form of human-environment interaction, must acknowledge spatio-temporal context and account for historical variability in landscape ontogenies.



Local Fishers' Knowledge of Target and Incidental Seahorse Catch in Southern Vietnam

Abstract

Many vulnerable marine species are caught in small-scale fisheries that lack long-term records, thereby limiting the development of effective evidence-based management measures. To uncover recent trends in fish landings and value in the absence of historical data, we interviewed 77 fishers and five buyers on Phu Quoc Island in Southern Vietnam regarding their current and past fishing practices, with a focus on seahorse catches. Seahorses (Hippocampus spp.) are caught using multiple gear types (including trawls, crab nets, and compressor diving) and have both cultural and financial value. Most fishers catch seahorses incidentally, though 14 targeted them and made the majority of their income from their sale. Fishers reported that seahorse catch rates decreased by 86–95% from 2004 to 2014, while landed value simultaneously increased by 534%. If these reports are accurate, seahorse fishing on Phu Quoc is unsustainable and requires immediate management controls.



Correction to: Hiding in the Dark: Local Ecological Knowledge about Slow Loris in Sarawak Sheds Light on Relationships between Human Populations and Wild Animals

The article Hiding in the dark: Local ecological knowledge about slow loris in Sarawak sheds light on relationships between human populations and wild animals, written by Priscillia Miard, K. A. I. Nekaris and Hatta Ramlee, was originally published electronically with open access.



Pumping Yemen Dry: A History of Yemen's Water Crisis

Abstract

Yemen, located on the southwestern corner of the Arabian Peninsula, is one of the most water-scarce countries in the world. Quite apart from the continuing catastrophic conflict, the massive overdraw of existing groundwater due to unregulated drilling of tube wells since the 1970s has created a major water crisis that affects the future of the country's estimated 28 million people. While once known for its rich traditions of agriculture due to its extensive highland terrace systems and spate-flow and runoff water harvesting, Yemen is now food insecure, relying almost entirely on food imports. This article surveys the range of water resources in Yemen and their sustainability in light of climate change predictions. I examined government and development aid reports to highlight the causes of the water crisis and the failure of previous governments to resolve it. The situation is even more critical today, given the ongoing war between a Saudi-led coalition and a Huthi alliance that has created one of the worst humanitarian crises in the world. I conclude with priorities for mitigating the water crisis and promoting sustainable agriculture for Yemen's post-conflict future.



Greg Mitman, Marco Armiero, Robert S. Emmett, Editors. Future Remains: a Cabinet of Curiosities for the Anthropocene


Andy Bruno: The Nature of Soviet Power. An Arctic Environmental History


Divine Placebo: Health and the Evolution of Religion

Abstract

In this paper, I draw on knowledge from several disciplines to explicate the potential evolutionary significance of health effects of religiosity. I present three main observations. First, traditional methods of religious healers seldom rely on active remedies, but instead focus on lifestyle changes or spiritual healing practices that best can be described as placebo methods. Second, actual health effects of religiosity are thus mainly traceable to effects from a regulated lifestyle, social support networks, or placebo effects. Third, there are clear parallels between religious healing practices and currently identified methods that induce placebo effects. Physiological mechanisms identified to lie behind placebo effects activate the body's own coping strategies and healing responses. In combination, lifestyle, social support networks, and placebo effects thus produce both actual and perceived health effects of religiosity. This may have played an important role in the evolution and diffusion of religion through two main pathways. First, any real positive health effects of religiosity would have provided a direct biological advantage. Second, any perceived health effects, both positive and negative, would further have provided a unique selling point for 'religiosity' per se. Actual and perceived health effects of religiosity may therefore have played an underestimated role during the evolution of religiosity through both biological and cultural pathways.



Spatial Distribution and Abundance of Acacia mangium on Indigenous Lands in the Serra da Lua Region, Roraima State, Brazil


Variability and Change in Maasai Views of Wildlife and the Implications for Conservation

Abstract

Surveys conducted across sections of the pastoral Maasai of Kenya show a wide variety of values for wildlife, ranging from utility and medicinal uses to environmental indicators, commerce, and tourism. Attitudes toward wildlife are highly variable, depending on perceived threats and uses. Large carnivores and herbivores pose the greatest threats to people, livestock, and crops, but also have many positive values. Attitudes vary with gender, age, education, and land holding, but most of all with the source of livelihood and location, which bears on relative abundance of useful and threatening species. Traditional pastoral practices and cultural views that accommodated coexistence between livestock and wildlife are dwindling and being replaced by new values and sensibilities as pastoral practices give way to new livelihoods, lifestyles, and aspirations. Human-wildlife conflict has grown with the transition from mobile pastoralism to sedentary livelihoods. Unless the new values offset the loss of traditional values, wildlife will continue to decline. New wildlife-based livelihoods show that continued coexistence is possible despite the changes underway.



Exploring Diversity in Forest Management Outlooks of African American Family Forest Landowners for Ensuring Sustainability of Forestry Resources in the Southern United States

Abstract

African American forest landowners in the southern United States (US) are typically considered a homogenous group in current studies. Our research challenges this assumption by identifying four distinct forest management outlooks among African American forest landowners using Q Method. Sustainable Harvesters focus on balanced land use with a long-term outlook; Back 40ers appreciate the presence of forests on their property but focus on alternative land use; Land Use Pragmatists are also interested in alternative land use and primarily view forest as an economic resource; Recreationalists value their forestland not for economic value but as a place for personal use. Finally, Indecisive landowners are not sure about how to best manage their forestland. We argue that an understanding of different forest management outlooks will improve sustainable forest management by better targeting extension and outreach efforts for African American forest landowners.




Critical Care Medicine, Medicine & Science in Sports & Exercise, Anaesthesiology, Allergy and Clinical Immunology, Biochimica et Biophysica Acta, Ear and Hearing

Ear and Hearing
Use of Commercial Virtual Reality Technology to Assess Verticality Perception in Static and Dynamic Visual Backgrounds

Abstract

Objectives

The Subjective Visual Vertical (SVV) test and the closely related Rod and Disk Test (RDT) are measures of perceived verticality in static and dynamic visual backgrounds. However, the equipment used for these tests is variable across clinics and is often too expensive or too primitive to be appropriate for widespread use. Commercial virtual reality technology, which is now widely available, may provide a more suitable alternative for collecting these measures in clinical populations. This study was designed to investigate verticality perception in symptomatic patients using a modified RDT paradigm administered through a head-mounted display (HMD).

Design

A group of adult patients referred by a physician for vestibular testing based on the presence of dizziness symptoms and a group of healthy adults without dizziness symptoms were included. We investigated the degree of visual dependence in both groups by measuring SVV as a function of kinematic changes to the visual background.

Results

When a dynamic background was introduced into the HMD to simulate the RDT, significantly greater shifts in SVV were found for the patient population than for the control population. In patients referred for vestibular testing, the SVV measured with the HMD was significantly correlated with traditional measures of SVV collected in a rotary chair when accounting for head tilt.

Conclusions

This study provides initial proof-of-concept evidence that reliable SVV measures in static and dynamic visual backgrounds can be obtained using a low-cost commercial HMD system. This initial evidence also suggests that this tool can distinguish individuals with dizziness symptomatology based on SVV performance in dynamic visual backgrounds.

Predicting Speech-in-Noise Deficits from the Audiogram

Abstract

Objectives

In occupations that involve hearing-critical tasks, individuals need to undergo periodic hearing screenings to ensure that they have not developed hearing losses that could impair their ability to safely and effectively perform their jobs. Most periodic hearing screenings are limited to pure-tone audiograms, but in many cases, the ability to understand speech in noisy environments may be more important to functional job performance than the ability to detect quiet sounds. The ability to use audiometric threshold data to identify individuals with poor speech-in-noise performance is of particular interest to the U.S. military, which has an ongoing responsibility to ensure that its service members (SMs) have the hearing abilities they require to accomplish their mission. This work investigates the development of optimal strategies for identifying individuals with poor speech-in-noise performance from the audiogram.

Design

Data from 5487 individuals were used to evaluate a range of classifiers, based exclusively on the pure-tone audiogram, for identifying individuals who have deficits in understanding speech in noise. The classifiers evaluated were based on generalized linear models (GLMs), the speech intelligibility index (SII), binary threshold criteria, and current standards used by the U.S. military. The classifiers were evaluated in a detection-theoretic framework in which the sensitivity and specificity of the classifiers were quantified. In addition to the performance of these classifiers for identifying individuals with deficits understanding speech in noise, data from 500,733 U.S. Army SMs were used to understand how the classifiers would affect the number of SMs being referred for additional testing.

Results

A classifier based on binary threshold criteria that was identified through an iterative search procedure outperformed a classifier based on the SII and ones based on GLMs with large numbers of fitted parameters. This suggests that the saturating nature of the SII is important, but that the weights of frequency channels are not optimal for identifying individuals with deficits understanding speech in noise. It is possible that a highly complicated model with many free parameters could outperform the classifiers considered here, but there was only a modest difference between the performance of a classifier based on a GLM with 26 fitted parameters and one based on a simple all-frequency pure-tone average. This suggests that the details of the audiogram are a relatively insensitive predictor of performance in speech-in-noise tasks.

Conclusions

The best classifier identified in this study, which was a binary threshold classifier derived from an iterative search process, does appear to reliably outperform the current threshold criteria used by the U.S. military to identify individuals with abnormally poor speech-in-noise performance, both in terms of fewer false alarms and a greater hit rate. Substantial improvements in the ability to detect SMs with impaired speech-in-noise performance can likely only be obtained by adding some form of speech-in-noise testing to the hearing monitoring program. While the improvements were modest, the overall benefit of adopting the proposed classifier is likely substantial given the number of SMs enrolled in U.S. military hearing conservation and readiness programs.

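The detection-theoretic evaluation described above comes down to computing a hit rate and a false-alarm rate for a rule that flags audiograms. The Python sketch below shows the general shape of such a binary threshold classifier; the criterion frequencies and cutoff values are illustrative placeholders, not the classifier identified in the study.

```python
import numpy as np

# Hypothetical cutoffs (Hz -> dB HL); the study's actual criteria differ.
CUTOFFS = {2000: 30, 3000: 35, 4000: 40}

def flags_deficit(audiogram):
    """Flag a listener if any criterion threshold exceeds its cutoff.
    audiogram: dict mapping frequency (Hz) to pure-tone threshold (dB HL)."""
    return any(audiogram[f] > cutoff for f, cutoff in CUTOFFS.items())

def evaluate(audiograms, has_deficit):
    """Return (hit rate, false-alarm rate) against ground-truth labels
    taken from a speech-in-noise test."""
    flagged = np.array([flags_deficit(a) for a in audiograms])
    truth = np.asarray(has_deficit, dtype=bool)
    hit_rate = flagged[truth].mean()    # flagged among true deficits
    fa_rate = flagged[~truth].mean()    # flagged among non-deficits
    return hit_rate, fa_rate
```

An iterative search over cutoff combinations, scored by hit and false-alarm rates like these, is one plausible way to arrive at a classifier of the kind the authors describe.
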
Children With Normal Hearing Are Efficient Users of Fundamental Frequency and Vocal Tract Length Cues for Voice Discrimination

Abstract

Background

The ability to discriminate between talkers assists listeners in understanding speech in a multitalker environment. This ability has been shown to be influenced by sensory processing of vocal acoustic cues, such as fundamental frequency (F0) and formant frequencies that reflect the talker's vocal tract length (VTL), and by cognitive processes, such as attention and memory. It is, therefore, suggested that children who exhibit immature sensory and/or cognitive processing will demonstrate poor voice discrimination (VD) compared with young adults. Moreover, greater difficulties in VD may be associated with spectral degradation, as in children with cochlear implants.

Objectives

The aims of this study were as follows: (1) to assess the use of F0 cues, VTL cues, and the combination of both cues for VD in normal-hearing (NH) school-age children and to compare their performance with that of NH adults; (2) to assess the influence of spectral degradation by means of vocoded speech on the use of F0 and VTL cues for VD in NH children; and (3) to assess the contribution of attention, working memory, and nonverbal reasoning to performance.

Design

Forty-one children, 8 to 11 years of age, were tested with nonvocoded stimuli. Twenty-one of them were also tested with eight-channel, noise-vocoded stimuli. Twenty-one young adults (18 to 35 years) were tested for comparison. A three-interval, three-alternative forced-choice paradigm with an adaptive tracking procedure was used to estimate the difference limens (DLs) for VD when F0, VTL, and F0 + VTL were manipulated separately. Auditory memory, visual attention, and nonverbal reasoning were assessed for all participants.

Results

(a) Children's F0 and VTL discrimination abilities were comparable to those of adults, suggesting that most school-age children utilize both cues effectively for VD. (b) Children's VD was associated with trail making test scores that assessed visual attention abilities and speed of processing, possibly reflecting their need to recruit cognitive resources for the task. (c) The best DLs were achieved for the combined (F0 + VTL) manipulation for both children and adults, suggesting that children at this age are already capable of integrating spectral and temporal cues. (d) Both children and adults found the VTL manipulations more beneficial for VD than the F0 manipulations, suggesting that formant frequencies are more reliable than F0 for identifying a specific speaker. (e) Poorer DLs were achieved with the vocoded stimuli, though the children maintained thresholds and patterns of performance across manipulations similar to those of the adults.

Conclusions

The present study is the first to assess the contribution of F0, VTL, and the combined F0 + VTL to the discrimination of speakers in school-age children. The findings support the notion that many NH school-age children have effective spectral and temporal coding mechanisms that allow sufficient VD, even in the presence of spectrally degraded information. These results may challenge the notion that immature sensory processing underlies poor listening abilities in children, further implying that other processing mechanisms contribute to their difficulties understanding speech in a multitalker environment. These outcomes may also provide insight into the VD processes of children under listening conditions similar to those of cochlear implant users.

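For readers unfamiliar with adaptive tracking, the difference limens above come from a forced-choice task in which the F0 or VTL difference is adjusted from trial to trial. Below is a minimal Python sketch of one common rule, a 2-down/1-up staircase that converges near 70.7% correct; the study's actual step sizes and stopping rule are not stated in the abstract, so those details are assumptions.

```python
import random

def staircase_dl(respond, start=12.0, step=2.0, reversals_needed=8):
    """Estimate a difference limen with a 2-down/1-up adaptive track.
    `respond(delta)` returns True when the listener picks the correct
    interval at cue difference `delta`."""
    delta, correct_run, last_dir, reversals = start, 0, 0, []
    while len(reversals) < reversals_needed:
        if respond(delta):
            correct_run += 1
            if correct_run < 2:
                continue                     # wait for two correct in a row
            correct_run, direction = 0, -1   # two correct -> harder (smaller delta)
        else:
            correct_run, direction = 0, +1   # one wrong -> easier (larger delta)
        if last_dir and direction != last_dir:
            reversals.append(delta)          # direction change marks a reversal
        last_dir = direction
        delta = max(0.1, delta + direction * step)
    return sum(reversals[-6:]) / 6           # DL = mean of the last 6 reversals

# Example: a simulated listener who is nearly perfect above delta = 4
# and guesses (1 in 3, as in a 3AFC task) below it.
print(staircase_dl(lambda d: random.random() < (0.99 if d >= 4 else 1 / 3)))
```
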
Switching Streams Across Ears to Evaluate Informational Masking of Speech-on-Speech

Abstract

Objectives

This study aimed to evaluate the informational component of speech-on-speech masking. Speech perception in the presence of a competing talker involves not only informational masking (IM) but also a number of masking processes involving interaction of masker and target energy in the auditory periphery. Such peripherally generated masking can be eliminated by presenting the target and masker in opposite ears (dichotically). However, this also reduces IM by providing listeners with lateralization cues that support spatial release from masking (SRM). In tonal sequences, IM can be isolated by rapidly switching the lateralization of dichotic target and masker streams across the ears, presumably producing ambiguous spatial percepts that interfere with SRM. However, it is not clear whether this technique works with speech materials.

Design

Speech reception thresholds (SRTs) were measured in 17 young normal-hearing adults for sentences produced by a female talker in the presence of a competing male talker under three different conditions: diotic (target and masker in both ears), dichotic, and dichotic but switching the target and masker streams across the ears. Because switching rate and signal coherence were expected to influence the amount of IM observed, these two factors varied across conditions. When switches occurred, they were either at word boundaries or periodically (every 116 msec) and either with or without a brief gap (84 msec) at every switch point. In addition, SRTs were measured in a quiet condition to rule out audibility as a limiting factor.

Results

SRTs were poorer for the four switching dichotic conditions than for the nonswitching dichotic condition, but better than for the diotic condition. Periodic switches without gaps resulted in the worst SRTs compared to the other switch conditions, thus maximizing IM.

Conclusions

These findings suggest that periodically switching the target and masker streams across the ears (without gaps) was the most efficient in disrupting SRM. Thus, this approach can be used in experiments that seek a relatively pure measure of IM, and could be readily extended to translational research.

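The core stimulus manipulation, periodically swapping which ear carries the target and which the masker, is simple to prototype. Here is a minimal NumPy sketch of the periodic, no-gap switching condition using the 116 msec period from the abstract; the function and parameter names are my own, and equal-length mono input signals are assumed.

```python
import numpy as np

def switch_streams(target, masker, fs, period_ms=116.0):
    """Build a stereo signal in which target and masker swap ears every
    `period_ms` milliseconds (periodic switching, no gaps).
    target, masker: equal-length 1-D float arrays; fs: sample rate (Hz)."""
    n = len(target)
    seg = int(fs * period_ms / 1000.0)
    left, right = np.empty(n), np.empty(n)
    for start in range(0, n, seg):
        s = slice(start, min(start + seg, n))
        target_left = (start // seg) % 2 == 0   # alternate ear per segment
        left[s] = target[s] if target_left else masker[s]
        right[s] = masker[s] if target_left else target[s]
    return np.stack([left, right], axis=1)      # shape (n, 2): L, R
```

Note that abrupt swaps introduce waveform discontinuities at the switch points; the gap conditions in the study (84 msec silent gaps) would be one way to soften these, at the cost of interrupting the speech.
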
Genetic Inheritance of Late-Onset, Down-Sloping Hearing Loss and Its Implications for Auditory Rehabilitation

Abstract

Objectives

Late-onset, down-sloping sensorineural hearing loss has many genetic and nongenetic etiologies, but the proportion of this commonly encountered type of hearing loss attributable to genetic causes is not well known. In this study, the authors performed genetic analysis using next-generation sequencing techniques in patients showing late-onset, down-sloping sensorineural hearing loss with preserved low-frequency hearing, and investigated the clinical implications of the variants identified.

Design

From a cohort of patients with hearing loss at a tertiary referral hospital, 18 unrelated probands with down-sloping sensorineural hearing loss of late onset were included in this study. Down-sloping hearing loss was defined as a mean low-frequency threshold at 250 Hz and 500 Hz less than or equal to 40 dB HL and a mean high-frequency threshold at 1, 2, and 4 kHz greater than 40 dB HL. The authors performed whole-exome sequencing and segregation analysis to identify the genetic causes and evaluated the outcomes of auditory rehabilitation in the patients.

Results

There were nine simplex and nine multiplex families included, in which the causative variants were found in six of 18 probands, demonstrating a detection rate of 33.3%. Various types of variants, including five novel and three known variants, were detected in the MYH14, MYH9, USH2A, COL11A2, and TMPRSS3 genes. The outcome of cochlear and middle ear implants in patients identified with pathogenic variants was satisfactory. There was no statistically significant difference between pathogenic variant-positive and pathogenic variant-negative groups in terms of onset age, family history of hearing loss, pure-tone threshold, or speech discrimination scores.

Conclusions

The proportion of patients with late-onset, down-sloping hearing loss identified with potentially causative variants was unexpectedly high. Identification of the causative variants will offer insights on hearing loss progression and prognosis regarding various modes of auditory rehabilitation, as well as possible concomitant syndromic features.

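The audiometric inclusion criterion above is explicit enough to express directly in code. Here is a small Python sketch of that definition; the dict-based audiogram representation is just an illustrative choice.

```python
def is_down_sloping(audiogram):
    """Down-sloping hearing loss as defined in the study: mean threshold
    at 250 and 500 Hz <= 40 dB HL, and mean threshold at 1, 2, and 4 kHz
    > 40 dB HL. audiogram: dict mapping frequency (Hz) to dB HL."""
    low = (audiogram[250] + audiogram[500]) / 2
    high = (audiogram[1000] + audiogram[2000] + audiogram[4000]) / 3
    return low <= 40 and high > 40

# Example: preserved low frequencies with a steep high-frequency loss.
print(is_down_sloping({250: 25, 500: 30, 1000: 45, 2000: 60, 4000: 70}))  # True
```
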
Improving Clinical Outcomes in Cochlear Implantation Using Glucocorticoid Therapy: A Review

Abstract

Cochlear implant surgery is a successful procedure for auditory rehabilitation of patients with severe to profound hearing loss. However, cochlear implantation may lead to damage to the inner ear, which decreases residual hearing and alters vestibular function. It is now of increasing interest to preserve residual hearing during this surgery because this is related to better speech, music perception, and hearing in complex listening environments. Thus, different efforts have been tried to reduce cochlear implantation-related injury, including periprocedural glucocorticoids because of their anti-inflammatory properties. Different routes of administration have been tried to deliver glucocorticoids. However, several drawbacks still remain, including their systemic side effects, unknown pharmacokinetic profiles, and complex delivery methods. In the present review, we discuss the role of periprocedural glucocorticoid therapy to decrease cochlear implantation-related injury, thus preserving inner ear function after surgery. Moreover, we highlight the pharmacokinetic evidence and clinical outcomes which would sustain further interventions.

The Effect of Hearing-Protection Devices on Auditory Situational Awareness and Listening Effort
Objectives: Hearing-protection devices (HPDs) are made available, and often are required, for industrial use as well as military training exercises and operational duties. However, these devices often are disliked, and consequently not worn, in part because they compromise situational awareness through reduced sound detection and localization performance as well as degraded speech intelligibility. In this study, we carried out a series of tests, involving normal-hearing subjects and multiple background-noise conditions, designed to evaluate the performance of four HPDs in terms of their modifications of auditory-detection thresholds, sound-localization accuracy, and speech intelligibility. In addition, we assessed their impact on listening effort to understand how the additional effort required to perceive and process auditory signals while wearing an HPD reduces available cognitive resources for other tasks. Design: Thirteen normal-hearing subjects participated in a protocol, which included auditory tasks designed to measure detection and localization performance, speech intelligibility, and cognitive load. Each participant repeated the battery of tests with unoccluded ears and four hearing protectors, two active (electronic) and two passive. The tasks were performed both in quiet and in background noise. Results: Our findings indicate that, to varying degrees, all of the tested HPDs induce performance degradation on most of the conducted tasks as compared with the open ear. Of particular note in this study is the finding of increased cognitive load or listening effort, as measured by visual reaction time, for some hearing protectors during a dual task that added working-memory demands to the speech-intelligibility task. Conclusions: These results indicate that situational awareness can vary greatly across the spectrum of HPDs, and that listening effort is another aspect of performance that should be considered in future studies. The increased listening effort induced by hearing protectors may lead to earlier cognitive fatigue in noisy environments. Further study is required to characterize how auditory performance is limited by the combination of hearing impairment and the use of HPDs, and how the effects of such limitations can be linked to safe and effective use of hearing protection to maximize job performance. ACKNOWLEDGMENTS: This work is sponsored by the US Army Natick Soldier Research, Development, and Engineering Center under Air Force Contract FA8721-05-C-0002 and/or FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Army. Distribution Statement A: Approved for public release. Distribution is unlimited. C.J.S. designed and performed experiments, analyzed data, provided statistical analysis and wrote the article; P.T.C. provided data analysis and wrote the article; A.P.D., J.P.P., T.P., and J.B. collected and analyzed data; T.F.Q. and M.M. provided contributions to conception of the work and critical editing; P.P.C. provided editing and final approval of the version to be published. The authors have no conflicts of interest to disclose. Received June 4, 2018; accepted February 21, 2019. Address for correspondence: Bioengineering Systems and Technologies Group, MIT Lincoln Laboratory, 244 Wood St., Lexington, MA 02421, USA. E-mail: christopher.smalt@ll.mit.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Neural Indices of Vowel Discrimination in Monolingual and Bilingual Infants and Children
Objectives: To examine maturation of neural discriminative responses to an English vowel contrast from infancy to 4 years of age and to determine how biological factors (age and sex) and an experiential factor (amount of Spanish versus English input) modulate neural discrimination of speech. Design: Event-related potential (ERP) mismatch responses (MMRs) were used as indices of discrimination of the American English vowels [ε] versus [I] in infants and children between 3 months and 47 months of age. A total of 168 longitudinal and cross-sectional data sets were collected from 98 children (Bilingual Spanish–English: 47 male and 31 female sessions; Monolingual English: 48 male and 42 female sessions). Language exposure and other language measures were collected. ERP responses were examined in an early time window (160 to 360 msec, early MMR [eMMR]) and late time window (400 to 600 msec, late MMR). Results: The eMMR became more negative with increasing age. Language experience and sex also influenced the amplitude of the eMMR. Specifically, bilingual children, especially bilingual females, showed more negative eMMR compared with monolingual children and with males. However, the subset of bilingual children with more exposure to English than Spanish compared with those with more exposure to Spanish than English (as reported by caretakers) showed similar amplitude of the eMMR to their monolingual peers. Age was the only factor that influenced the amplitude of the late MMR. More negative late MMR was observed in older children, with no difference found between bilingual and monolingual groups. Conclusions: Consistent with previous studies, our findings revealed that biological factors (age and sex) and language experience modulated the amplitude of the eMMR in young children. The early negative MMR is likely to be the mismatch negativity found in older children and adults. In contrast, the late MMR amplitude was influenced only by age and may be equivalent to the Nc in infants and to the late negativity observed in some auditory passive oddball designs. ACKNOWLEDGMENTS: The authors thank A. Barias and M. Wroblewski for helping with data collection, B. Tagliaferri for technical support, and W. Strange and R. G. Schwartz for advice on the design. This research was supported by NIH HD46193 to V. L. Shafer. V. L. S. oversaw the project, designed the experiments, and was involved in writing the article; Y. H. Y. helped with data collection, performed data analyses, wrote the initial draft in conjunction with V. L. S., and led the manuscript revision process; C. T. helped with data collection and interpreting the language measures; H. H. and L. C. performed the early stages of the Mixed-Effect Modeling analysis in conjunction with Y. H. Y.; N. V. helped design the language background questionnaire and collect the data; J. G. helped collect the data; K. G. and H. D. helped design and pilot the electrophysiological paradigm and helped collect the data. All authors were involved in revising the article. The authors have no conflicts of interest to disclose. Received May 10, 2018; accepted January 24, 2019. Address for correspondence: Yan H. Yu, Department of Communication Sciences and Disorders, St. John's University, 8000 Utopia Parkway, Queens, NY 11437, USA. E-mail: yuy1@stjohns.edu and Valerie L. Shafer, Ph.D. Program in Speech-Language-Hearing Sciences, The Graduate Center, City University of New York, 365 Fifth Avenue, New York, NY 10016, USA. 
E-mail: vshafer@gc.cuny.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
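The two latency windows above lend themselves to a simple mean-amplitude computation. Below is a minimal sketch, assuming a 1-D deviant-minus-standard difference wave time-locked to stimulus onset and an invented sampling rate; neither detail is specified in the abstract.

```python
import numpy as np

def mean_amplitude(difference_wave: np.ndarray, sfreq_hz: float,
                   window_ms: tuple) -> float:
    """Mean amplitude of a difference wave within a latency window.
    `difference_wave` starts at stimulus onset (index 0); `window_ms`
    is (start, end) in milliseconds."""
    start = int(window_ms[0] / 1000 * sfreq_hz)
    end = int(window_ms[1] / 1000 * sfreq_hz)
    return float(difference_wave[start:end].mean())

# Hypothetical 1 s difference wave sampled at 500 Hz.
wave = np.random.randn(500)
emmr = mean_amplitude(wave, 500.0, (160, 360))  # early MMR window
lmmr = mean_amplitude(wave, 500.0, (400, 600))  # late MMR window
```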
Impact of Lexical Parameters and Audibility on the Recognition of the Freiburg Monosyllabic Speech Test
Objective: Correct word recognition is generally determined by audibility, but lexical parameters also play a role. The focus of this study was to examine the impact of both audibility and lexical parameters on speech recognition of the test words of the clinical German Freiburg monosyllabic speech test, and subsequently on the perceptual imbalance of test lists observed in the literature. Design: For 160 participants with normal hearing, who were divided into three groups with different simulated hearing thresholds, monaural speech recognition for the Freiburg monosyllabic speech test was obtained via headphones in quiet at different presentation levels. Software manipulated the original speech material to simulate two different hearing thresholds. All monosyllables were classified according to their frequency of occurrence in contemporary language and the number of lexical neighbors using the Cross-Linguistic Easy-Access Resource for Phonological and Orthographic Neighborhood Density database. Generalized linear mixed-effects regression models were used to evaluate the influences of audibility in terms of the Speech Intelligibility Index and lexical properties of the monosyllables in terms of word frequency (WF) and neighborhood density (ND) on the observed speech recognition per word and per test list, respectively. Results: Audibility and interactions of audibility with WF and ND predicted identification of the individual monosyllables. Test list recognition was predicted by test list choice, audibility, and ND, as well as by interactions of WF and test list, audibility and ND, ND and test list, and audibility per test list. Conclusions: Observed differences in speech recognition of the Freiburg monosyllabic speech test, which are well reported in the literature, depend not only on audibility but also on WF, ND, and test list choice and their interactions. The authors conclude that future creations of speech test material should take these lexical parameters into account. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Sascha Bilert, Tina Gebauer, Lena Haverkamp, Britta Jensen, and Kristin Sprenger for their support in performing the measurements and categorizing the monosyllables per database. The authors also thank Daniel Berg for technical support and Thomas Brand for support on the SII predictions. English language support was provided by www.stels-ol.de. This work was supported by the Ph.D. program Jade2Pro of Jade University of Applied Sciences, Oldenburg, Germany. The authors have no conflicts of interest to disclose. Received October 14, 2017; accepted March 8, 2019. Address for correspondence: Alexandra Winkler, Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Ofener Straße 16/19, D-26121 Oldenburg, Germany. E-mail: alexandra.winkler@jade-hs.de Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
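To make the shape of such an analysis concrete, here is a simplified, fixed-effects-only approximation: the study used generalized linear mixed-effects models, whereas the random effects for participants, words and lists are omitted here, and the column names and data are entirely hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per word presentation: correct (0/1), Speech Intelligibility
# Index (sii), word frequency (wf), neighborhood density (nd).
df = pd.DataFrame({
    "correct": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "sii":     [0.2, 0.1, 0.5, 0.6, 0.1, 0.4, 0.2, 0.7, 0.3, 0.1, 0.6, 0.2],
    "wf":      [3.1, 1.2, 2.5, 4.0, 0.8, 2.2, 1.5, 3.6, 2.9, 1.0, 3.3, 1.8],
    "nd":      [12, 25, 8, 5, 30, 14, 22, 6, 10, 28, 7, 20],
})

# Audibility plus its interactions with WF and ND, mirroring the
# predictors reported in the abstract.
model = smf.glm("correct ~ sii * wf + sii * nd", data=df,
                family=sm.families.Binomial()).fit()
print(model.summary())
```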
Age-Related Changes in Temporal Resolution Revisited: Electrophysiological and Behavioral Findings From Cochlear Implant Users
Objectives: The mechanisms underlying age-related changes in speech perception are still unclear; they are most likely multifactorial and often difficult to parse out from the effects of hearing loss. Age-related changes in temporal resolution (i.e., the ability to track rapid changes in sounds) have long been associated with speech perception declines exhibited by many older individuals. The goals of this study were as follows: (1) to assess age-related changes in temporal resolution in cochlear implant (CI) users, and (2) to examine the impact of changes in temporal resolution and cognition on the perception of speech in noise. In this population, it is possible to bypass the cochlea and stimulate the auditory nerve directly in a noninvasive way. Additionally, CI technology allows for manipulation of the temporal properties of a signal without changing its spectrum. Design: Twenty postlingually deafened Nucleus CI users took part in this study. They were divided into groups of younger (18 to 40 years) and older (68 to 82 years) participants. A cross-sectional study design was used. The speech processor was bypassed and a mid-array electrode was used for stimulation. We compared peripheral and central physiologic measures of temporal resolution with perceptual measures obtained using similar stimuli. Peripherally, temporal resolution was assessed with measures of the rate of recovery of the electrically evoked compound action potential (ECAP), evoked using a single pulse and a pulse train as maskers. The acoustic change complex (ACC) to gaps in pulse trains was used to assess temporal resolution more centrally. Psychophysical gap detection thresholds were also obtained. Cognitive assessment included two tests of processing speed (Symbol Search and Coding) and one test of working memory (Digit Span Test). Speech perception was tested in the presence of background noise (QuickSIN test). A correlational design was used to explore the relationship between temporal resolution, cognition, and speech perception. Results: The only metric that showed significant age effects in temporal processing was the ECAP recovery function recorded using pulse train maskers. Younger participants were found to have faster rates of neural recovery following presentation of pulse trains than older participants. Age was not found to have a significant effect on speech perception. When results from both groups were combined, digit span was the only measure significantly correlated with speech perception performance. Conclusions: In this sample of CI users, few effects of advancing age on temporal resolution were evident. While this finding would be consistent with a general lack of aging effects on temporal resolution, it is also possible that aging effects are influenced by processing peripheral to the auditory nerve, which is bypassed by the CI. However, it is known that cross-fiber neural synchrony is improved with electrical (as opposed to acoustic) stimulation. This change in neural synchrony may, in turn, make temporal cues more robust/perceptible to all CI users. Future studies involving larger sample sizes should be conducted to confirm these findings. Results of this study also add to the growing body of literature that suggests that working memory is important for the perception of degraded speech. ACKNOWLEDGMENTS: We thank Paul Abbas for helpful suggestions on study design and data analysis, and Jacob Oleson for assistance with statistical analyses. 
We also acknowledge Wenjun Wang for help in developing the perception testing software. This study was funded by a Student Investigator Research Grant from the American Academy of Audiology (B. S. M.) and by an NIH P50 DC000242 grant. The authors have no conflicts of interest to disclose. Received June 21, 2017; accepted February 21, 2019. Address for correspondence: Bruna S. S. Mussoi, AuD, PhD, Kent State University, Speech Pathology and Audiology, A140 Center for Performing Arts, 1325 Theatre Drive, Kent, OH 44242, USA. E-mail: bmussoi@kent.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
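ECAP recovery functions of the kind compared above are commonly summarized by fitting an exponential recovery of response amplitude as a function of masker-probe interval, where a shorter time constant corresponds to faster neural recovery. The sketch below works under that common assumption; the data are invented and the study's actual fitting procedure is not described in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(mpi_ms, a_max, t0, tau):
    """Exponential recovery: little response near the absolute refractory
    period t0, then growth toward a_max with time constant tau (ms)."""
    return a_max * (1.0 - np.exp(-(mpi_ms - t0) / tau))

mpi = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0])        # masker-probe interval, ms
amp = np.array([0.05, 0.22, 0.55, 0.72, 0.88, 0.95])  # normalized ECAP amplitude

params, _ = curve_fit(recovery, mpi, amp, p0=[1.0, 0.4, 2.0])
a_max, t0, tau = params
print(f"recovery time constant tau = {tau:.2f} ms")
```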


Biochimica et Biophysica Acta
Tailoring the CRISPR system to transactivate coagulation gene promoters in normal and mutated contexts
Publication date: June 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms, Volume 1862, Issue 6
Author(s): Silvia Pignani, Federico Zappaterra, Elena Barbon, Antonia Follenzi, Matteo Bovolenta, Francesco Bernardi, Alessio Branchini, Mirko Pinotti
Abstract
Engineered transcription factors (TFs) have expanded our ability to modulate gene expression and hold great promise as bio-therapeutics. First-generation TFs, based on Zinc Fingers or Transcription Activator-like Effectors (TALEs), required complex and time-consuming assembly protocols and were largely replaced in recent years by CRISPR activation (CRISPRa) technology. Here, with coagulation F7/F8 gene promoters as models, we exploited a CRISPRa system based on deactivated (d)Cas9, fused with a transcriptional activator (VPR), which is driven to its target by a single guide (sg)RNA.
Reporter gene assays in hepatoma cells identified a sgRNA (sgRNAF7.5) triggering a ~35-fold increase in the activity of the F7 promoter, either wild-type or defective due to the c.-61T>G mutation. The effect was higher (~15-fold) than that of an engineered TALE-TF (TF4) targeting the same promoter region. Notably, when challenged on the endogenous F7 gene, the dCas9-VPR/sgRNAF7.5 combination was more efficient (~6.5-fold) in promoting factor VII (FVII) protein secretion/activity than TF4 (~3.8-fold). The approach was translated to the promoter of F8, whose reduced expression causes hemophilia A. Reporter gene assays in hepatic and endothelial cells identified sgRNAs that, respectively, appreciably increased F8 promoter activity (sgRNAF8.1, ~8-fold and 3-fold; sgRNAF8.2, ~19-fold and 2-fold) with synergistic effects (~38-fold and 2.7-fold). Since modest increases in F7/F8 expression would ameliorate patients' phenotype, the CRISPRa-mediated transactivation extent might approach the low therapeutic threshold.
Through this pioneering study, we demonstrated that the CRISPRa system is easily tailorable to increase the expression of different promoters, or to rescue their disease-causing mutations, with potentially intriguing implications for human disease models.
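As background to the sgRNA screening described above, candidate protospacers for SpCas9-based CRISPRa are constrained by the NGG protospacer-adjacent motif (PAM). A toy forward-strand scan follows; the promoter sequence is invented, and real designs also weigh strand, distance from the transcription start site and off-target potential.

```python
def find_protospacers(promoter: str, spacer_len: int = 20):
    """Yield (position, spacer) pairs where a spacer is followed by an
    SpCas9 NGG PAM on the forward strand."""
    seq = promoter.upper()
    for i in range(len(seq) - spacer_len - 2):
        pam = seq[i + spacer_len:i + spacer_len + 3]
        if pam[1:] == "GG":                       # NGG PAM
            yield i, seq[i:i + spacer_len]

# Hypothetical promoter fragment.
promoter = "ATGCGTACCGGTTAGCTAGCATCGGATCGTTACGGAGCTAGCTAGGCTAGG"
for pos, spacer in find_protospacers(promoter):
    print(pos, spacer)
```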

A non-autonomous role of MKL1 in the activation of hepatic stellate cells
Publication date: June 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms, Volume 1862, Issue 6
Author(s): Zilong Li, Ping Li, Yunjie Lu, Donglin Sun, Xiaoying Zhang, Yong Xu
Abstract
Although hepatic stellate cells (HSCs) represent the major source of fibrogenesis in the liver under various pathological conditions, other cell types, including hepatic parenchymal cells (hepatocytes), also contribute to HSC activation and hence liver fibrosis. The underlying mechanism, however, is poorly defined. Here we report that hepatocytes exposed to high concentrations of glucose (HG) emit a pro-fibrogenic cue, as evidenced by the observation that primary HSCs cultured in conditioned media (CM) collected from hepatocytes exposed to HG up-regulated the production of extracellular matrix (ECM) proteins compared to CM collected from hepatocytes exposed to low glucose. We further identified the pro-fibrogenic cue from hepatocytes to be connective tissue growth factor (CTGF), because either depletion of endogenous CTGF in hepatocytes with siRNA or the addition of a CTGF-specific neutralizing antibody to the CM blunted the pro-fibrogenic effect elicited by HG treatment. Of interest, we discovered that genetic ablation or pharmaceutical inhibition of the transcriptional modulator MKL1 in hepatocytes also abrogated the HG-induced pro-fibrogenic effects. Mechanistically, MKL1 interacted with AP-1 and SMAD3 to trans-activate CTGF in hepatocytes in response to HG treatment. In conclusion, our data suggest that MKL1 contributes to HSC activation in a non-autonomous fashion by promoting CTGF transcription in hepatocytes.

Enhancer long-range contacts: The multi-adaptor protein LDB1 is the tie that binds
Publication date: June 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms, Volume 1862, Issue 6
Author(s): Guoyou Liu, Ann Dean
Abstract
The eukaryotic genome is organized at varying levels into chromosome territories, transcriptional compartments and topologically associating domains (TADs), which are architectural features largely shared between different cell types and across species. In contrast, within TADs, chromatin loops connect enhancers and their target genes to establish unique transcriptomes that distinguish cells and tissues from each other and underlie development and differentiation. How these tissue-specific and temporal stage-specific long-range contacts are formed and maintained is a fundamental question in biology. The widely expressed LIM domain binding 1 protein, LDB1, plays a critical role in connecting enhancers and genes by forming complexes with cell-type specificity across diverse developmental pathways including neurogenesis, cardiogenesis, retinogenesis and hematopoiesis. Here we review the multiple roles of LDB1 in cell fate determination and in chromatin loop formation, with an emphasis on mammalian systems, to illuminate how LDB1 functions in normal cells and in diseases such as cancer.

A novel role of U1 snRNP: Splice site selection from a distance
Publication date: June 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms, Volume 1862, Issue 6
Author(s): Ravindra N. Singh, Natalia N. Singh
Abstract
Removal of introns by pre-mRNA splicing is fundamental to gene function in eukaryotes. However, understanding the mechanism by which exon-intron boundaries are defined remains a challenging endeavor. Published reports support the view that recruitment of U1 snRNP at the 5′ss, marked by GU dinucleotides, defines the 5′ss and facilitates 3′ss recognition through cross-exon interactions. However, exceptions to this rule exist, as U1 snRNP recruited away from the 5′ss retains the capability to define the splice site where the cleavage takes place. Independent reports employing exon 7 of the Survival Motor Neuron (SMN) genes suggest a long-distance effect of U1 snRNP on splice site selection upon U1 snRNP recruitment at target sequences with or without GU dinucleotides. These findings underscore that sequences distinct from the 5′ss may also impact exon definition if U1 snRNP is recruited to them through partial complementarity with the U1 snRNA. In this review we discuss the expanded role of U1 snRNP in splice-site selection due to its ability to be recruited at more sites than predicted solely on the basis of GU dinucleotides.
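The partial complementarity discussed above can be made concrete with a toy score that simply counts Watson-Crick pairs between a candidate pre-mRNA site and the 5′ end of U1 snRNA in antiparallel orientation. This sketch ignores wobble pairing and thermodynamics, so it is illustrative only.

```python
U1_5_PRIME = "AUACUUACCUG"  # 5' end of human U1 snRNA
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def u1_complementarity(site: str) -> int:
    """Count Watson-Crick pairs between a candidate site (5'->3', RNA
    alphabet, same length as the U1 5' end) and U1 snRNA, antiparallel."""
    return sum((s, u) in PAIRS for s, u in zip(site, reversed(U1_5_PRIME)))

print(u1_complementarity("CAGGUAAGUAU"))  # consensus 5'ss: 11/11 pairs
print(u1_complementarity("CAGCUAAGAAU"))  # site without GU: fewer pairs
```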

Circular exonic RNAs: when RNA structure meets topology
Publication date: Available online 15 May 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms
Author(s): Dmitri D. Pervouchine
Abstract
Although RNA circularization was first documented in the 1990s, the extent to which it occurs was not known until recent advances in high-throughput sequencing enabled the widespread identification of circular RNAs (circRNAs). Despite this, many aspects of circRNA biogenesis, structure, and function yet remain obscure. This review focuses on circular exonic RNAs, a subclass of circRNAs that are generated through backsplicing. Here, I hypothesize that RNA secondary structure can be the common factor that promotes both exon skipping and spliceosomal RNA circularization, and that backsplicing of double-stranded regions could generate topologically linked circRNA molecules. CircRNAs manifest themselves by the presence of tail-to-head exon junctions, which were previously attributed to post-transcriptional exon permutation and repetition. I revisit these observations and argue that backsplicing does not automatically imply RNA circularization because tail-to-head exon junctions give only local information about transcript architecture and, therefore, they are in principle insufficient to determine globally circular topology.
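The tail-to-head signature mentioned above reduces, in coordinate terms, to a junction whose splice donor lies downstream of its acceptor on the genome. A toy check of that logic follows; the coordinates are invented and a plus-strand gene is assumed.

```python
def is_tail_to_head(donor_pos: int, acceptor_pos: int) -> bool:
    """True if a junction runs from a downstream donor back to an upstream
    acceptor - the back-splicing signature on a plus-strand gene."""
    return donor_pos > acceptor_pos

print(is_tail_to_head(donor_pos=5000, acceptor_pos=8000))  # False: linear splice
print(is_tail_to_head(donor_pos=8000, acceptor_pos=5000))  # True: back-splice
```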

DISC1 promotes translation maintenance during sodium arsenite-induced oxidative stress
Publication date: Available online 7 May 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms
Author(s): Francisco Fuentes-Villalobos, Carlos Farkas, Sebastián Riquelme-Barrios, Marisol E. Armijo, Ricardo Soto-Rifo, Roxana Pincheira, Ariel F. Castro
Abstract
Variation in Disrupted-in-Schizophrenia 1 (DISC1) increases the risk for neurodegenerative diseases, schizophrenia, and other mental disorders. However, the functions of DISC1 associated with the development of these diseases remain unclear. DISC1 has been reported to inhibit Akt/mTORC1 signaling, a major regulator of translation, and recent studies indicate that DISC1 could exert a direct role in translational regulation. Here, we present evidence of a novel role of DISC1 in the maintenance of protein synthesis during oxidative stress. In order to investigate DISC1 function independently of Akt/mTORC1, we used Tsc2−/− cells, where mTORC1 activation is independent of Akt. DISC1 knockdown enhanced inhibition of protein synthesis in cells treated with sodium arsenite (SA), an oxidative agent used for studying stress granule (SG) dynamics and translational control. N-acetyl-cysteine inhibited the effect of DISC1, suggesting that DISC1 affects translation in response to oxidative stress. DISC1 decreased the number of SGs in SA-treated cells, but resided outside SGs and maintained protein synthesis independently of proper SG nucleation. DISC1-dependent stimulation of translation in SA-treated cells was supported by its interaction with eIF3h, a component of the canonical translation initiation machinery. Consistent with a role in the homeostatic maintenance of translation, DISC1 knockdown or overexpression decreased cell viability after SA exposure. Our data suggest that DISC1 is a relevant component of the cellular response to stress, maintaining certain levels of translation and preserving cell integrity. This novel function of DISC1 might be involved in its association with pathologies affecting tissues frequently exposed to oxidative stress.

PTBP1 enhances exon11a skipping in Mena pre-mRNA to promote migration and invasion in lung carcinoma cells
Publication date: Available online 7 May 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms
Author(s): Shuaiguang Li, Lianghua Shen, Luyuan Huang, Sijia Lei, Xingdong Cai, Mason Breitzig, Bin Zhang, Annan Yang, Wenzuo Ji, Meiyan Huang, Qing Zheng, Hanxiao Sun, Feng Wang
Abstract
Alternative splicing (AS) events occur in the majority of human genes. AS in a single gene can give rise to different functions among multiple isoforms. The human ortholog of mammalian enabled (Mena) is a conserved regulator of actin dynamics that plays an important role in metastasis. Mena has been shown to have multiple splice variants in human tumor cells due to AS. However, the mechanism mediating Mena AS has not been elucidated. Here we showed that polypyrimidine tract-binding protein 1 (PTBP1) could modulate Mena AS. First, PTBP1 levels were elevated in metastatic lung cancer cells as well as during the epithelial-mesenchymal transition (EMT) process. Then, knockdown of PTBP1 using shRNA inhibited migration and invasion of lung carcinoma cells and decreased Mena exon11a skipping, whereas overexpression of PTBP1 had the opposite effects. The results of RNA pull-down assays and mutation analyses demonstrated that PTBP1 functionally targeted and physically interacted with polypyrimidine sequences on both upstream intron 11 (TTTTCCCCTT) and downstream intron 11a (TTTTTTTTTCTTT). In addition, the results of migration and invasion assays as well as detection of filopodia revealed that the effect of PTBP1 was reversed by knockdown of Mena but not of Mena11a+. Overexpressed MenaΔ11a also rescued PTBP1-induced migration and invasion. Taken together, our study reveals a novel mechanism by which PTBP1 modulates Mena exon11a skipping, and indicates that PTBP1 acts through the level of the Mena11a− isoform to promote lung cancer cell migration and invasion. The regulation of Mena AS may be a potential prognostic marker and a promising target for the treatment of lung carcinoma.
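Exon inclusion of the kind PTBP1 modulates here is conventionally quantified from RNA-seq junction reads as percent spliced-in (PSI). The abstract does not report this metric, so the simplified sketch below, with invented read counts, is illustrative only.

```python
def psi(inclusion_reads: int, skipping_reads: int) -> float:
    """Simplified PSI = inclusion / (inclusion + skipping), in percent.
    Inclusion reads support exon11a; skipping reads join the flanking
    exons directly (the Mena11a- isoform)."""
    total = inclusion_reads + skipping_reads
    return 100.0 * inclusion_reads / total if total else float("nan")

print(psi(30, 70))  # 30.0 -> mostly exon11a skipping, as with high PTBP1
```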

HSF1 phosphorylation by cyclosporin A confers hyperthermia sensitivity through suppression of HSP expression
Publication date: Available online 3 May 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms
Author(s): Jingyu Shao, Beibei Han, Pengxiu Cao, Bingwei Zhang, Ming Liu, Danyu Li, Nan Zhou, Qiang Hao, Xianglin Duan, Yanzhong Chang, Akira Nakai, Yumei Fan, Ke Tan
Abstract
Heat shock leads to the activation of heat shock factor 1 (HSF1) and up-regulation of a number of heat shock proteins (HSPs). Cyclosporin A (CsA) is an immunosuppressant that has revolutionized organ transplantation in clinical medicine. However, the effects of CsA on HSP expression and the underlying regulatory mechanisms remain largely unknown. Here, we found that CsA pretreatment prevented the induction of HSPs during heat shock by enhancing the phosphorylation of Ser303 and Ser307 in HSF1, which inhibited HSF1 transcriptional activity. Inhibition of ERK1/2, GSK3β and CK2 ameliorated the CsA-induced down-regulation of HSP expression and up-regulation of HSF1 phosphorylation. CsA impeded HSF1-SSBP1 complex formation, HSF1 nuclear translocation and recruitment to the HSP70 promoter. Owing to the low expression of HSPs, CsA treatment clearly caused cell death during proteotoxic stress. These results indicated that CsA suppressed the induction of HSPs during heat shock through regulation of the phosphorylation and nuclear translocation of HSF1. Our study could provide a conceptual framework for the development of novel strategies for combination therapy utilizing hyperthermia or chemotherapy and CsA treatment.

MiR-15/16 mediate crosstalk between the MAPK and Wnt/β-catenin pathways during hepatocyte differentiation from amniotic epithelial cells
Publication date: May 2019
Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms, Volume 1862, Issue 5
Author(s): Chunyu Bai, Hongwei Zhang, Xiangyang Zhang, Wancai Yang, Xiangchen Li, Yuhua Gao
Abstract
MiR-15/16 play an important role in liver development and hepatocyte differentiation, but the mechanisms by which these miRNAs regulate their targets and downstream genes to influence cell fate are poorly understood. In this study, we showed up-regulation of miR-15/16 during HGF- and FGF4-induced hepatocyte differentiation from amniotic epithelial cells (AECs). To elucidate the role of miR-15/16 and their targets in hepatocyte differentiation, we investigated the roles of miR-15/16 in both the MAPK and Wnt/β-catenin pathways, which were predicted to be involved in miR-15/16 signaling. Our results demonstrated that the transcription of miR-15/16 was enhanced by c-Fos, c-Jun, and CREB, important elements of the MAPK pathway, and that miR-15/16 in turn directly targeted adenomatous polyposis coli (APC) protein, a major member of the β-catenin degradation complex. By disabling these degradation complexes, miR-15/16 activated β-catenin, and the activated β-catenin combined with LEF/TCF7L1 to form a transcriptional complex that enhanced transcription of hepatocyte nuclear factor 4 alpha (HNF4α). HNF4α also bound the promoter region of miR-15/16 and promoted its transcription, thereby forming a regulatory circuit to promote the differentiation of AECs into hepatocytes. Endogenous miRNAs are, therefore, involved in hepatocyte differentiation from AECs and should be considered during the development of an effective hepatocyte transplant therapy for liver damage.



Allergy and Clinical Immunology
New phenotypes in hypersensitivity reactions to nonsteroidal anti-inflammatory drugs
Purpose of review Nonsteroidal anti-inflammatory drugs (NSAIDs) are among the most frequently prescribed medications, and hypersensitivity to NSAIDs is a commonly encountered adverse drug reaction. However, NSAID hypersensitivity presents a variety of symptoms caused by diverse pharmacological and immunological mechanisms. Recent findings Owing to the heterogeneity of the disease, a new concept for the classification of NSAID hypersensitivity has recently been proposed to diagnose and manage NSAID hypersensitivity for personalized treatment. Acute and delayed reactions were distinguished in this classification, and identification of symptoms and speculation on putative mechanisms help physicians make the right diagnosis. NSAID-exacerbated respiratory disease is a notable phenotype of NSAID hypersensitivity that involves upper airway comorbidities (chronic rhinosinusitis with nasal polyps) as well as asthmatic features. Cutaneous phenotypes of NSAID hypersensitivity also occur, and cross-reactivity with other types of NSAID should be considered in establishing a proper diagnosis. Hypersensitivity to a single NSAID can present as urticaria/angioedema and anaphylaxis, in which an IgE-mediated immune response is suggested to be the prime mechanism. Management of NSAID hypersensitivity reactions includes avoidance, pharmacological treatment following standard guidelines, and aspirin desensitization. Summary The classification, diagnosis, and management of NSAID hypersensitivity should be individually reached by identifying its phenotype. Correspondence to Hae-Sim Park, Department of Allergy and Clinical Immunology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon 16499, Korea. Tel: +82 31 219 5150; fax: +82 31 219 5154; e-mail: hspark@ajou.ac.kr Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Microbiome and skin biology
Purpose of review The skin is home to a diverse milieu of bacteria, fungi, viruses, bacteriophages, and archaeal communities. The application of culture-independent approaches has revolutionized the characterization of the skin microbiome and has revealed a previously underappreciated phylogenetic and functional granularity of skin-associated microbes in both health and disease states. Recent findings The physiology of a given skin niche drives the site-specific differences in bacterial phyla composition of healthy skin. Changes in the skin microbiome have consistently been associated with atopic dermatitis. In particular, Staphylococcus aureus overgrowth with a concomitant decline in Staphylococcus epidermidis is a general feature associated with atopic dermatitis and is not restricted to eczematous lesions. Changes in fungal species are now also being described. Changes in the composition and metabolic activity of the gut microbiota are also associated with skin health. Summary We are now beginning to appreciate the intimate and intricate interactions between microbes and skin health. Multiple studies are currently focused on the manipulation of the skin or gut microbiome to explore their therapeutic potential in the prevention and treatment of skin inflammation. Correspondence to Liam O'Mahony, Office 450, 4th Floor Food Science and Technology Building, University College Cork, Cork, Ireland. Tel.: +353 21 4901316; e-mail: liam.omahony@ucc.ie Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Recombinant allergens for immunotherapy: state of the art
Purpose of review More than 30 years ago, the first molecular structures of allergens were elucidated and defined recombinant allergens became available. We review the state of the art regarding molecular allergen-specific immunotherapy (AIT), with the goal of understanding why progress in this field has been slow, although there is huge potential for treatment and allergen-specific prevention. Recent findings On the basis of allergen structures, several AIT strategies have been developed and advanced into clinical evaluation. In clinical AIT trials, promising results were obtained with recombinant and synthetic allergen derivatives inducing allergen-specific IgG antibodies, which interfered with allergen recognition by IgE, whereas clinical efficacy could not yet be demonstrated for approaches targeting only allergen-specific T-cell responses. Available data suggest that molecular AIT strategies have many advantages over allergen extract-based AIT. Summary Clinical studies indicate that recombinant allergen-based AIT vaccines, which are superior to existing allergen extract-based AIT, can be developed for respiratory, food and venom allergy. Allergen-specific preventive strategies based on recombinant allergen-based vaccine approaches and induction of T-cell tolerance are on the horizon and hold promise that allergy can be prevented. However, progress is limited by lack of the resources needed for clinical studies, which are necessary for the development of these innovative strategies. Correspondence to Rudolf Valenta, MD, Division of Immunopathology, Department of Pathophysiology and Allergy Research, Center for Pathophysiology, Infectiology and Immunology, Medical University of Vienna, Währinger Gürtel 18-20, A-1090 Vienna, Austria. Tel: +43 140400 51080; e-mail: Rudolf.valenta@meduniwien.ac.at Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Use of biologics in chronic sinusitis with nasal polyps
Purpose of review Chronic rhinosinusitis with nasal polyps (CRSwNP) is a heterogeneous inflammatory condition with different endotypes between patients from eastern and western countries. Targeted biologics are currently used to treat CRSwNP, but the outcomes vary widely. This review focuses on the present use of biologics for treating CRSwNP. Recent findings Monoclonal biologics have been used as an innovative therapy for multiple allergic diseases and comorbid allergic conditions. Over the past several decades, numerous biomarkers have been investigated and found to be closely correlated with CRSwNP, improving the understanding of inflammatory patterns and endotype classifications for CRSwNP and prompting discussion regarding the use of biologics in CRSwNP. Reported efficacies vary across research groups, but patients with TH-2-driven inflammatory patterns have been found to respond better to biologics than those with non-TH-2-driven CRSwNP. These findings suggest the importance and urgency of developing criteria for biologics in CRSwNP. Summary Precisely determining patient criteria, identifying treatment biomarkers based on endotyping for CRSwNP and determining contraindications for long-term utilization may be useful for optimizing treatment strategies and improving the therapeutic efficacy of biologics to achieve long-term control starting at early stages. Correspondence to Luo Zhang, MD, PhD, Beijing TongRen Hospital, Capital Medical University, No. 1, DongJiaoMinXiang, DongCheng District, Beijing, China. Tel: +86 13910830399; e-mail: dr.luozhang@139.com Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Therapeutic approach of anaphylaxis
Purpose of review Anaphylaxis is a recognized cause of death in all ages, which requires prompt recognition and treatment. Here we review current and new pharmacological treatments of anaphylaxis in view of new knowledge in the field that can support quality practice and empower allergists and health professionals with new tools to treat symptoms and prevent anaphylaxis. Recent findings The recent description of phenotypes provides new insight and understanding into the mechanisms and causes of anaphylaxis through a better understanding of endotypes and application of precision medicine. Several biologic therapies and new devices are emerging as potential preventive treatments for anaphylaxis. Summary Adrenaline (epinephrine) is still the first-line treatment for any type of anaphylaxis and is recognized as the only medication documented to prevent hospitalizations, hypoxic sequelae and fatalities. β2-adrenergic agonists and glucagon remain second-line treatments of anaphylaxis, while glucocorticoids and antihistamines should be used only as third-line treatment. Their administration should never delay adrenaline injection in anaphylaxis. More intuitive adrenaline autoinjector designs and features are required, as well as worldwide availability of adrenaline autoinjectors. Biological drugs, such as omalizumab, have been used as therapeutic adjuvants as a preventive treatment of anaphylaxis, but cost-effectiveness should be considered individually. Understanding the specifications of underlying mechanisms can potentially support improvements in the patients' allergological work-up and open the opportunity for the development of potential new drugs, such as biological agents. Expanding knowledge with regard to the presentation, causes, and triggers for anaphylaxis among healthcare providers will improve its diagnosis and management, increase patient safety, and decrease morbidity and mortality. Correspondence to Luciana Kase Tanno, MD, PhD, Department of Pulmonology, Division of Allergy, Hôpital Arnaud de Villeneuve, University Hospital of Montpellier, 371, av. du Doyen Gaston Giraud 34295, Montpellier Cedex 5, France. Tel: +33 467336107; fax: +33 467633645; e-mail: luciana.tanno@gmail.com Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Eosinophilic esophagitis during sublingual and oral allergen immunotherapy
Purpose of review The aim of this review is to discuss the current evidence regarding the development of eosinophilic esophagitis (EoE) in individuals undergoing oral immunotherapy (OIT) and sublingual immunotherapy (SLIT) for both food and environmental allergens. The cumulative incidence of EoE in patients on allergen immunotherapy for peanut, milk, and egg is estimated. Recent findings De novo development of EoE in patients undergoing OIT and SLIT has been demonstrated on the scale of case reports and prospective randomized trials. However, few individuals with EoE-like symptoms during immunotherapy undergo endoscopy, and the long-term outcomes of immunotherapy-associated EoE are unknown. Summary Evidence exists to suggest that allergen immunotherapy could place individuals at risk for the development of EoE, the true incidence of which may vary depending on antigen exposure and the methods used to define the condition. Correspondence to Jonathan M. Spergel, MD, PhD, Division of Allergy and Immunology, Children's Hospital of Philadelphia, The Wood Building, 3401 Civic Center Blvd., Philadelphia, PA 19104, USA. Tel: +1 215 590 2549; fax: +1 215 590 6849; e-mail: spergel@email.chop.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.


Anaesthesiology
Effect of sevoflurane-based or propofol-based anaesthesia on the incidence of postoperative acute kidney injury: A retrospective propensity score-matched analysis
BACKGROUND Propofol may help to protect against ischaemic acute kidney injury (AKI); however, research on this topic is sparse. OBJECTIVE The current study aimed to investigate whether there were differences in the incidence of postoperative AKI after lung resection surgery between patients who received propofol-based total intravenous anaesthesia (TIVA) and those who received sevoflurane-based inhalational anaesthesia. DESIGN A retrospective observational study. SETTING A single tertiary care hospital. PATIENTS Medical records of patients aged 19 years or older who underwent curative lung resection surgery for nonsmall cell lung cancer between January 2005 and February 2018 were examined. MAIN OUTCOME MEASURES After propensity score matching, the incidence of AKI in the first 3 postoperative days was compared between patients who received propofol and those who received sevoflurane. Logistic regression analyses were also used to investigate whether propofol-based TIVA lowered the risk of postoperative AKI. RESULTS The analysis included 2872 patients (1477 in the sevoflurane group and 1395 in the propofol group). After propensity score matching, 661 patients were included in each group; 24 (3.6%) of the 661 patients in the sevoflurane group developed AKI compared with 23 (3.5%) of the 661 patients in the propofol group (95% confidence interval of the difference in incidence −0.019 to 0.022, P = 0.882). The logistic regression analyses revealed that the incidence of AKI did not differ between the two groups (odds ratio 0.96, 95% confidence interval 0.53 to 1.71, P = 0.882). CONCLUSION In this retrospective study, no significant difference was found in the incidence of postoperative AKI after lung resection surgery between patients who received propofol-based TIVA and those who received sevoflurane-based inhalational anaesthesia. Considering the methodological limitations of this retrospective study, further studies are required to confirm these results. Correspondence to Tak Kyu Oh, Department of Anesthesiology and Pain Medicine, Seoul National University Bundang Hospital, 166, Gumi-ro, Bundang-gu, Seongnam-si 13620, Gyeonggi-do, South Korea Tel: +82 31 787 7501; fax: +82 31 787 4063; e-mail: airohtak@hotmail.com Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website (www.ejanaesthesiology.com). © 2019 European Society of Anaesthesiology
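The propensity score workflow described above can be sketched as a two-step procedure: a logistic model for treatment assignment, followed by 1:1 nearest-neighbour matching. The covariates and data below are invented, and the study's actual matching specification may well differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "propofol": rng.integers(0, 2, n),   # 1 = propofol TIVA, 0 = sevoflurane
    "age": rng.normal(65, 10, n),        # hypothetical covariates
    "egfr": rng.normal(80, 15, n),
})

# Step 1: propensity score = P(propofol | covariates).
ps_model = smf.logit("propofol ~ age + egfr", data=df).fit(disp=0)
df["ps"] = ps_model.predict(df)

# Step 2: greedy 1:1 nearest-neighbour matching without replacement.
treated = df[df.propofol == 1]
controls = df[df.propofol == 0].copy()
pairs = []
for idx, row in treated.iterrows():
    if controls.empty:
        break
    j = (controls.ps - row.ps).abs().idxmin()   # closest unmatched control
    pairs.append((idx, j))
    controls = controls.drop(j)
print(f"{len(pairs)} matched pairs")  # AKI rates are then compared within pairs
```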
Colour Doppler ultrasound after major cardiac surgery improves diagnostic accuracy of the pulmonary infection score in acute respiratory failure: A prospective observational study
BACKGROUND Postoperative pneumonia is a frequent complication after cardiac surgery, and its diagnosis is difficult. Little is known about the diagnostic accuracy of lung ultrasound (LUS) in the detection of pneumonia in cardiac surgical patients. The substitution of chest radiography by colour Doppler LUS (LUS-sCPIS) in the simplified clinical pulmonary infection score (sCPIS) could improve the diagnosis of pneumonia following cardiac surgery. OBJECTIVE The aim of this study was to compare the diagnostic accuracy of LUS-sCPIS and of sCPIS alone in the detection of postoperative pneumonia after cardiac surgery. DESIGN A prospective study of diagnostic accuracy. SETTING A surgical intensive care unit of a French university hospital. PATIENTS Fifty-one patients with acute respiratory failure within 72 h after cardiac surgery were enrolled between January and May 2015. MAIN OUTCOME MEASURES The two index tests, LUS-sCPIS and sCPIS, were calculated for all patients at the onset of acute respiratory failure. The reference standard for the diagnosis of pneumonia was based on the consensus of three physicians, blind to the sCPIS and LUS-sCPIS data, after a post hoc review of all the clinical, radiological and microbiological evidence. The diagnostic accuracy of LUS-sCPIS was compared with that of sCPIS in the detection of postoperative pneumonia. RESULTS Pneumonia was diagnosed in 26 out of 51 patients. The LUS-sCPIS detected the presence of pneumonia with a sensitivity of 92% (95% CI 0.85 to 0.99) and a specificity of 68% (95% CI 0.55 to 0.81). The sCPIS detected the presence of pneumonia with a sensitivity of 35% (95% CI 0.22 to 0.48) and a specificity of 84% (95% CI 0.74 to 0.94). The area under the curve (AUC) of LUS-sCPIS at 0.80 (95% CI 0.69 to 0.91) was higher than the AUC of sCPIS at 0.59 (95% CI 0.47 to 0.71; P = 0.0008). CONCLUSION Compared with sCPIS, LUS-sCPIS improved diagnostic accuracy in the detection of postoperative pneumonia in patients with acute respiratory failure after cardiac surgery. It could be a useful bedside tool to guide pneumonia management. TRIAL REGISTRATION Clinicaltrials.gov identifier: NCT03279887. https://clinicaltrials.gov/ct2/show/NCT03279887?term=bougl%C3%A9&rank=4 Correspondence to Adrien Bouglé, MD, PhD, Département d'Anesthésie et de Réanimation, Réanimation de Chirurgie Cardiaque, Institut de Cardiologie, Hôpital Universitaire La Pitié-Salpêtrière, 47–83 boulevard de l'Hôpital, Paris 75013, France; e-mail: adrien.bougle@aphp.fr Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website (www.ejanaesthesiology.com). © 2019 European Society of Anaesthesiology
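The headline accuracy figures above can be checked from the reported 26/25 split between pneumonia and no pneumonia. The 2x2 cell counts below are back-calculated from the rounded sensitivity and specificity, so they are approximate.

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# LUS-sCPIS: about 24 of 26 pneumonias detected, 17 of 25 correctly ruled out.
print(sens_spec(tp=24, fn=2, tn=17, fp=8))   # ~(0.92, 0.68)
# sCPIS alone: about 9 of 26 detected, 21 of 25 correctly ruled out.
print(sens_spec(tp=9, fn=17, tn=21, fp=4))   # ~(0.35, 0.84)
```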
Reintubation in the ICU following cardiac surgery: is it more difficult than first-time intubation in the operating room?: A prospective, observational study
BACKGROUND After cardiac surgery, a patient's trachea is usually extubated; however, 2 to 13% of cardiac surgery patients require reintubation in the ICU. OBJECTIVE The objective of this study was to compare the initial intubation in the cardiac operating room with reintubation (if required) in the ICU following cardiac surgery. DESIGN A prospective, observational study. SETTING Department of Anesthesiology and Intensive Care Medicine, Clinical Hospital of Santiago, Spain. PATIENTS With approval of the local ethics committee, over a 44-month period, we prospectively enrolled all cardiac surgical patients who were intubated in the operating room using direct laryngoscopy, and who required reintubation later in the ICU. MAIN OUTCOME MEASURES The primary endpoint was to compare first-time success rates for intubation in the operating room and ICU. Secondary endpoints were to compare the technical difficulties of intubation (modified Cormack–Lehane glottic view, operator-reported difficulty of intubation, need for support devices for direct laryngoscopy) and the incidence of complications. RESULTS A total of 122 cardiac surgical patients required reintubation in the ICU. Reintubation was associated with a lower first-time success rate than in the operating room (88.5 vs. 97.6%, P = 0.0048). Reintubation in the ICU was associated with a higher incidence of Cormack–Lehane grades IIb, III or IV views (34.5 vs. 10.7%, P < 0.0001), a higher incidence of moderate or difficult intubation (17.2 vs. 6.5%, P = 0.0001) and a greater need for additional support during direct laryngoscopy (20.5 vs. 10.7%, P = 0.005). Complications were more common during reintubations in the ICU (39.3 vs. 5.7%, P < 0.0001). CONCLUSION Compared with intubations in the operating room, reintubation of cardiac surgical patients in the ICU was associated with more technical difficulties and a higher incidence of complications. CLINICAL TRIAL NUMBER Ethics committee of Galicia number 2015-012. Correspondence to Dr Manuel Taboada, Department of Anesthesiology and Intensive Care Medicine, Servicio de Anestesiología y Reanimación del Hospital Clínico Universitario de Santiago de Compostela, Choupana sn, CP:15706 Santiago de Compostela, A Coruña, España Tel: +00 34 678195618; e-mail: manutabo@yahoo.es © 2019 European Society of Anaesthesiology
Propofol intravenous anaesthesia with desflurane compared with desflurane alone on postoperative liver function after living-donor liver transplantation: A randomised controlled trial
BACKGROUND Propofol is an anaesthetic that resembles α-tocopherol and it has been suggested that it protects against ischaemia-reperfusion injury in liver transplantation. Living-donor liver transplantation (LDLT) presents an opportunity to test this hypothesis in both donors and recipients. OBJECTIVES We compared clinical outcomes after LDLT following anaesthesia with propofol and desflurane against desflurane alone. DESIGN A prospective, randomised, parallel study. SETTING Single-centre trial, study period June 2014 and May 2017. PATIENTS Sixty-two pairs of adult donors and recipients who underwent LDLT. INTERVENTION Patients were randomised to receive either desflurane balanced anaesthesia or propofol total intravenous anaesthesia combined with desflurane anaesthesia. MAIN OUTCOME MEASURES The primary outcome was peak liver transaminase levels during the first 7 days after surgery. Liver function was assessed at 10 different time-points (before surgery, 1 h after reperfusion, upon arrival in the ICU, and daily until postoperative day 7). Creatinine was measured to evaluate the incidence of acute kidney injury. TNF-α, IL-1β, IL-6 and TGF-β1 were assessed in 31 donors after induction, at hepatectomy and at the end of surgery and in 52 recipients after induction, and 1, 3 and 24 h after reperfusion. RESULTS Peak liver transaminase levels were not significantly different between the two groups. Liver function tests and creatinine were also similar between groups at all time-points. There was no difference in the incidence of postoperative complications, including acute kidney injury. With the exception of higher TNF-α in donors of the Propofol group at hepatectomy (0.60 ± 0.29 vs. 1.03 ± 0.53, P = 0.01) cytokine results were comparable between the two groups. CONCLUSION Despite the simultaneous administration of propofol infusion in both donors and recipients, no improvement in laboratory or surgical outcome was observed after LDLT compared with patients who received desflurane anaesthesia alone. TRIAL REGISTRATION NCT02504138 at clinicaltrials.gov. Correspondence to Young C. Yoo, Department of Anesthesiology and Pain Medicine, Severance Hospital, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea Tel: +82 2 2228 2420; fax: +82 2 2227 7897; e-mail: seaoyster@yuhs.ac © 2019 European Society of Anaesthesiology
Can quantitative sensory tests predict failed back surgery?: A prospective cohort study
BACKGROUND Failed back surgery syndrome (FBSS) is a pain condition refractory to therapy, and is characterised by persistent low back pain after spinal surgery. FBSS is associated with severe disability, low quality of life and high unemployment. We are currently unable to identify patients who are at risk of developing FBSS. Patients with chronic low back pain may display signs of central hypersensitivity as assessed by quantitative sensory tests (QST). This can contribute to the risk of developing persistent pain after surgery. OBJECTIVE We tested the hypothesis that central hypersensitivity as assessed by QST predicts FBSS. DESIGN AND SETTING We performed a prospective cohort study in three tertiary care centres with 141 patients scheduled for up to three-segment spinal surgery for chronic low back pain due to degenerative changes. PATIENTS Chronic low back pain was defined as at least 3 on a numerical rating scale on most days during the week and with a minimum duration of 3 months. OUTCOMES We defined FBSS as persistent pain, persistent disability, or a composite of either persistent pain or disability. The primary outcome was persistent pain 12 months after surgery. We applied 14 QST using electrical, pressure and temperature stimulation to predict FBSS and assessed the association of QST with FBSS in multivariable analyses adjusted for sociodemographic, psychological, clinical and surgery-related characteristics. RESULTS None of the 14 investigated QST predicted FBSS; the 95% confidence intervals of the crude and adjusted associations of all QST included 1, the value indicating no association. Results remained robust in all sensitivity and secondary analyses. CONCLUSION The study indicates that assessment of altered central pain processing using current QST is unlikely to identify patients at risk of FBSS and is therefore unlikely to inform clinical decisions. Correspondence to Prof. Michele Curatolo, MD, PhD, Department of Anesthesiology and Pain Medicine, University of Washington, 1959 NE Pacific Street, Box 356540 Seattle, WA 98195-6540, USA Tel: +1 206 543 2568; fax: +1 206 543 2958; e-mail: curatolo@uw.edu Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website (www.ejanaesthesiology.com). © 2019 European Society of Anaesthesiology
Tidal volume challenge to predict fluid responsiveness in the operating room: A prospective trial on neurosurgical patients undergoing protective ventilation
BACKGROUND Pulse pressure variation (PPV) and stroke volume variation (SVV) do not predict fluid responsiveness when using a protective ventilation strategy; the use of functional haemodynamic tests can be useful to overcome this limitation. OBJECTIVES We tested the use of a tidal volume challenge (VTC) during 6 ml kg−1 [predicted body weight (PBW)] ventilation, and the end-expiratory occlusion test (EEOT), for the prediction of fluid responsiveness. DESIGN An interventional prospective study. SETTING Supine elective neurosurgical patients. INTERVENTIONS The study protocol was as follows: first, an initial EEOT was performed during baseline 6 ml kg−1 PBW ventilation; second, the VTC was performed by increasing the VT up to 8 ml kg−1 PBW, and PPV and SVV changes were recorded after 1 min; third, a second EEOT was performed during 8 ml kg−1 PBW ventilation; and fourth, VT was reduced back to 6 ml kg−1 PBW and a third EEOT was performed. Finally, a 250 ml fluid challenge was administered over 10 min to identify fluid responders (increase in stroke volume index ≥10%). RESULTS In the 40 patients analysed, PPV and SVV values at baseline and during the EEOT performed at 6 ml kg−1 PBW did not predict fluid responsiveness. A 13.3% increase in PPV after the VTC predicted fluid responsiveness with a sensitivity of 94.7% and a specificity of 76.1%, while a 12.1% increase in SVV after the VTC predicted fluid responsiveness with a sensitivity of 78.9% and a specificity of 95.2%. After the EEOT performed at 8 ml kg−1 PBW, a 3.6% increase in cardiac index (CI) predicted fluid responsiveness with a sensitivity of 89.4% and a specificity of 85.7%, while a 4.7% increase in stroke volume index (SVI) did so with a sensitivity of 89.4% and a specificity of 85.7%. CONCLUSION The changes in PPV and SVV obtained after the VTC are reliable and comparable to the changes in CI and SVI obtained after the EEOT performed at 8 ml kg−1 PBW in predicting fluid responsiveness in neurosurgical patients. TRIAL REGISTRATION ACTRN12618000351213. Correspondence to Antonio Messina, MD, PhD, Department of Anaesthesia and Intensive Care Medicine, IRCCS Humanitas, Humanitas University, Via Alessandro Manzoni, 56, Rozzano – Milan 20089, Italy. Tel: +39(0)2 8224 8282; e-mail: mess81rc@gmail.com Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website (www.ejanaesthesiology.com). © 2019 European Society of Anaesthesiology
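PPV and SVV in studies like this one are conventionally computed as the maximum-minimum swing over a respiratory cycle, normalized by the mean of maximum and minimum. A sketch of that standard formula follows; the beat-to-beat values are invented.

```python
import numpy as np

def variation_pct(beat_values: np.ndarray) -> float:
    """PPV (or SVV) over a respiratory cycle, in percent:
    100 * (max - min) / ((max + min) / 2)."""
    vmax, vmin = beat_values.max(), beat_values.min()
    return 100.0 * (vmax - vmin) / ((vmax + vmin) / 2.0)

pp = np.array([42.0, 45.0, 51.0, 48.0, 44.0])  # pulse pressures (mmHg), one breath
ppv_baseline = variation_pct(pp)
# The tidal volume challenge then looks at the *change* in PPV after raising
# VT from 6 to 8 ml/kg; per the results above, a rise >13.3% predicted
# fluid responsiveness. SVV uses the same formula on stroke volumes.
```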
Effect of pre-operative oral carbohydrate loading on recovery after day-case cholecystectomy: A randomised controlled trial
BACKGROUND Pre-operative carbohydrate loading has been shown to reduce pre-operative discomfort and postoperative nausea and vomiting in general surgical patients. Few studies have considered day-case surgery. OBJECTIVE The aim of this prospective randomised study was to determine whether pre-operative carbohydrate loading enhanced recovery after day-case cholecystectomy. DESIGN A randomised controlled trial. SETTING Secondary care in a district general and a university hospital in Finland between 2013 and 2016. PATIENTS A total of 113 American Society of Anesthesiologists physical status I or II patients aged 18 to 70 years undergoing day-case cholecystectomy were included in the study. Exclusion criteria were bleeding or coagulation disorders, BMI more than 40 kg m−2, dementia, insulin-treated diabetes, migraine, Meniere's disease or a history of alcohol or drug abuse. INTERVENTION The carbohydrate-rich drink group received oral carbohydrate (200 ml) 2 to 3 h before surgery, and the control (fasting) group fasted from midnight according to standard protocol. MAIN OUTCOME MEASURES Visual analogue scales (VAS) were used to score six forms of discomfort; the need for analgesia and antiemetics, the times to drinking, eating and first mobilisation after surgery, and the time to discharge were also recorded. Any hospital re-admission was also recorded. RESULTS The highest VAS scores were seen for mouth dryness and tiredness 2 h after surgery in the fasting group. There were no significant differences in any VAS scores between the study groups. No differences in time to mobilisation, need for pain or antiemetic medication or time to discharge were seen between the groups. CONCLUSION Compared with overnight fasting, pre-operative carbohydrate loading did not significantly enhance peri-operative well-being or recovery in patients undergoing day-case cholecystectomy. TRIAL REGISTRATION Clinicaltrials.gov Identifier: NCT03757208. Correspondence to Heli Helminen, Senior Physician, Department of Surgery, Seinäjoki Central Hospital, Hanneksenrinne 7, 60220 Seinäjoki, Finland Tel: +358 64155888; e-mail: heli.helminen@epshp.fi © 2019 European Society of Anaesthesiology
Deep neuromuscular blockade improves surgical conditions during gastric bypass surgery for morbid obesity: A randomised controlled trial
BACKGROUND There is controversy in the literature as to whether deep, compared with moderate, neuromuscular block (NMB) improves surgical conditions for laparoscopic surgery. OBJECTIVES The primary objective was to examine whether switching from moderate to deep NMB improves surgical conditions for laparoscopic surgery in the obese; secondary outcome measures were changes in intra-abdominal pressure, the time required to perform the gastrojejunal anastomosis and peri-operative surgical complications. DESIGN A single-centre, randomised controlled study. Each patient was taken as their own control and examined twice: at the first evaluation (E1), all patients had a moderate NMB; thereafter patients were randomised to deep or moderate block and a second evaluation (E2) was performed within 10 min. Patients with an excellent rating at E1 were excluded from E2, as their surgical condition could not be further improved. SETTING A university hospital in France. PATIENTS Patients undergoing laparoscopic gastric bypass surgery under general anaesthesia were included. Main exclusion criteria were hypersensitivity to the drugs used and absence of written informed consent. INTERVENTIONS According to the group assignment, patients received bolus doses of rocuronium or 0.9% saline. MAIN OUTCOME MEASURES Surgical conditions were assessed with a 4-point rating scale. Intra-operative adverse events were assessed with the Kaafarani classification and postoperative complications with the Clavien-Dindo classification. RESULTS Eighty-nine patients were initially included and data from 85 could be assessed at E1; the surgical rating was excellent in 20, good in 35, acceptable in 18 and poor in 12. After excluding those with an excellent rating, the remaining 65 patients were randomly assigned to deep or moderate block. At E2, an improvement of surgical conditions was observed in 29 out of 34 patients with deep block and in four out of 31 with moderate block; P < 0.0001. Poor surgical conditions were more frequently associated with surgical complications (61.5 versus 15.3%; P < 0.001). CONCLUSION Switching from moderate to deep block improves surgical conditions. Poor surgical conditions were associated with a higher incidence of surgical complications. TRIAL REGISTRATION NCT02118844 (www.clinicaltrial.gov). Correspondence to Prof. Thomas Fuchs-Buder, University de Lorraine, CHRU Nancy, Brabois University Hospital, Department of Anesthesiology & Critical Care, 7 allée du Morvan, Vandoeuvre-les-Nancy 54511, France; E-mail: t.fuchs-buder@chru-nancy.fr © 2019 European Society of Anaesthesiology
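The headline E2 comparison (29/34 improved with deep block versus 4/31 with moderate block) can be reproduced with a two-by-two test (a sketch; the choice of Fisher's exact test here is ours, not necessarily the authors'):

    # Hedged sketch: checking the abstract's headline comparison with
    # Fisher's exact test; the authors' own statistical method may differ.
    from scipy.stats import fisher_exact

    table = [[29, 34 - 29],   # deep block: improved / not improved
             [4, 31 - 4]]     # moderate block: improved / not improved
    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.1f}, P = {p_value:.2e}")  # P < 0.0001, as reported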
A randomised controlled pragmatic trial of acupressure therapy on quality of recovery after surgery
BACKGROUND Acupressure therapy is associated with favourable efficacies on postoperative nausea, pain and sleep disturbance, although the quality of the evidence is generally low. No randomised clinical trial has yet assessed the effect of acupressure on postoperative quality of recovery (QoR). OBJECTIVE The objective was to study acupressure efficacy on patient-reported postoperative recovery. DESIGN We conducted a single-centre, three-group, blinded, randomised controlled, pragmatic trial assessing acupressure therapy on the PC6, LI4 and HT7 acupoints. PATIENTS Postoperative patients expected to stay in hospital at least 2 days after surgery. INTERVENTIONS In the acupressure group, pressure was applied for 6 min (2 min per acupoint), three times a day after surgery for a maximum of 2 postoperative days during the hospital stay. In the sham group, extremely light touch was applied to the acupoints. The third group did not receive any touch. MAIN OUTCOME MEASURES The primary outcome was the change in the QoR, using the QoR-15 questionnaire, between postoperative days 1 and 3. Key secondary outcomes included patients' satisfaction, postoperative nausea and vomiting, pain score and opioid (morphine equivalent) consumption. Assessors for the primary and secondary endpoints were blind to the group allocation. RESULTS Overall, 163 patients were randomised (acupressure n=55, sham n=53, no intervention n=55). The mean (SD) postoperative change in QoR-15 did not differ statistically (P = 0.27) between the acupressure, sham and no intervention groups: 15.2 (17.8), 14.2 (21.9) and 9.2 (21.7), respectively. Patient satisfaction (on a 0 to 10 scale) was statistically different (P = 0.01) among these three groups: 9.1 (1.5), 8.4 (1.6) and 8.2 (2.2), respectively. Changes in pain score and morphine equivalent consumption were not significantly different between the groups. CONCLUSION Two days of postoperative acupressure therapy (up to six treatments) did not significantly improve patient QoR, postoperative nausea and vomiting, pain score or opioid consumption. Acupressure, however, was associated with improved patient satisfaction. TRIAL REGISTRATION ClinicalTrials.gov, identifier: NCT02762435. Correspondence to Eric Noll, MD, PhD, Department of Anesthesiology and Intensive Care, Hôpitaux Universitaires de Strasbourg, Avenue Molière, 67098 Strasbourg, France Tel: +33 3 88127076; fax: +33 3 88127074; e-mail: eric.noll@chru-strasbourg.fr © 2019 European Society of Anaesthesiology
Initial end-tidal carbon dioxide as a predictive factor for return of spontaneous circulation in nonshockable out-of-hospital cardiac arrest patients: A last straw to cling to?
BACKGROUND Early outcome prediction in out-of-hospital cardiac arrest is still a challenge. End-tidal carbon dioxide (ETCO2) has been shown to be a reliable parameter reflecting the quality of cardiopulmonary resuscitation and the chance of return of spontaneous circulation (ROSC). OBJECTIVES This study assessed the validity of early capnography as a predictive factor for ROSC and survival in out-of-hospital cardiac arrest victims with an underlying nonshockable rhythm. DESIGN Retrospective observational study. SETTING/PATIENTS During a 2-year observational period, data from 2223 out-of-hospital cardiac arrest victims within the city of Vienna were analysed. The focus was on the following patients: age more than 18 years, an underlying nonshockable rhythm, and advanced airway management within the first 15 min of advanced life support with subsequent capnography. INTERVENTION No specific intervention was performed in this observational study. MAIN OUTCOME MEASURES The first measured ETCO2, assessed immediately after placement of an advanced airway, was used for further analysis. The primary outcome was defined as sustained ROSC, and the secondary outcome was 30-day survival. RESULTS A total of 526 patients met the inclusion criteria. These were stratified into three groups according to initial ETCO2 values (<20, 20 to 45, >45 mmHg). Baseline data and resuscitation factors were similar among all groups. The odds of sustained ROSC and survival were significantly higher for patients presenting with higher values of initial ETCO2 (>45 mmHg): odds ratios 3.59 [95% CI, 2.19 to 5.85], P = 0.001 and 5.02 [95% CI, 2.25 to 11.23], P = 0.001, respectively. In contrast, ETCO2 levels less than 20 mmHg were associated with significantly poorer outcomes. CONCLUSION Patients with a nonshockable out-of-hospital cardiac arrest who presented with higher values of initial ETCO2 had an increased chance of sustained ROSC and survival. This finding could inform decision making regarding continuation of resuscitation efforts. Correspondence to Michael Poppe, Universitätsklinik für Notfallmedizin, Medizinische Universität Wien, Währinger Gürtel 18–20/6D, 1090 Wien, Austria Tel: +43 14040019640; fax: +43 14040019650; e-mail: michael.poppe@meduniwien.ac.at © 2019 European Society of Anaesthesiology
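The three-group stratification used in the analysis is simple to express (a sketch; variable names are illustrative):

    # Hedged sketch of the ETCO2 stratification described in this abstract
    # (<20, 20-45, >45 mmHg); not the study's analysis code.

    def etco2_group(etco2_mmhg: float) -> str:
        """Assign a patient to one of the three initial ETCO2 strata."""
        if etco2_mmhg < 20:
            return "<20 mmHg (associated with poorer outcomes)"
        elif etco2_mmhg <= 45:
            return "20-45 mmHg"
        return ">45 mmHg (higher odds of sustained ROSC and survival)"

    for value in (12, 33, 52):
        print(value, "->", etco2_group(value))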

Awake craniotomy: anesthetic considerations based on outcome evidence
Purpose of review This review highlights anaesthesia management options for awake craniotomy and discusses the advantages and disadvantages of different approaches, intraoperative complications and future directions. Recent findings For lesions located within or adjacent to eloquent regions of the brain, awake craniotomy allows maximal tumour resection with minimal consequences on neurological function. Various techniques have been described to provide anaesthesia or sedation and analgesia during the initial craniotomy, and rapid return to consciousness for intraoperative testing and tumour resection; there is no evidence that one approach is superior to another. Although very safe, awake craniotomy is associated with some well-recognized complications; most are minor and self-limiting or easily reversed. In experienced hands, failure of awake craniotomy occurs in fewer than 2% of cases, irrespective of anaesthesia technique. Although brain tumour surgery remains the most common indication for awake craniotomy, the technique is finding utility in other neurosurgical procedures. Summary Several anaesthetic approaches are available for the management of patients during awake craniotomy. The choice of technique should be based on individual patient factors, location and duration of surgery, and anaesthesiologist expertise and experience. Appropriate patient selection and excellent multidisciplinary teamwork are associated with high levels of procedural success and patient satisfaction. Correspondence to Martin Smith, MBBS, FRCA, FFICM, Department of Neuroanaesthesia and Neurocritical Care Unit, The National Hospital for Neurology and Neurosurgery, University College London Hospitals, Queen Square, London WC1N 3BG, UK. E-mail: martin.smith@ucl.ac.uk Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Anesthesia and airway management for gastrointestinal endoscopic procedures outside the operating room
Purpose of review To review anesthetic and airway management for gastrointestinal procedures outside of the operating room. Recent findings The number of gastrointestinal endoscopic procedures performed is steadily increasing worldwide. As the complexity, duration and invasiveness of procedures increase, there is an ever greater requirement for deeper sedation or general anesthesia. A close relationship between anesthetic practitioners and endoscopists is required to ensure safe and successful outcomes. The American Society for Gastrointestinal Endoscopy and the British Society of Gastroenterology have recently released guidelines for sedation and general anesthesia in gastrointestinal endoscopy, highlighting the need for careful monitoring in all cases, and anesthetic expertise in complex cases. Recent advances in high-flow nasal oxygenation may provide alternative options for oxygenation during gastrointestinal sedation, especially deep sedation, and this may reduce the need for general anesthesia. Summary The advances in gastrointestinal endoscopic intervention have increased the requirement for deep sedation and anesthetic involvement outside of the operating room. Careful titration of anesthetic intervention and close monitoring are required to ensure patient safety. Correspondence to Jaideep J. Pandit, St John's College, Oxford OX1 3JP, UK. Tel: +44 1865 221590; e-mail: jaideep.pandit@dpag.ox.ac.uk Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Anesthesia practice for endovascular therapy of acute ischemic stroke in Europe
Purpose of review Anesthetic assistance is often required during endovascular therapy (EVT) of large vessel occlusion in patients with acute ischemic stroke. It is currently debated whether EVT should be performed under general anesthesia or conscious sedation. This review will summarize the recent literature with emphasis on the influence of anesthesia method on neurological outcome. Recent findings Recent randomized trials have reported no difference in outcome after EVT performed under either conscious sedation or general anesthesia. This is in contrast to a substantial number of retrospective studies, which found that EVT performed under general anesthesia was associated with a worse neurologic outcome compared with conscious sedation. Anesthetic drugs affect vessel tone, and the level of blood pressure may influence outcome. The most favorable choice of anesthetic agents and ventilatory strategy is still debated. Summary The optimal anesthetic practice for EVT remains to be identified. Currently, conscious sedation is often an easy first-line strategy, but general anesthesia can be considered an equal and safe alternative to conscious sedation when there is a carefully administered anesthetic that maintains strict hemodynamic control. Attention to ventilation is advocated. The presence of a specialized neuroanesthesiologist or otherwise dedicated anesthesia personnel is highly recommended. Correspondence to Mads Rasmussen, Section of Neuroanesthesia, Department of Anesthesia, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C, Denmark. Tel: +45 30566977; e-mail: mads.rasmussen@vest.rm.dk Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Nonoperating room anesthesia education: preparing our residents for the future
Purpose of review Nonoperating room anesthesia (NORA) is the fastest growing segment of anesthetic practice. This review provides an overview of knowledge and trends that will need to be introduced to residents as part of their education. Recent findings Topics for the future include, but are not limited to, new medications, artificial intelligence and big data, monitoring depth of hypnosis, translational innovation and collaboration, demographic changes, financial driving forces, destination hubs, medical tourism, and new approaches to education, training and self-management. Summary Implementing new medical technologies for anesthesia outside the operating room will help to successfully master this ever-evolving subspecialty. Anesthesiologists require specific preparation for the diverse settings that they will encounter during their training. In this rapidly changing field, cognitive fitness must be factored into the teaching and evaluation of residents. We describe the most important topics to consider when educating anesthesiology residents, and highlight research that addresses upcoming challenges. Correspondence to Steven D. Boggs, MD, MBA, Department of Anesthesiology, The University of Tennessee School of Health Sciences Memphis, Chandler Building, Suite 600, 877 Jefferson Avenue, Memphis, TN 38103, USA. Tel: +1 901 448 5988; e-mail: sboggs6@uthsc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Anesthesia-administered sedation for endoscopic retrograde cholangiopancreatography: monitored anesthesia care or general endotracheal anesthesia?
Purpose of review The decision to undertake monitored anesthesia care (MAC) or general endotracheal anesthesia (GEA) for patients undergoing endoscopic retrograde cholangiopancreatography (ERCP) is influenced by many factors. These include locoregional practice preferences, procedure complexity, patient position, and comorbidities. We aim to review the data regarding anesthesia-administered sedation for ERCP and identify the impact of airway management on procedure success, adverse event rates and endoscopy unit efficiency. Recent findings Several studies have consistently identified patients at high risk for sedation-related adverse events during ERCP. This group includes those with higher American Society of Anesthesiologists class and higher body mass index (BMI). ERCP is commonly performed in the prone position, which can make the placement of an emergent advanced airway challenging. Although this may be alleviated by performing ERCP in the supine position, this technique is more technically cumbersome for the endoscopist. Data regarding the impact of routine GEA on endoscopy unit efficiency remain controversial. Summary Pursuing MAC or GEA for patients undergoing ERCP is best approached on an individual basis. Patients at high risk for sedation-related adverse events likely benefit from GEA. Larger, multicenter randomized controlled trials will aid significantly in better delineating which sedation approach is best for an individual patient. Correspondence to Zachary L. Smith, DO, University Hospitals Digestive Health Institute, 11100 Euclid Ave, Wearn 2nd Floor, Cleveland, OH 44106, USA. Tel: +1 216 844 6172; fax: +1 216 844 7480; e-mail: zachary.smith2@uhhospitals.org Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
Anesthesia for peroral endoscopic myotomy in Japan
Purpose of review Peroral endoscopic myotomy (POEM) was developed in Japan as a less invasive treatment for esophageal achalasia; it requires general anesthesia under positive pressure ventilation. In 2018, the Japan Gastroenterological Endoscopy Society published the first guidelines describing the standard care for POEM. Based on these guidelines, we discuss the typical approach to anesthesia during POEM for the management of esophageal achalasia in Japan. Recent findings Prior cleansing of the esophagus is essential to prevent both aspiration during induction of anesthesia and contamination of the mediastinum and thoracic/abdominal cavity by esophageal remnants after endoscopic resection of the esophageal mucosa. Although rare, adverse events related to intraoperative carbon dioxide insufflation occur. These are treated with percutaneous needle decompression for pneumoperitoneum and insertion of a chest drainage tube for pneumothorax. Caution should be exercised regarding the development of subcutaneous emphysema and its involvement in airway obstruction. Summary Prevention of aspiration pneumonia and of adverse events related to the insufflation of carbon dioxide is essential in the management of esophageal achalasia through POEM. Close cooperation between gastrointestinal endoscopic surgeons and anesthesiologists is indispensable in POEM. Correspondence to Hiroaki Murata, Department of Anesthesiology, Nagasaki University School of Medicine, 1-7-1 Sakamoto, Nagasaki 852-8501, Japan. Tel: +81-95-819-7370; fax: +81-95-819-7373; e-mail: h-murata@nagasaki-u.ac.jp Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.


Medicine & Science in Sports & Exercise
Optimal Approach to Load Progressions during Strength Training in Older Adults
Progressive resistance training (RT) is one of the most effective interventions for reducing age-related deficits in muscle mass and functional capacity. PURPOSE To compare four approaches to load progression in RT for older adults to determine whether an optimal method exists. METHODS 82 healthy community-dwelling older adults (71.8 ± 6.2 y) performed 11 weeks of structured RT (2.5 days/week) in treatment groups differing only by the method used to increase training loads. These were percent 1RM (%1RM): standardized loads based on a percentage of the one-repetition maximum (1RM); rating of perceived exertion (RPE): loads increased when perceived difficulty fell below 8/10 on the OMNI-RES perceived exertion scale; repetition maximum (RM): loads increased when a target number of repetitions could be completed with a given load; and repetitions in reserve (RiR): identical to RM except that subjects always had to maintain >1 'repetition in reserve', thus avoiding the possibility of training to temporary muscular failure. RESULTS Multiple analyses of covariance indicated no significant between-group differences on any strength (chest press 1RM; leg press 1RM) or functional performance outcome (usual walking speed, maximum walking speed, 8-foot timed up-and-go, gallon jug transfer test, 30-second sit-to-stand). The RPE group found the exercise to be significantly more tolerable and enjoyable than subjects in the RiR, RM, and %1RM groups. CONCLUSION Given that the RM, RPE, %1RM, and RiR methods appear equally effective at improving muscular strength and functional performance in an older population, we conclude that the RPE method is optimal because it is likely to be perceived as the most tolerable and enjoyable, two important factors determining older adults' continued participation in RT. Corresponding author: Dr. Andrew N.L. Buskard, Laboratory of Neuromuscular Research and Active Aging, Department of Kinesiology and Sport Sciences, University of Miami, 1507 Levante Avenue, Coral Gables, Florida, 33146, USA, Tel: +1 305 284 3105, Fax: +1 305 284 3003, E-mail: andrewbuskard@miami.edu No outside funding was obtained for this study, but equipment and laboratory support were provided by the University of Miami under the auspices of graduate research support for the first author's PhD in exercise physiology. The authors have no professional relationships with companies or manufacturers who will benefit from the results of the study. The results of this study do not constitute an endorsement by the American College of Sports Medicine (ACSM). The results of this study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation. Accepted for Publication: 8 May 2019 © 2019 American College of Sports Medicine
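Two of the four progression rules can be expressed compactly (a sketch: the 8/10 OMNI-RES threshold is from the abstract, while the 5% load increment is an illustrative assumption, not from the paper):

    # Hedged sketch of two of the four load-progression rules compared here.

    def progress_rpe(current_load: float, session_rpe: float) -> float:
        """RPE rule: increase the load once perceived effort drops below
        8/10 on the OMNI-RES scale."""
        return current_load * 1.05 if session_rpe < 8 else current_load

    def progress_rm(current_load: float, reps_completed: int,
                    target_reps: int) -> float:
        """RM rule: increase the load once the target repetitions are
        completed with the current load."""
        return current_load * 1.05 if reps_completed >= target_reps else current_load

    print(progress_rpe(100.0, 7))        # 105.0 -> load goes up
    print(progress_rm(100.0, 10, 12))    # 100.0 -> load unchanged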
The Effect of Growth Restriction on Voluntary Physical Activity Engagement in Mice
INTRODUCTION The purpose of this study was to determine the effect of growth restriction on the biological regulation of physical activity. METHODS Using a cross-fostering, protein-restricted nutritive model, mice were growth-restricted during either gestation (GUN; N=3 litters) or postnatal life (PUN; N=3 litters). At 21 days of age, all mouse pups were weaned and fed a non-restrictive healthy diet for the remainder of the study. At 45 days of age, mice were individually housed in cages with free-moving running wheels to assess physical activity engagement. At day 70, mice were euthanized, and the nucleus accumbens was analyzed for dopamine receptor 1 (Drd1) expression. Skeletal muscle fiber type and cross-sectional area of the soleus, extensor digitorum longus, and diaphragm were analyzed by immunohistochemistry. The soleus from the other hind leg was evaluated for calsequestrin 1 and annexin A6 expression. RESULTS The PUN female mice (15,365±8,844 revolutions·day-1) had a reduction (P=0.0221) in wheel revolutions per day as compared to the GUN (38,667±8,648 revolutions·day-1) and CON females (36,421±6,700 revolutions·day-1). PUN female mice also expressed significantly higher Drd1 (P=0.0247) compared to the other groups. The PUN female soleus had a higher expression of calsequestrin 1, along with more Type IIb fibers (P=0.0398). CONCLUSION Growth restriction during lactation reduced physical activity in female mice, likely by reducing the central drive to be active, and produced a more fatigable skeletal muscle phenotype. Address for correspondence: David P. Ferguson, 308 W. Circle Dr. Room 27S, East Lansing, MI, 48824, 517-355-4763, Fergu312@msu.edu This project was funded by Michigan State University Department of Kinesiology start-up funds. The authors have no conflicts of interest to report, and the results of this study are not endorsed by the ACSM. The results of this study are also presented clearly and honestly, without inappropriate data manipulation, fabrication, or falsification. Accepted for Publication: 4 May 2019 © 2019 American College of Sports Medicine
The Longitudinal Associations of Fitness and Motor Skills with Academic Achievement
PURPOSE This study aimed to examine both independent and dependent longitudinal associations of physical fitness (PF) components with academic achievement. METHODS 954 fourth to seventh graders (9-15 y [mean age=12.5 y], 52% girls) from nine schools throughout Finland participated in a two-year follow-up study. Register-based academic achievement scores (grade point average [GPA]) and PF were assessed in the spring of 2013-2015. Aerobic fitness was measured with a maximal 20-m shuttle run test, muscular fitness with curl-up and push-up tests, and motor skills with a 5-leaps test and a throwing-catching combination test. Structural equation modelling was applied to examine the longitudinal associations, adjusting for age, gender, pubertal stage, body fat percentage, learning difficulties and mother's education. RESULTS The changes in aerobic and muscular fitness were positively associated with the change in GPA (B=0.27, 99% confidence interval [CI]=0.06-0.48; B=0.36, CI=0.11-0.63, respectively), while the change in motor skills was not associated with the change in GPA. Better motor skills in year 2 predicted better GPA a year later (B=0.06, CI=0.00-0.11; B=0.06, CI=0.01-0.11), while aerobic and muscular fitness did not predict GPA. GPA in year 1 predicted both aerobic (B=0.08, CI=0.01-0.15) and muscular (B=0.08, CI=0.02-0.15) fitness, and motor skills (B=0.08, CI=0.02-0.15) a year later. CONCLUSION The changes in both aerobic and muscular fitness were positively associated with the change in academic achievement during adolescence, while the change in motor skills had only a borderline significant association. However, better motor skills, although not systematically, independently predicted better academic achievement one year later, while aerobic and muscular fitness did not. Better academic achievement predicted better motor skills and aerobic and muscular fitness. Developmental changes in adolescence may induce parallel and simultaneous changes in academic achievement and PF. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Corresponding author: Heidi J. Syväoja, LIKES Research Centre for Physical Activity and Health, Jyväskylä, Finland, Rautpohjankatu 8, FI-40700, Finland, tel. +358 (0)400248133, fax +358207629501, heidi.syvaoja@likes.fi This study was funded by the Academy of Finland (grant 273971) and the Finnish Ministry of Education and Culture (OKM/92/626/2013). The authors declare that there are no conflicts of interest. The results of the present study do not constitute endorsement by ACSM. The authors declare that the results of the study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation. Accepted for publication: 26 April 2019. © 2019 American College of Sports Medicine
Estimating Tibial Stress throughout the Duration of a Treadmill Run
Introduction Stress fractures of the tibia are a problematic injury amongst runners of all levels. Quantifying tibial stress using a modelling approach provides an alternative to invasive assessments that may be used to detect changes in tibial stress during running. This study aimed to assess the repeatability of a tibial stress model and to use this model to quantify changes in tibial stress that occur throughout the course of a 40-minute prolonged treadmill run. Methods Synchronised force and kinematic data were collected during prolonged treadmill running from fourteen recreational male rearfoot runners on two separate occasions. During each session, participants ran at their preferred speed for two consecutive 20-minute runs, separated by a 2-minute pause. The tibia was modelled as a hollow ellipse and bending moments and stresses at the distal 1/3 of the tibia were estimated using beam theory combined with inverse dynamics and musculoskeletal modelling. Results Intraclass correlation coefficients indicated good-to-excellent repeatability for peak stress values between sessions. Peak anterior and posterior stresses increased following 20 minutes of prolonged treadmill running and were 15% and 12% greater respectively after 40 minutes of running compared with the start of the run. Conclusion The hollow elliptical tibial model presented is a repeatable tool that can be utilised to assess within-participant changes in peak tibial stress during running. The increased stresses observed during a prolonged treadmill run may have implications for the development of tibial stress fracture. Corresponding author: Hannah Rice, PhD, Sport and Health Sciences, Richards Building, St Luke's Campus, Heavitree Road, Exeter, EX1 2LU, UK, H.Rice@exeter.ac.uk This research was supported by Brooks Running Company, Seattle, WA, USA. The authors declare no conflicts of interest. The results of the study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation. The results of the present study do not constitute endorsement by ACSM. Accepted for Publication: 8 May 2019 © 2019 American College of Sports Medicine
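To make the "hollow ellipse" step concrete, here is a minimal sketch of the beam-theory calculation it rests on (illustrative cross-section dimensions and loads, not the study's participant data; the full model also feeds in muscle forces from inverse dynamics and musculoskeletal modelling):

    # Hedged sketch of beam-theory stress on a hollow elliptical tibia model:
    # normal stress at the outer anterior/posterior fibre from an axial force
    # plus a sagittal-plane bending moment. All numbers are illustrative.
    import math

    def hollow_ellipse_stress(F_axial, M_bend, a_out, b_out, a_in, b_in):
        """sigma = F/A + M*c/I for a hollow elliptical cross-section.
        a = medio-lateral semi-axis, b = anterior-posterior semi-axis (m);
        bending about the medio-lateral axis, stress at the outer AP fibre."""
        area = math.pi * (a_out * b_out - a_in * b_in)
        I = math.pi / 4.0 * (a_out * b_out**3 - a_in * b_in**3)
        return F_axial / area + M_bend * b_out / I  # Pa

    # Illustrative values: 1.5 kN compressive load, 60 N.m bending moment,
    # 12x16 mm outer and 7x10 mm inner semi-axes.
    sigma = hollow_ellipse_stress(-1500.0, 60.0, 0.012, 0.016, 0.007, 0.010)
    print(f"peak fibre stress ~ {sigma / 1e6:.1f} MPa")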
Myocardial Adaptations to Competitive Swim Training
Purpose Swim training is performed in the prone or supine position and obligates water immersion, factors which may augment cardiac volume-loading more than other endurance sports. At present, prospective data defining the cardiac responses to swim training are lacking. We therefore studied myocardial adaptations among competitive swimmers in order to establish a causal relationship between swim training and left ventricular (LV) remodeling. Methods Collegiate swimmers were studied before and after a 90-day period of training intensification. Transthoracic echocardiography was used to examine LV structural and functional adaptations under resting conditions and during an acute LV afterload challenge generated by isometric handgrip testing (IHGT). A sedentary control population was identically studied with IHGT. Results In response to a discrete period of swim training intensification, athletes (n=17, 47% female, 19±0.4 years old) experienced eccentric LV remodeling, characterized by proportionally more chamber dilation than wall thickening, with attendant enhancements of resting LV systolic (LV twist) and diastolic (early and late phase tissue velocities) function. Compared with baseline and controls, athletes post-training demonstrated greater systolic twist impairment during IHGT. However, training-induced LV dilation coupled with gains in diastolic function offset this acquired systolic susceptibility to acute afterload, resulting in relative preservation of stroke volume during IHGT. Conclusion Swim training, a sport characterized by unique cardiac loading conditions, stimulates eccentric LV remodeling with concomitant augmentation of systolic twist and diastolic relaxation. This volume-mediated cardiac remodeling appears to result in greater systolic susceptibility to acute afterload challenge. Further work is required to establish how training-induced changes in function translate to human performance and whether these are accompanied by physiologic trade-offs with relevance to common forms of heart disease. Address for correspondence: Aaron Baggish, M.D. Cardiovascular Performance Program Massachusetts General Hospital 55 Fruit Street, Yawkey 5B Boston, MA, 02114 Email: abaggish@partners.org This study was funded in part by a research grant from the American Heart Association FTF2220328 (A.L.B.). The results of this study are presented clearly, honestly and without fabrication, falsification, or inappropriate data manipulation. The results of the present study do not constitute endorsement by the ACSM. CONFLICTS OF INTEREST: The authors have no conflicts of interest to report. Accepted for publication: 22 April 2019. © 2019 American College of Sports Medicine
The Physiological Roles of Carnosine and β-Alanine in Exercising Human Skeletal Muscle
Carnosine (β-alanyl-L-histidine) plays an important role in exercise performance and skeletal muscle homeostasis. Dietary supplementation with the rate-limiting precursor β-alanine leads to an increase in skeletal muscle carnosine content, which further potentiates its effects. There is significant interest in carnosine and β-alanine across athletic and clinical populations. Traditionally, attention has been given to performance outcomes with less focus on the underlying mechanism(s). Putative physiological roles in human skeletal muscle include acting as an intracellular pH buffer, modulating energy metabolism, regulating Ca2+ handling and myofilament sensitivity, and scavenging of reactive species. Emerging evidence shows that carnosine could also act as a cytoplasmic Ca2+–H+ exchanger and form stable conjugates with exercise-induced reactive aldehydes. The enigmatic nature of carnosine means there is still much to learn regarding its actions and applications in exercise, health and disease. In this review, we examine the research relating to each physiological role attributed to carnosine, and its precursor β-alanine, in exercising human skeletal muscle. Corresponding Author: Prof. Craig Sale, Nottingham Trent University, Erasmus Darwin Building, Clifton Lane, Nottingham, United Kingdom, NG11 8NS. Tel: 0115 8483505, craig.sale@ntu.ac.uk No funding was received for writing this manuscript. GGA has been supported financially by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP; grant number: 2014/11948-8). MDT has received a British Council award to support a studentship focused on research into carnosine (grant number: 209524711). JJM, GGA, and MDT collectively declare that they have no competing interests. CS has received β-alanine supplements free of charge from Natural Alternatives International (NAI) for use in experimental investigations; NAI have also supported open access page charges for some manuscripts. The review is presented honestly, and without fabrication, falsification, or inappropriate data manipulation. The viewpoints expressed in the review do not constitute endorsement by the American College of Sports Medicine. Accepted for Publication: 29 April 2019 © 2019 American College of Sports Medicine
Effects of Instrument-assisted Soft Tissue Mobilization on Musculoskeletal Properties
Purpose Instrument-assisted soft tissue mobilization (IASTM) has been reported to improve joint range of motion (flexibility). However, it is not clear whether this change in joint range of motion is accompanied by any alterations in mechanical and/or neural properties. This study aimed to investigate the effects of IASTM applied to the plantar flexors and Achilles tendon on their mechanical and neural properties. Methods This randomized, controlled, crossover study included 14 healthy volunteers (11 men and 3 women, 21–32 y). IASTM was performed on the skin over the posterior part of the lower leg for 5 min and targeted the soft tissues (gastrocnemii, soleus, and tibialis posterior muscles; overlying deep fascia; and Achilles tendon). As a control condition, the same participants rested for 5 min between the pre- and post-measurements without IASTM on a separate day. The maximal ankle joint dorsiflexion angle (dorsiflexion range of motion), peak passive torque (stretch tolerance), and ankle joint stiffness (the slope of the relationship between passive torque and ankle joint angle) during measurement of dorsiflexion range of motion, and muscle stiffness of the triceps surae (using shear wave elastography), were measured before and immediately after the interventions. Results Following IASTM, the dorsiflexion range of motion significantly increased by 10.7 ± 10.8% and ankle joint stiffness significantly decreased by 6.2 ± 10.1%. However, peak passive torque and muscle stiffness did not change. All variables remained unchanged in the repeated measurements of controls. Conclusion IASTM can improve joint range of motion without affecting the mechanical and neural properties of the treated muscles. Corresponding author: Naoki Ikeda, Faculty of Sport Sciences, Waseda University, Mikajima 2-579-15, Tokorozawa, Saitama 359-1192, Japan, Phone: +81-4-2947-6766, Fax: +81-4-2947-6766, E-mail: n.ikeda2@kurenai.waseda.jp This study was supported by JSPS KAKENHI (grant number 16H01870). The results of this study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation, and the results of the present study do not constitute endorsement by the American College of Sports Medicine. The authors declare no conflict of interest. None of the authors has a professional relationship with any company or manufacturer who will benefit from the results of the present study. Accepted for Publication: 8 April 2019 © 2019 American College of Sports Medicine
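The abstract defines ankle joint stiffness as the slope of the passive torque-angle relationship; a minimal sketch of that calculation (the data points below are illustrative, not the study's):

    # Hedged sketch: joint stiffness as the slope of the passive torque-angle
    # relationship, estimated with a straight-line fit. Illustrative data only.
    import numpy as np

    angle_deg = np.array([10, 12, 14, 16, 18, 20])              # dorsiflexion angle
    torque_nm = np.array([8.0, 10.5, 13.4, 16.9, 21.0, 25.8])   # passive torque

    slope, intercept = np.polyfit(angle_deg, torque_nm, 1)
    print(f"joint stiffness ~ {slope:.2f} N.m/deg")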
Low-Carbohydrate Training Increases Protein Requirements of Endurance Athletes
Introduction Training with low-carbohydrate (CHO) availability enhances markers of aerobic adaptation and has become popular to periodize throughout an endurance-training program. However, exercise-induced amino acid oxidation is increased with low muscle glycogen, which may limit substrate availability for post-exercise protein synthesis. We aimed to determine the impact of training with low-CHO availability on estimates of dietary protein requirements. Methods Eight endurance-trained males (27±4 y, 75±10 kg, 67±10 ml·kg body mass-1·min-1) completed two trials matched for energy and macronutrient composition but with differing CHO periodization. In the low-CHO availability trial (LOW), participants consumed 7.8 g CHO·kg-1 prior to evening high-intensity interval training (HIIT; 10 x 5 min at 10-km race pace, 1 min rest) and subsequently withheld CHO post-exercise (0.2 g·kg-1). In the high-CHO availability trial (HIGH), participants consumed 3 g CHO·kg-1 during the day before HIIT, and consumed 5 g CHO·kg-1 that evening to promote muscle glycogen resynthesis. A 10-km run (~80% HRmax) was performed the following morning, fasted (LOW) or 1 h after consuming 1.2 g CHO·kg-1 (HIGH). Whole-body phenylalanine flux (PheRa) and oxidation (PheOx) were determined over 8 h of recovery via oral [13C]phenylalanine ingestion, according to standard indicator amino acid oxidation methodology, while consuming sufficient energy, 7.8 g CHO·kg-1·d-1, and suboptimal protein (0.93 g·kg-1·d-1). Results Fat oxidation (indirect calorimetry) during the 10-km run was higher in LOW compared with HIGH (0.99±0.35 vs. 0.60±0.26 g·min-1, p<0.05). PheRa during recovery was not different between trials (p>0.05), whereas PheOx (the reciprocal of protein synthesis) was higher in LOW compared with HIGH (8.8±2.7 vs. 7.9±2.4 μmol·kg-1·h-1, p<0.05), suggesting a greater amino acid requirement to support rates of whole-body protein synthesis. Conclusion Our findings suggest that performing endurance exercise with low-CHO availability increases the protein requirements of endurance athletes. Address for Correspondence: Daniel R. Moore, Assistant Professor, Faculty of Kinesiology and Physical Education, University of Toronto, 100 Devonshire Place Toronto, ON M5S 2C9, CANADA, Tel: 416-946-4088, Email: dr.moore@utoronto.ca This study was supported by grants to DRM from the Ajinomoto Innovation Alliance Program, the Canada Foundation for Innovation and the Ontario Research Fund. JBG held a Canadian Institutes of Health Research Postdoctoral Fellowship. The results of this study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation. The results of the present study do not constitute endorsement by ACSM. The authors report no conflicts of interest. Accepted for Publication: 6 May 2019 © 2019 American College of Sports Medicine
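The fat oxidation values quoted above come from indirect calorimetry; a common way to compute them is with the widely cited Frayn (1983) equations (a sketch with urinary nitrogen ignored; the study's exact calculation may differ, and the gas-exchange values below are illustrative):

    # Hedged sketch: whole-body substrate oxidation from indirect calorimetry
    # via the Frayn (1983) equations, neglecting protein oxidation.

    def fat_oxidation_g_per_min(vo2_l_min: float, vco2_l_min: float) -> float:
        return 1.67 * vo2_l_min - 1.67 * vco2_l_min

    def cho_oxidation_g_per_min(vo2_l_min: float, vco2_l_min: float) -> float:
        return 4.55 * vco2_l_min - 3.21 * vo2_l_min

    # Illustrative gas-exchange values for a runner at ~80% HRmax:
    print(f"fat: {fat_oxidation_g_per_min(3.2, 2.75):.2f} g/min")
    print(f"cho: {cho_oxidation_g_per_min(3.2, 2.75):.2f} g/min")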
One Week of Step Reduction Lowers Myofibrillar Protein Synthesis Rates in Young Men
PURPOSE Across the lifespan, physical activity levels decrease and time spent sedentary typically increases. However, little is known about the impact that these behavioural changes have on skeletal muscle mass regulation. The primary aim of this study was to use a step reduction model to determine the impact of reduced physical activity and increased sedentary time on daily myofibrillar protein synthesis rates in healthy young men. METHODS Eleven men (22±2 y) completed 7 days of habitual physical activity (HPA) followed by 7 days of step reduction (SR). Myofibrillar protein synthesis rates were determined during HPA and SR using the deuterated water (2H2O) method combined with the collection of skeletal muscle biopsies and daily saliva samples. Gene expression of selected proteins related to muscle mass regulation and oxidative metabolism was determined via real-time RT-qPCR. RESULTS Daily step count was reduced by approximately 91% during SR (from 13,054±2,763 to 1,192±330 steps·d-1; P<0.001), and this led to an increased contribution of sedentary time to daily activity (73±6 to 90±3%; P<0.001). Daily myofibrillar protein synthesis decreased by approximately 27%, from 1.39±0.32 %·d-1 during HPA to 1.01±0.38 %·d-1 during SR (P<0.05). MAFbx and myostatin mRNA expression were up-regulated, whereas mTOR, p53 and PDK4 mRNA expression were down-regulated following SR (P<0.05). CONCLUSION One week of reduced physical activity and increased sedentary time substantially lowers daily myofibrillar protein synthesis rates in healthy young men. Corresponding author: Dr Gareth A. Wallis, School of Sport, Exercise and Rehabilitation Sciences, University of Birmingham, Edgbaston, B15 2TT, UK. Phone: +44(0) 121 414 4129. Email: g.a.wallis@bham.ac.uk B.J.S. is funded by a University of Birmingham 'Exercise as Medicine' PhD studentship. None of the authors have any conflicts of interest or financial disclosures to declare. The results of the present study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation and do not constitute endorsement by the American College of Sports Medicine. Accepted for publication: 2 May 2019. © 2019 American College of Sports Medicine
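For orientation, fractional synthesis rates of the kind reported here (%·d-1) are conventionally derived from the rise in protein-bound tracer enrichment relative to the precursor pool over the incorporation period; a hedged sketch (all enrichment values, and the commonly used 3.7 exchangeable-hydrogen correction for alanine, are illustrative assumptions rather than this study's data):

    # Hedged sketch of a deuterated-water fractional synthesis rate (FSR)
    # calculation: rise in myofibrillar protein-bound alanine enrichment
    # divided by precursor enrichment and tracer incorporation time.

    def myofibrillar_fsr_pct_per_day(e_bound_start, e_bound_end,
                                     e_body_water, days):
        precursor = e_body_water * 3.7   # common correction for deuterium
                                         # incorporation into alanine
        return (e_bound_end - e_bound_start) / (precursor * days) * 100.0

    # Illustrative enrichments (atom percent excess) over a 7-day block:
    print(f"{myofibrillar_fsr_pct_per_day(0.01, 0.16, 0.45, 7):.2f} %/day")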
Protein Supplementation Does Not Augment Adaptations to Endurance Exercise Training
Introduction Recently, it has been speculated that protein supplementation may further augment the adaptations to chronic endurance exercise training. We assessed the impact of protein supplementation during chronic endurance exercise training on whole-body oxidative capacity (VO2max) and endurance exercise performance. Methods In this double-blind, randomized, parallel placebo-controlled trial, sixty recreationally active males (age: 27±6 y; BMI: 23.8±2.6 kg·m-2; VO2max: 47±6 mL·min-1·kg-1) were subjected to 12 weeks of triweekly endurance exercise training. After each session and each night before sleep, participants ingested either a protein supplement (PRO; 28.7 g casein protein) or an isoenergetic carbohydrate placebo (PLA). Before and after the 12 weeks of training, VO2max and endurance exercise performance (~10-km time-trial) were assessed on a cycle ergometer. Muscular endurance (total workload achieved during 30 reciprocal isokinetic contractions) was assessed by isokinetic dynamometry and body composition by DXA. Mixed-model ANOVA was applied to assess whether training adaptations differed between groups. Results Endurance exercise training induced an 11±6% increase in VO2max (time effect, P<0.0001), with no differences between groups (PRO: 48±6 to 53±7 mL·min-1·kg-1; PLA: 46±5 to 51±6 mL·min-1·kg-1; time×treatment interaction, P=0.50). Time to complete the time-trial was reduced by 14±7% (time effect, P<0.0001), with no differences between groups (time×treatment interaction, P=0.15). Muscular endurance increased by 6±7% (time effect, P<0.0001), with no differences between groups (time×treatment interaction, P=0.84). Leg lean mass showed an increase following training (P<0.0001), which tended to be greater in PRO compared with PLA (0.5±0.7 vs 0.2±0.6 kg, respectively; time×treatment interaction, P=0.073). Conclusion Protein supplementation after exercise and before sleep does not further augment the gains in whole-body oxidative capacity and endurance exercise performance following chronic endurance exercise training in recreationally active, healthy young males. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. These authors contributed equally to this work, Kristin L. Jonvik, Kevin J.M. Paulussen Address for correspondence: Dr. Jan-Willem van Dijk, Institute of Sports and Exercise Studies, HAN University of Applied Sciences, PO Box 6960, 6503 GL Nijmegen, The Netherlands, Tel: +(31) 6 55227849, Email: janwillem.vandijk@han.nl This project was funded by the Dutch Ministry of Economic Affairs (Topsector Agri&Food), award number: AF16501 (PPS Allowance). KLJ, KJMP, SLD and IJMC declare that they have no conflict of interest. FCW, LJCvL and JWvD have received research grants, consulting fees, and/or speaking honoraria from FrieslandCampina. LJCvL has received research grants, consulting fees, and speaking honoraria from Pepsico/Gatorade. AMHH is an employee at FrieslandCampina. Accepted for Publication: 22 April 2019 © 2019 American College of Sports Medicine


Critical Care Medicine
Monocyte Distribution Width: A Novel Indicator of Sepsis-2 and Sepsis-3 in High-Risk Emergency Department Patients
Objectives: Most septic patients are initially encountered in the emergency department, where sepsis recognition is often delayed, in part due to the lack of effective biomarkers. This study evaluated the diagnostic accuracy of peripheral blood monocyte distribution width, alone and in combination with WBC count, for early sepsis detection in the emergency department. Design: An Institutional Review Board approved, blinded, observational, prospective cohort study conducted between April 2017 and January 2018. Setting: Subjects were enrolled from emergency departments at three U.S. academic centers. Patients: Adult patients, 18–89 years, with complete blood count performed upon presentation to the emergency department, and who remained hospitalized for at least 12 hours. A total of 2,212 patients were screened, of whom 2,158 subjects were enrolled and categorized per Sepsis-2 criteria as controls (n = 1,088), systemic inflammatory response syndrome (n = 441), infection (n = 244), and sepsis (n = 385), and per Sepsis-3 criteria as control (n = 1,529), infection (n = 386), and sepsis (n = 243). Interventions: The primary outcome determined whether a monocyte distribution width of greater than 20.0 U, alone or in combination with WBC count, improves early sepsis detection by Sepsis-2 criteria. Secondary endpoints determined monocyte distribution width performance for Sepsis-3 detection. Measurements and Main Results: Monocyte distribution width greater than 20.0 U distinguished sepsis from all other conditions based on either Sepsis-2 criteria (area under the curve, 0.79; 95% CI, 0.76–0.82) or Sepsis-3 criteria (area under the curve, 0.73; 95% CI, 0.69–0.76). The negative predictive values for monocyte distribution width less than or equal to 20 U for Sepsis-2 and Sepsis-3 were 93% and 94%, respectively. Monocyte distribution width greater than 20.0 U combined with an abnormal WBC count further improved Sepsis-2 detection (area under the curve, 0.85; 95% CI, 0.83–0.88), as reflected by likelihood ratio and added value analyses. A normal WBC count together with a normal monocyte distribution width indicated a six-fold lower sepsis probability. Conclusions: A monocyte distribution width value of greater than 20.0 U is effective for sepsis detection, based on either Sepsis-2 or Sepsis-3 criteria, during the initial emergency department encounter. In tandem with WBC count, monocyte distribution width is further predicted to enhance medical decision making during early sepsis management in the emergency department. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's website (http://journals.lww.com/ccmjournal). Supported, in part, by a grant from Beckman Coulter. Drs. Crouser's, Parrillo's, Bicking's, Peck-Palmer's, Julian's, Kleven's, Raj's, and Procopio's institutions received funding from Beckman Coulter. Dr.
Crouser's institution received funding from Foundation for Sarcoidosis Research and the National Institutes of Health (NIH), received funding from ATyr Pharmaceutical (consulting), and disclosed that he designed the trial in coordination with Beckman Coulter. Dr. Parrillo received funding from National Heart, Lung, and Blood Institute-NIH Heart Failure Network, consulting fees for some of the work performed, and Asahi-Kasei America (consulting). Dr. Seymour's institution received funding from the NIH and received support for article research from the NIH. Drs. Seymour, Angus, and Esguerra received funding from Beckman Coulter. Drs. Bicking, Esguerra, Kleven, Raj, Procopio, and Tejidor disclosed off-label product use of Beckman Coulter equipment used to measure monocyte distribution width, the entity under study and described in the article. Dr. Esguerra received speaker honoraria for presenting data related to the pilot study to audiences internationally in Brussels and in Hong Kong. Dr. Peck-Palmer's institution received funding from Roche Diagnostics. Dr. Magari disclosed that he and his spouse are Beckman Coulter employees. Drs. Magari and Tejidor disclosed work for hire. Drs. Careaga and Tejidor disclosed that they are Beckman Coulter employees. For information regarding this article, E-mail: elliott.crouser@osumc.edu Copyright © 2019 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.
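As a hedged illustration of the decision rule evaluated in this study (the 20.0 U cutoff is from the abstract; the WBC reference limits below are generic assumptions, not the study's):

    # Hedged sketch of the MDW/WBC rule described in the abstract; the WBC
    # limits (4.0-12.0 x 10^3 cells/uL) are illustrative reference values.

    def sepsis_flag(mdw_u: float, wbc_k_per_ul: float) -> str:
        abnormal_wbc = wbc_k_per_ul < 4.0 or wbc_k_per_ul > 12.0
        if mdw_u > 20.0 and abnormal_wbc:
            return "high concern (elevated MDW plus abnormal WBC)"
        if mdw_u > 20.0:
            return "elevated MDW - consider sepsis work-up"
        if not abnormal_wbc:
            return "normal MDW and WBC - sepsis much less likely"
        return "abnormal WBC only"

    print(sepsis_flag(mdw_u=22.4, wbc_k_per_ul=14.1))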
Continuation of Newly Initiated Midodrine Therapy After Intensive Care and Hospital Discharge: A Single-Center Retrospective Study
Objectives: Midodrine is an α1-agonist approved for orthostatic hypotension. Recently, it has received attention as an oral vasopressor to facilitate ICU discharge. The purpose of this study was to identify the incidence of continuation of newly initiated midodrine upon ICU and hospital discharge and to identify risk factors associated with its occurrence. Design: Single-center retrospective study. Setting: ICU patients from January 2011 to October 2016 at Mayo Clinic, Rochester. Patients: Adult patients admitted to any ICU who received newly initiated midodrine for hypotension and survived to discharge. Interventions: None. Measurements and Main Results: During the study period, 1,010 patients were newly started on midodrine and survived to ICU discharge. Midodrine was continued in 67% (672/1,010) of patients at ICU discharge. Admission to the cardiovascular surgery ICU or the mixed medical/surgical ICU was a risk factor for midodrine continuation at ICU discharge (odds ratio, 3.94 [2.50–6.21] and 2.03 [1.29–3.20], respectively). At hospital discharge, 34% (311/909) of patients were continued on midodrine therapy. A history of congestive heart failure predicted midodrine continuation at hospital discharge (odds ratio, 1.49 [1.05–2.12]). Hypertension and use of mechanical ventilation were associated with decreased odds of midodrine prescription at both ICU and hospital discharge. Of those discharged from the ICU or hospital on midodrine, 50% were concomitantly prescribed antihypertensives. Discharge from the ICU on midodrine was associated with a significantly shorter ICU length of stay (7.5 ± 8.9 vs 10.6 ± 13.4 d) and a reduced risk of in-hospital mortality (hazard ratio, 0.47 [95% CI, 0.32–0.70]; p < 0.001), despite no difference in baseline severity of illness scores. In contrast, patients discharged from the hospital on midodrine had a higher risk of 1-year mortality (hazard ratio, 1.60 [95% CI, 1.26–2.04]; p < 0.001). Conclusions: This study established a high prevalence of midodrine continuation in transitions of care. The risks and benefits of this practice remain unclear. Future studies should explore the impact of this practice on patient outcomes and resource utilization. These insights could be used to model interventions for proper tapering, discontinuation, or follow-up of newly started midodrine. This work was performed at the Mayo Clinic, Rochester, MN. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's website (http://journals.lww.com/ccmjournal). Drs. Nei and Barreto disclosed off-label product use of midodrine. Dr. Barreto received funding from Fast Medical. The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: barreto.erin@mayo.edu Copyright © 2019 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.
Association Between Critical Care Admissions and Cognitive Trajectories in Older Adults
Objectives: Patients requiring admission to an ICU may subsequently experience cognitive decline. Our objective was to investigate longitudinal cognitive trajectories in older adults hospitalized in ICUs. We hypothesized that individuals hospitalized for critical illness develop greater cognitive decline compared with those who do not require ICU admission. Design: A retrospective cohort study using prospectively collected cognitive scores of participants enrolled in the Mayo Clinic Study of Aging and ICU admissions retrospectively ascertained from electronic medical records. A covariate-adjusted linear mixed effects model with random intercepts and slopes assessed the relationship between ICU admissions and the slope of global cognitive z scores and domain scores (memory, attention/executive, visuospatial, and language). Setting: ICU admissions and cognitive scores in the Mayo Clinic Study of Aging from October 1, 2004, to September 11, 2017. Patients: Nondemented participants aged 50 through 91 at enrollment in the Mayo Clinic Study of Aging with an initial cognitive assessment and at least one follow-up visit. Interventions: None. Measurements and Main Results: Of 3,673 participants, 372 had at least one ICU admission, with median (25–75th percentile) follow-up after first ICU admission of 2.5 years (1.2–4.4 yr). For global cognitive z score, admission to an ICU was associated with greater decline in scores over time compared with participants not requiring ICU admission (difference in annual slope = –0.028; 95% CI, –0.044 to –0.012; p < 0.001). ICU admission was associated with greater declines in memory (–0.029; 95% CI, –0.047 to –0.011; p = 0.002), attention/executive (–0.020; 95% CI, –0.037 to –0.004; p = 0.016), and visuospatial (–0.013; 95% CI, –0.026 to –0.001; p = 0.041) domains. ICU admissions with delirium were associated with greater declines in memory (interaction p = 0.006) and language (interaction p = 0.002) domains than ICU admissions without delirium. Conclusions: In older adults, ICU admission was associated with greater long-term cognitive decline compared with patients without ICU admission. These findings were more pronounced in those who developed delirium while in the ICU. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Drs. Schulte, Warner, Rabinstein, and Sprung contributed to conception and design of the work, analysis and interpretation of data, drafting of the work and revisions for important intellectual content, and final approval. Drs. Martin, Mielke, Knopman, Petersen, Weingarten, and Warner contributed to critical revisions of the work for important intellectual content and final approval. Dr. Deljou contributed to acquisition and interpretation of data. Drs. Hanson and Schroeder contributed to data analysis, statistical work, critical revisions of the work for important intellectual content, and final approval. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's website (http://journals.lww.com/ccmjournal). Supported, in part, by the National Institutes of Health grants U01 AG006786 (principal investigator [PI]: R.C.P.), P50 AG016574 (PI: R.C.P.), RF1 AG55151 (PI: M.M.M.), by the Robert H.
and Clarice Smith and Abigail van Buren Alzheimer's Disease Research Program, the Rochester Epidemiology Project (R01 AG034676) and the Mayo Clinic Center for Translational Sciences Activities (CTSA), grant number UL1 TR000135 from the National Center for Advancing Translational Sciences (NCATS). In addition, this study was supported, in part, by CTSA Grant Number KL2 TR002379 to M.A.W from NCATS. Drs. Schulte, Mielke, Knopman, and Petersen received support for article research from the National Institutes of Health. Dr. Martin disclosed that he serves on the Board of Directors for the American Society of Anesthesiologists (ASA), he receives an annual stipend from ASA for contributions to the Patient Safety Continuing Medical Education Editorial Board, and he and Mayo Clinic owns equity and receives royalties for work licensed through Mayo Clinic to Nevro, Inc, a publicly held company, for contributions related to the use of nerve signal modulation to treat central, autonomic, and peripheral nervous system disorders, including pain. Dr. Mielke's institution received funding from Biogen and Lunbeck, and she received funding from Eli Lilly. Dr. Knopman received funding from Washington University (Data Safety Monitoring Board activities). Dr. Petersen's institution received funding from National Institute on Aging; he received funding from Roche (consultant), Merck (consultant), Genentech (DSMB), Biogen, Eisai, and GE Healthcare; and he received other support from benefactors through the Mayo Clinic. Dr. Weingarten received funding from Medtronic (served as chairman of the clinical event committee of the PRediction of Opioid-induced respiratory Depression In patients monitored by capnoGraphY (PRODIGY) trial, which examined postoperative capnography monitoring) and Merck (received an investigator-initiated, unrestricted grant to study the effects of sugammadex on return of postoperative bowel function), and he disclosed work for hire. Dr. Warner's institution received funding from Center for Translational Sciences Activities Grant Number KL2 TR002379 from the National Center for Advancing Translational Science. The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: schulte.phillip@mayo.edu Copyright © by 2019 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.
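For readers less familiar with the modeling step, the sketch below shows how a covariate-adjusted linear mixed effects model with random intercepts and slopes can estimate the difference in annual slope between ICU and non-ICU participants. It is a minimal illustration only; the variable names (pid, z_score, years, icu, age, sex) are assumptions, not the study's actual code or covariate set.

import pandas as pd
import statsmodels.formula.api as smf

def fit_cognitive_trajectory(df: pd.DataFrame):
    # df: one row per visit, with pid (participant), z_score (global
    # cognitive z), years since baseline, icu (1 if an ICU admission
    # has occurred), and baseline covariates such as age and sex.
    model = smf.mixedlm(
        "z_score ~ years * icu + age + sex",  # years:icu = slope difference
        data=df,
        groups=df["pid"],
        re_formula="~years",  # random intercept and random slope per person
    )
    result = model.fit()
    # The years:icu coefficient plays the role of the reported
    # difference in annual slope (e.g., -0.028 for the global z score).
    return result.params.get("years:icu"), result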
Early Enteral Nutrition in Patients Undergoing Sustained Neuromuscular Blockade: A Propensity-Matched Analysis Using a Nationwide Inpatient Database
Objectives: Whether enteral nutrition should be postponed in patients undergoing sustained treatment with neuromuscular blocking agents remains unclear. We evaluated the association between enteral nutrition initiated within 2 days of sustained neuromuscular blocking agent treatment and in-hospital mortality. Design: Retrospective administrative database study from July 2010 to March 2016. Setting: More than 1,200 acute care hospitals covering approximately 90% of all tertiary-care emergency hospitals in Japan. Patients: Mechanically ventilated patients who had undergone sustained treatment with neuromuscular blocking agents in an ICU were retrospectively reviewed. We defined patients who received sustained treatment with neuromuscular blocking agents as those who received either rocuronium at greater than or equal to 250 mg/d or vecuronium at greater than or equal to 50 mg/d for at least 2 consecutive days. Interventions: Enteral nutrition started within 2 days from the initiation of neuromuscular blocking agents (defined as early enteral nutrition). Measurements and Main Results: We identified 2,340 eligible patients during the 69-month study period. Of these, 378 patients (16%) had received early enteral nutrition. One-to-three propensity score matching paired 374 early enteral nutrition patients with 1,122 late enteral nutrition controls. The in-hospital mortality rate was significantly lower in the early than in the late enteral nutrition group (risk difference, –6.3%; 95% CI, –11.7% to –0.9%). There was no significant difference in the rate of hospital pneumonia between the two groups (risk difference, 2.8%; 95% CI, –2.7% to 8.3%). Length of hospital stay among survivors was significantly shorter in the early compared with the late enteral nutrition group (risk difference, –11.4 d; 95% CI, –19.1 to –3.7 d). There was no significant difference between the two groups in length of ICU stay or length of mechanical ventilation among survivors. Conclusions: According to this retrospective database study, early enteral nutrition may be associated with lower in-hospital mortality with no increase in hospital pneumonia in patients undergoing sustained treatment with neuromuscular blocking agents.
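As a rough illustration of the matching design described above, the sketch below pairs each early enteral nutrition patient with three propensity-matched controls. The column names, caliper, and matching-without-replacement choice are assumptions for the sketch, not details taken from the study.

import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_three(df: pd.DataFrame, covariates, caliper=0.05, seed=0):
    # df: one row per patient with early_en (1 = early enteral
    # nutrition) and baseline covariates.
    ps = LogisticRegression(max_iter=1000).fit(
        df[covariates], df["early_en"]).predict_proba(df[covariates])[:, 1]
    df = df.assign(ps=ps)
    treated = df[df["early_en"] == 1].sample(frac=1, random_state=seed)
    controls = df[df["early_en"] == 0].copy()
    matched = []
    for i, row in treated.iterrows():
        nearest = (controls["ps"] - row["ps"]).abs().nsmallest(3)
        picks = nearest[nearest <= caliper].index
        if len(picks) == 3:                   # keep complete 1:3 sets only
            matched.append((i, list(picks)))
            controls = controls.drop(picks)   # match without replacement
    return matched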
Trends and Outcomes in Sepsis Hospitalizations With and Without Atrial Fibrillation: A Nationwide Inpatient Analysis
Objectives: Atrial fibrillation is frequently seen in sepsis-related hospitalizations. However, large-scale contemporary data from the United States comparing outcomes among sepsis-related hospitalizations with versus without atrial fibrillation are limited. The aim of our study was to assess the frequency of atrial fibrillation and its impact on outcomes of sepsis-related hospitalizations. Design: Retrospective cohort study. Setting: The National Inpatient Sample databases (2010–2014). Patients: Hospitalizations with a primary discharge diagnosis of sepsis, with and without atrial fibrillation, were identified using previously validated International Classification of Diseases, 9th Edition, Clinical Modification codes. Interventions: None. Measurements and Main Results: Overall, there were 5,808,166 hospitalizations with the primary diagnosis of sepsis, of which 19.4% (1,126,433) were associated with atrial fibrillation. The sepsis-atrial fibrillation cohort consisted of older (median [interquartile range] age of 79 yr [70–86 yr] vs 67 yr [53–79 yr]; p < 0.001), white (80.9% vs 68.8%; p < 0.001), male (51.1% vs 47.5%; p < 0.001) patients with an extended length of stay (median [interquartile range] 6 d [4–11 d] vs 5 d [3–9 d]; p < 0.001) and higher hospitalization charges (median [interquartile range] $44,765 [$23,234–$88,657] vs $35,737 [$18,767–$72,220]; p < 0.001) as compared with the nonatrial fibrillation cohort. The all-cause mortality rate in the sepsis-atrial fibrillation cohort was significantly higher (18.4% vs 11.9%; p = 0.001) than in those without atrial fibrillation. Although all-cause mortality (20.4% vs 16.6%) and length of stay (median [interquartile range] 7 d [4–11 d] vs 6 d [4–10 d]) decreased between 2010 and 2014, hospitalization charges increased (median [interquartile range] $41,783 [$21,430–$84,465] vs $46,251 [$24,157–$89,995]) in the sepsis-atrial fibrillation cohort. The greatest predictors of mortality in the atrial fibrillation-sepsis cohort were African American race, female gender, advanced age, and the presence of medical comorbidities. Conclusions: The presence of atrial fibrillation among sepsis-related hospitalizations is a marker of poor prognosis and increased mortality. Although we observed rising trends in sepsis and sepsis-atrial fibrillation-related hospitalizations during the study period, the rate and odds of mortality progressively decreased.
Argon Inhalation for 24 Hours After Onset of Permanent Focal Cerebral Ischemia in Rats Provides Neuroprotection and Improves Neurologic Outcome
Objectives: We tested the hypothesis that prolonged inhalation of 70% argon for 24 hours after in vivo permanent or temporary stroke provides neuroprotection and improves neurologic outcome and overall recovery after 7 days. Design: Controlled, randomized, double-blinded laboratory study. Setting: Animal research laboratories. Subjects: Adult Wistar male rats (n = 110). Interventions: Rats were subjected to permanent or temporary focal cerebral ischemia via middle cerebral artery occlusion, followed by inhalation of 70% argon or nitrogen in 30% oxygen for 24 hours. On postoperative day 7, a 48-point neuroscore and histologic lesion size were assessed. Measurements and Main Results: After argon inhalation for 24 hours immediately following "severe permanent ischemia" induction, neurologic outcome (neuroscore, p = 0.034), overall recovery (body weight, p = 0.02), and infarct volume (total infarct volume, p = 0.0001; cortical infarct volume, p = 0.0003; subcortical infarct volume, p = 0.0001) were significantly improved. When 24-hour argon treatment was delayed for 2 hours after permanent stroke induction or until after postischemic reperfusion treatment, neurologic outcomes remained significantly improved (neuroscore, p = 0.043 and p = 0.014, respectively), as was overall recovery (body weight, p = 0.015), compared with nitrogen treatment. However, infarct volume and 7-day mortality were not significantly reduced when argon treatment was delayed. Conclusions: Neurologic outcome (neuroscore), overall recovery (body weight), and infarct volumes were significantly improved after 24-hour inhalation of 70% argon administered immediately after severe permanent stroke induction. Neurologic outcome and overall recovery were also significantly improved even when argon treatment was delayed for 2 hours or until after reperfusion.
At-Risk Drinking Is Independently Associated With Acute Kidney Injury in Critically Ill Patients
Objectives: Unhealthy use of alcohol and acute kidney injury are major public health problems, but little is known about the impact of excessive alcohol consumption on kidney function in critically ill patients. We aimed to determine whether at-risk drinking is independently associated with acute kidney injury in the ICU and at ICU discharge. Design: Prospective observational cohort study. Setting: A 21-bed polyvalent ICU in a university hospital. Patients: A total of 1,107 adult patients admitted over a 30-month period who had an ICU stay of greater than or equal to 3 days and in whom alcohol consumption could be assessed. Interventions: None. Measurements and Main Results: We assessed Kidney Disease Improving Global Outcomes stages 2–3 acute kidney injury in 320 at-risk drinkers (29%) and 787 non–at-risk drinkers (71%) at admission to the ICU, within 4 days after admission, and at ICU discharge. The proportion of patients with stages 2–3 acute kidney injury at admission to the ICU was significantly higher in at-risk drinkers than in non–at-risk drinkers (42.5% vs 18%; p < 0.0001). Within 4 days, after adjustment for susceptibility and predisposing factors for acute kidney injury, at-risk drinking was significantly associated with acute kidney injury in the entire population (odds ratio, 2.15; 1.60–2.89; p < 0.0001), in the subgroup of 832 patients without stages 2–3 acute kidney injury at admission to the ICU (odds ratio, 1.44; 1.02–2.02; p = 0.04), and in the subgroup of 971 patients without known chronic kidney disease (odds ratio, 1.92; 1.41–2.61; p < 0.0001). Among survivors, 22% of at-risk drinkers and 9% of non–at-risk drinkers were discharged with stages 2–3 acute kidney injury (p < 0.001). Conclusions: Our results suggest that chronic and current alcohol misuse in critically ill patients is associated with kidney dysfunction. The systematic and accurate identification of patients with alcohol misuse may allow for the prevention of acute kidney injury.
Cardiac Arrest Prior to Venoarterial Extracorporeal Membrane Oxygenation: Risk Factors for Mortality
Objectives: Mortality after cardiac arrest remains high despite initiation of venoarterial extracorporeal membrane oxygenation. We aimed to identify pre-venoarterial extracorporeal membrane oxygenation risk factors of 90-day mortality in patients with witnessed cardiac arrest and with greater than or equal to 1 minute of cardiopulmonary resuscitation before venoarterial extracorporeal membrane oxygenation. The association between preimplant variables and all-cause mortality at 90 days was analyzed with multivariable logistic regression. Design: Retrospective observational cohort study. Setting: Tertiary medical center. Patients: Seventy-two consecutive patients with cardiac arrest prior to venoarterial extracorporeal membrane oxygenation cannulation. Interventions: None. Measurements and Main Results: Median age was 56 years (interquartile range, 43–56 yr), and 75% (n = 54) were men. Out-of-hospital cardiac arrest occurred in 12% (n = 9) of the patients. Initial cardiac rhythm was nonshockable in 57% (n = 41) and shockable in 43% (n = 31) of patients. Median cardiopulmonary resuscitation duration was 21 minutes (interquartile range, 10–73 min; range, 1–197 min). No return of spontaneous circulation was present in 64% (n = 46) and postarrest cardiogenic shock in 36% (n = 26) of the patients at venoarterial extracorporeal membrane oxygenation cannulation. Median duration of venoarterial extracorporeal membrane oxygenation was 5 days (interquartile range, 2–12 d). The 90-day overall mortality and in-hospital mortality were both 57% (n = 41); 53% (n = 38) died during venoarterial extracorporeal membrane oxygenation, and 43% (n = 31) were successfully weaned. All survivors had a Cerebral Performance Category score of 1–2 at discharge to home. Multivariable logistic regression analysis identified initial nonshockable cardiac arrest rhythm (odds ratio, 12.2; 95% CI, 2.83–52.7; p = 0.001), arterial lactate (odds ratio per unit, 1.15; 95% CI, 1.01–1.31; p = 0.041), and ischemic heart disease (odds ratio, 7.39; 95% CI, 1.57–34.7; p = 0.011) as independent risk factors of 90-day mortality, whereas low-flow duration, return of spontaneous circulation, and age were not. Conclusions: In 72 patients with cardiac arrest before venoarterial extracorporeal membrane oxygenation initiation, nonshockable rhythm, arterial lactate, and ischemic heart disease were identified as independent pre-venoarterial extracorporeal membrane oxygenation risk factors of 90-day mortality. The novelty of this study is that the metabolic state, expressed as the level of lactate just before venoarterial extracorporeal membrane oxygenation initiation, seems more predictive of outcome than cardiopulmonary resuscitation duration or absence of return of spontaneous circulation.
Effect of Increasing Blood Pressure With Noradrenaline on the Microcirculation of Patients With Septic Shock and Previous Arterial Hypertension
Objectives: To assess whether an increase in mean arterial pressure in patients with septic shock and previous systemic arterial hypertension changes microcirculatory and systemic hemodynamic variables compared with patients without arterial hypertension (control). Design: Prospective, nonblinded, interventional study. Setting: Three ICUs in two teaching hospitals. Patients: After informed consent, we included patients older than 18 years with septic shock for at least 6 hours, sedated, and under mechanical ventilation. We paired patients with and without arterial hypertension by age. Interventions: After obtaining systemic and microcirculation baseline hemodynamic variables (time 0), we increased the noradrenaline dose to elevate mean arterial pressure up to 85–90 mm Hg before collecting a new set of measurements (time 1). Measurements and Main Results: We included 40 patients (20 in each group). There was no significant difference in age between the groups. After the rise in mean arterial pressure, there was a significant increase in cardiac index and a slight but significant reduction in lactate in both groups. We observed a significant improvement in the proportion of perfused vessels (control: 57.2 ± 14% to 66 ± 14.8%; arterial hypertension: 61.4 ± 12.3% to 70.8 ± 7.1%; groups: p = 0.29; T0 and T1: p < 0.001; group and time interaction: p = 0.85), perfused vessel density (control: 15.6 ± 4 mm/mm2 to 18.6 ± 4.5 mm/mm2; arterial hypertension: 16.4 ± 3.5 mm/mm2 to 19.1 ± 3 mm/mm2; groups: p = 0.51; T0 and T1: p < 0.001; group and time interaction: p = 0.70), and microcirculatory flow index (control: 2.1 ± 0.6 to 2.4 ± 0.6; arterial hypertension: 2.1 ± 0.5 to 2.6 ± 0.2; groups: p = 0.71; T0 and T1: p = 0.002; group and time interaction: p = 0.45) in both groups. Conclusions: Increasing mean arterial pressure with noradrenaline in septic shock patients improves density and flow in the small vessels of the sublingual microcirculation. However, this improvement occurs both in patients with previous arterial hypertension and in those without. Trial registration: ClinicalTrials.gov NCT02519699.
Untangling Infusion Confusion: A Comparative Evaluation of Interventions in a Simulated Intensive Care Setting
Objectives: To assess the impact of interventions on preventing IV infusion identification and disconnection mix-ups. Design: Experimental study with repeated measures design. Setting: High fidelity simulated adult ICU. Subjects: Forty critical care nurses. Interventions: Participants had to correctly identify infusions and disconnect an infusion in four different conditions: baseline (current practice); line labels/organizers; smart pump; and light-linking system. Measurements and Main Results: Participants identified infusions with significantly fewer errors when using line labels/organizers (0; 0%) than in the baseline (12; 7.7%) and smart pump (10; 6.4%) conditions (p < 0.01). The light-linking system did not significantly affect identification errors (5; 3.2%) compared with the other conditions. Participants were significantly faster identifying infusions when using line labels/organizers (0:31) than in the baseline (1:20), smart pump (1:29), and light-linking (1:22) conditions (p < 0.001). When disconnecting an infusion, there was no significant difference in errors between conditions, but participants were significantly slower when using the smart pump than in all other conditions (p < 0.001). Conclusions: The results suggest that line labels/organizers may increase infusion identification accuracy and efficiency.


Health Economics
The economics of antibiotic resistance: a claim for personalised treatments


Patient responsiveness to a differential deductible: empirical results from The Netherlands

Abstract

Health insurers may use financial incentives to encourage their enrollees to choose preferred providers for medical treatment. Empirical evidence on whether differences in cost-sharing rates across providers affect patient choice behavior is limited, especially from Europe. This paper examines the effect of a differential deductible to steer patient provider choice in a Dutch regional market for varicose veins treatment. Using individual patients' choice data and information about their out-of-pocket payments covering the year of the experiment and 1 year before, we estimate a conditional logit model that explicitly controls for pre-existing patient preferences. Our results suggest that, in this natural experiment, designating preferred providers and waiving the deductible for enrollees using these providers significantly influenced patient choice. The average cross-price elasticity of demand is found to be 0.02, indicating that patient responsiveness to the cost-sharing differential itself was low. Unlike fixed cost-sharing differences, the deductible exemption was conditional on the patient's other medical expenses occurring in the policy year. The differential deductible did, therefore, not result in a financial benefit for patients with annual costs exceeding their total deductible.
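A conditional logit model of provider choice can be written down compactly; the numpy sketch below computes choice probabilities and the log-likelihood for a stylized version in which each provider's utility depends on attributes such as the out-of-pocket price. It is a generic illustration under assumed data shapes, not the paper's estimation code. Maximizing this log-likelihood over beta (for example, with scipy.optimize.minimize on its negative) recovers the taste parameters, including the price coefficient behind the reported cross-price elasticity.

import numpy as np

def choice_probabilities(X, beta):
    # X: (n_choice_sets, n_providers, n_features); beta: (n_features,)
    v = X @ beta                             # systematic utilities
    v -= v.max(axis=1, keepdims=True)        # numerical stability
    expv = np.exp(v)
    return expv / expv.sum(axis=1, keepdims=True)

def log_likelihood(beta, X, chosen):
    # chosen: index of the provider actually picked in each choice set
    p = choice_probabilities(X, beta)
    return np.log(p[np.arange(len(chosen)), chosen]).sum()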



Including intangible costs into the cost-of-illness approach: a method refinement illustrated based on the PM 2.5 economic burden in China

Abstract

The concentration of particulate matter with aerodynamic diameter less than 2.5 µm (PM2.5) or 10 µm (PM10) is a widespread concern, documented for 103 countries. During the past few years, the exposure–response function (ERf) has been widely used to estimate the health effects of air pollution. However, past studies are based either on the cost-of-illness or on the willingness-to-pay approach and, therefore, omit either intangible costs or costs due to absence from work. To address this limitation, a hybrid health effect and economic loss model is developed in this study. This novel approach is applied to a sample of environmental and cost data in China. First, the ERf is used to link PM2.5 concentrations to the health endpoints of chronic mortality, acute mortality, respiratory hospital admission, cardiovascular hospital admission, outpatient visits—internal medicine, outpatient visits—pediatrics, asthma attack, acute bronchitis, and chronic bronchitis. Second, the health effect of PM2.5 is monetized into the economic loss. The mean economic loss due to PM2.5 was much heavier in the North of China than in the South. Furthermore, the empirical results from 76 cities in China show that the health effects and economic losses were over 4.98 million cases and 382.30 billion yuan in 2014, a dramatic decrease compared with 2013.
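The log-linear exposure-response form that is standard in this literature makes the two-step logic concrete; the snippet below is a stylized sketch with invented numbers, and the coefficient, threshold, and unit cost are assumptions rather than values from the study.

import math

def attributable_cases(beta, c, c0, incidence, population):
    # beta: ERf coefficient per ug/m3; c: observed PM2.5 level;
    # c0: threshold concentration; incidence: baseline endpoint rate.
    rr = math.exp(beta * max(c - c0, 0.0))   # relative risk at level c
    attributable_fraction = (rr - 1.0) / rr
    return incidence * attributable_fraction * population

# Step 1: health effect for one endpoint in one hypothetical city.
cases = attributable_cases(0.0004, 85.0, 10.0, 0.006, 5_000_000)
# Step 2: monetize with an assumed unit cost per case (yuan).
economic_loss = cases * 20_000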



Unemployment and suicide in Italy: evidence of a long-run association mitigated by public unemployment spending

Abstract

From the mid-1990s on, the suicide rate in Italy declined steadily, then apparently rose again after the onset of the Great Recession, along with a sharp increase in unemployment. The aim of this study is to test the association between the suicide rate and unemployment (i.e., the unemployment rate for males and females in the period 1977–2015, and the long-term unemployment rate in the period 1983–2012) in Italy, by means of cointegration techniques. The analysis was adjusted for public unemployment spending (referring to the period 1980–2012). The study identified a long-run relationship between the suicide rate and long-term unemployment. An association between suicide and the unemployment rate also emerged, though it was statistically weaker. A 1% increase in long-term unemployment increases the suicide rate by 0.83%, with a long-term effect lasting up to 18 years. Public unemployment spending (as a percentage of the Italian gross domestic product) may mitigate this association: when its annual growth rate is higher than 0.18%, no impact of unemployment on suicide is detectable. A decrease in the suicide rate is expected for higher amounts of social spending, which may be able to compensate for the reduced level of social integration resulting from unemployment, helping the individual to continue to integrate into society. A corollary of this is that austerity in times of economic recession may exacerbate the impact of the economic downturn on mental health. However, a specific "flexicurity" system (intended as a combination of high employment protection, job satisfaction and labour-market policies) may have a positive impact on health.
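The headline elasticity translates into simple arithmetic; the lines below show the implied effect under the assumed log-log reading, with an illustrative baseline rate.

base_rate = 8.0          # suicides per 100,000, illustrative only
elasticity = 0.83        # reported long-run elasticity
ltu_increase_pct = 1.0   # long-term unemployment rises by 1%
new_rate = base_rate * (1 + elasticity * ltu_increase_pct / 100)
# new_rate is about 8.07 per 100,000, a 0.83% increase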



Income distribution and health: can polarization explain health outcomes better than inequality?

Abstract

Utilizing data from the China Health and Nutrition Survey (CHNS) from 1991 to 2011, we aim to analyze the effects of income distribution on two risks for chronic diseases: body mass index (BMI) and blood pressure. Unlike previous studies, we consider two different kinds of indicators of income distribution: inequality and polarization. In contrast to relative inequality indicators such as the Gini index, which measure income gaps only, the recently developed polarization indicator captures group clustering and social alienation in addition to income gaps. Our empirical results demonstrate that both BMI and blood pressure are positively correlated with income polarization, while inequality is a weaker predictor of these health outcomes. Thus, polarization, rather than inequality, should be used when analyzing the relationship between health outcomes and income distribution. We also examine the polarization-to-health transmission mechanism using mediation and moderation analytic frameworks. The results suggest that social networks mediate the effect of polarization on BMI and neutralize the effect on blood pressure.



Co-ordination of health care: the case of hospital emergency admissions

Abstract

The recognition that chronic care delivery is suboptimal has led many health authorities around the world to redesign it. In Norway, the Department of Health and Care Services implemented the Coordination Reform in January 2012. One policy instrument was to build emergency bed capacity (EBC) as an integrated part of the primary care service provided by municipalities. The explicit aim was to reduce the rate of avoidable admissions to state-owned hospitals. Using five different sources of register data and a quasi-experimental framework—the "difference-in-differences" regression approach—we estimated the association between changes in EBC and changes in aggregate emergency hospital admissions for eight ambulatory care sensitive conditions (ACSC). The results show that EBC is negatively associated with changes in aggregate ACSC emergency admissions. The associations are largely consistent across alternative model specifications. We also estimated the relationship between changes in EBC and changes in each ACSC separately. Our results are mixed. EBC is negatively associated with emergency hospital admissions for asthma, angina, and chronic obstructive pulmonary disease, but not for congestive heart failure and diabetes. The main implication of the study is that EBC within primary care is potentially a sensible way of redesigning chronic care.
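A minimal two-way fixed effects version of the "difference-in-differences" design reads as follows; municipality and year effects absorb level differences, and the ebc coefficient is the quantity of interest. The variable names and the clustered-error choice are assumptions of this sketch, and the paper's actual specification is richer.

import statsmodels.formula.api as smf

def did_estimate(panel):
    # panel: municipality-year rows with admissions (aggregate ACSC
    # emergency admissions), ebc (emergency bed capacity), and ids.
    model = smf.ols("admissions ~ ebc + C(municipality) + C(year)",
                    data=panel)
    res = model.fit(cov_type="cluster",
                    cov_kwds={"groups": panel["municipality"]})
    return res.params["ebc"], res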



Impact of early primary care follow-up after discharge on hospital readmissions

Abstract

Reducing repeated hospitalizations of patients with chronic conditions is a policy objective for improving system efficiency. We test the hypothesis that the risk of readmission is associated with the timing and intensity of primary care follow-up after discharge, focusing on patients hospitalized for heart failure in France. We propose a discrete-time model that takes into account that primary care treatments have a lagged and cumulative effect on readmission risk, and an instrumental variable approach exploiting geographical differences in the availability of generalists. We show that early consultations with a GP after discharge can reduce the 28-day readmission risk by almost 50%, and that patients with higher ambulatory care utilization have smaller odds of readmission. Furthermore, geographical disparities in primary care indirectly affect the readmission risk. These results suggest that interventions which strengthen communication between hospitals and generalists are elemental for reducing readmissions; for developing effective strategies at the hospital level, it is also necessary to consider the primary care resources available to patients.
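The discrete-time model described can be fit as a logistic regression on person-period data, with lagged and cumulative primary care terms; the sketch below illustrates that structure only, with assumed variable names and without the instrumental variable step.

import statsmodels.formula.api as smf

def discrete_time_hazard(pp):
    # pp: one row per patient per period after discharge, with
    # readmit (1 if readmitted in this period), period index,
    # gp_visit_lag1 (GP consultation in the previous period), and
    # gp_cum (cumulative consultations since discharge).
    model = smf.logit("readmit ~ C(period) + gp_visit_lag1 + gp_cum",
                      data=pp)
    return model.fit(disp=False)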



The Great Recession, financial strain and self-assessed health in Ireland

Abstract

In this paper, we study the effects of the 2008 economic crisis on general health in one of the most severely affected EU economies—Ireland. We examine the relationship between compositional changes in demographic and socio-economic factors, such as education, income, and financial strain, and changes in the prevalence of poor self-assessed health over a 5-year period (2008–2013). We apply a generalised Oaxaca–Blinder decomposition approach for non-linear regression models, as proposed by Fairlie (1999, 2005). Results show that increased financial strain explained the largest part of the increase in poor health in the Irish population and in different sub-groups. Changes in economic activity status and population structure also had a significant positive effect. The expansion of education had a significant negative effect, preventing further increases in poor health. Wealthier and better educated individuals experienced larger relative increases in poor health, which led to reduced socio-economic health inequalities.



An evaluation of the 1987 French Disabled Workers Act: better paying than hiring

Abstract

This paper presents the first evaluation of the French Disabled Workers Act of 1987, which aimed to promote the employment of disabled people in the private sector. We use a panel data set that includes both the health and the labour market histories of workers. We account both for unobserved heterogeneity and for the change in the disabled population over time. We find that the law had a negative impact on the employment of disabled workers in the private sector. This counterproductive effect likely comes from the option of paying a fine instead of hiring disabled workers.



Denosumab versus bisphosphonates for the treatment of bone metastases from solid tumors: a systematic review

Abstract

Background

Bone metastases are highly prevalent in breast, prostate, lung, and colon cancers. Their symptoms negatively affect quality of life and functionality, and optimal management can mitigate these problems. Two different classes of targeted agents are available to treat them: bisphosphonates (pamidronate and zoledronic acid) and the monoclonal antibody denosumab. Estimates of cost-effectiveness are still mixed.

Objective

To conduct a systematic review of economic studies that compare these two options.

Method

The literature search comprised eight databases, using keywords for bone metastases, bisphosphonates, denosumab, and economic studies. Data were extracted regarding methodologic characteristics and cost-effectiveness analyses. All studies were evaluated for methodological quality.

Results

A total of 263 unique studies were retrieved and six met the inclusion criteria. All studies were based on clinical trials and other existing literature data, and they had high methodological quality. Most found unfavorable cost-effectiveness for denosumab compared with zoledronic acid, with adjusted ICERs ranging from US$4,638 to US$87,354 per skeletal-related event (SRE) avoided and from US$57,274 to US$4.81 million per QALY gained, varying widely according to tumor type and time horizon, among other factors. Results were sensitive to drug costs, time to first SRE, time horizon, and utility.

Conclusions

Denosumab had unfavorable cost-effectiveness compared with zoledronic acid in most of the included studies. New economic studies based on real-world data and longer time horizons comparing these therapeutic options are needed.
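For readers unfamiliar with the ICER metric quoted above, the arithmetic is straightforward; the numbers below are purely illustrative and are not taken from the reviewed studies.

cost_d, cost_z = 28_000.0, 16_000.0   # assumed per-patient costs, US$
qaly_d, qaly_z = 1.46, 1.41           # assumed QALYs per patient
icer = (cost_d - cost_z) / (qaly_d - qaly_z)
# icer = 240,000 US$ per QALY gained: the extra cost of denosumab
# divided by the extra health benefit it buys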





Public Health
Direct healthcare costs of spinal disorders in Brazil

Abstract

Objectives

To estimate the direct healthcare costs of spinal disorders in Brazil in 2016.

Methods

This is a prevalence-based cost-of-illness study with a top-down approach from the perspective of the public healthcare system. All International Classification of Diseases codes related to spinal disorders were included. The following costs were obtained: (1) inpatient: hospitalization, medical professional services, intensive care unit stay, and companion daily stay; (2) outpatient: services and procedures. Data were analyzed descriptively and costs are presented in US$.

Results

The healthcare system spent US$ 71.4 million, of which inpatient care represented 58%. The number of inpatient days was 250,426, and there were 36,654 hospital admissions (dorsalgia and disk disorders representing 70% of the costs). More than 114,000 magnetic resonance scans and 107,000 computerized tomography scans were performed. Men had more inpatient days (138,215) than women (112,211). Overall, the inpatient/outpatient cost ratio was twice as high for men as for women.

Conclusions

We demonstrated that the direct costs of spinal disorders in Brazil in 2016 were considerable. We also found a substantial amount of financial resources spent on diagnostic imaging. This is relevant as the routine use of diagnostic imaging for back pain is discouraged in international guidelines.



Roma health: Do we know enough?


Reducing socioeconomic inequalities in life expectancy among municipalities: the Brazilian experience

Abstract

Objectives

This study analyzed the evolution of regional and socioeconomic inequality in life expectancy (LE) at birth and the probability of living up to 40 (LU40) and up to 60 years of age (LU60) in Brazilian municipalities between 1991 and 2010.

Methods

We analyzed data from the last three national censuses (1991, 2000 and 2010) computed for the 5,565 Brazilian municipalities. Municipalities were divided into centiles according to average per capita income. Poisson regression was performed to calculate the ratios between the poorest and the richest centiles.

Results

Between 1991 and 2010, the average LE increased by 8.8 years, LU40 by 6.7 percentage points (pp), and LU60 by 12.2 pp. The ratio of LE between the 1% of richest municipalities and the 1% of poorest municipalities decreased from 1.20 in 1991 to 1.09 in 2010. While the poorest municipalities gained around 12 years of life, the richest gained around 7 years.

Conclusions

There was a remarkable decrease in regional and socioeconomic inequality in LE, LU40 and LU60 in Brazil between 1991 and 2010.



Clusters of risk behaviors for noncommunicable diseases in the Brazilian adult population

Abstract

Objectives

To identify clusters of risk behaviors among Brazilian adults, by sex, and to associate clusters with sociodemographic factors and self-perception of health.

Methods

We assessed 46,785 adults from the Brazilian National Health Survey. The risk behaviors were low consumption of fruits and vegetables—LFV (< 5 times/week), physical inactivity—PI (< 150 min/week), smoking (yes/no), and excessive consumption of alcohol—EA (5 doses for men, 4 doses for women). We used Venn diagrams, cluster analysis, and multinomial regression models.

Results

We found 9 clusters. The cluster of all four risk behaviors was more common in males (3.2% vs. 0.83%). Despite a greater potential for aggregation of behaviors in females (O/E = 2.48) than in males (O/E = 1.62), women were less likely to have all risk behaviors jointly (OR 0.24, 95% CI 0.19–0.31), and this was also found for the other clusters. In general, Brazilian adults who were black/brown, younger, had a low education level, and had a self-perception of bad health were more likely to engage in clusters of risk behaviors.
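The O/E ratios quoted above compare the observed prevalence of a behavior combination with the prevalence expected if the behaviors occurred independently; the small sketch below makes the calculation explicit with invented marginal prevalences.

def oe_ratio(observed_prev, marginal_prevs):
    expected = 1.0
    for p in marginal_prevs:
        expected *= p                 # independence assumption
    return observed_prev / expected

# e.g., all four behaviors observed in 3.2% of men while the product
# of four assumed marginal prevalences is 2.0%: O/E = 1.6, so the
# behaviors cluster more than chance would predict (O/E = 1 means none).
print(oe_ratio(0.032, [0.50, 0.40, 0.20, 0.50]))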

Conclusions

The prevalence of Brazilian adults engaging in clusters of risk behaviors is high, mainly among males, those who reported bad health, and those with low socioeconomic status.



Palliative care in universal health coverage: What about humanitarian emergency assistance?


Migrant mental health, Hickam's dictum, and the dangers of oversimplification


Road map towards a harmonized pan-European surveillance of obesity-related lifestyle behaviours and their determinants in children and adolescents

Abstract

Objectives

To develop a road map towards a harmonized pan-European surveillance system for children and adolescents.

Methods

Representatives of five European surveillance systems and the German Health Interview and Examination Survey for Children and Adolescents contributed to the road map through a structured workshop in 2016.

Results

A conceptual framework for this road map was developed with seven action points (APs) guiding the successive cross-country harmonization. First, key indicators of health behaviour and their determinants in children and adolescents will be identified (AP1, 2); short screening instruments will be developed and implemented to assess and monitor key indicators (AP3, 4). In parallel, optional supplementary modules could be implemented to provide objective data (AP5). This would allow mutual calibration and improvement of existing instruments before their progressive replacement by more comparable measurement tools (AP6). The establishment of a competence platform is envisaged for guiding the harmonization process (AP7).

Conclusions

This approach builds on existing systems, provides comparable key health indicators across European regions, helps to assess temporal trends and—once in place—will facilitate health reporting and monitoring of national and international health targets.



Association of objectively measured and perceived environment with accelerometer-based physical activity and cycling: a Swiss population-based cross-sectional study of children

Abstract

Objectives

We tested whether objectively assessed neighbourhood characteristics are associated with moderate-to-vigorous physical activity (MVPA) and cycling in Swiss children and adolescents and assessed the mediating role of the perception of the environment.

Methods

The cross-sectional analyses were based on data from 1306 participants aged 6–16 years of the population-based SOPHYA study. MVPA was measured by accelerometry; time spent cycling and the perceived environment were assessed by questionnaire. Objective environmental parameters at the residential address were GIS-derived. In all analyses, personal, social and environmental factors were considered.

Results

MVPA showed significant positive associations with perceived personal safety and perceived access to green spaces, but not with the respective objective parameters. Objectively assessed main street density and shorter distance to the nearest public transport were associated with less cycling in adolescents. Parents' perceptions did not mediate the observed associations of the objectively assessed environment with MVPA and cycling.

Conclusions

Associations between the environment and physical activity differ by domain. In spatial planning, efforts to improve objective environments should be complemented by efforts to increase parental sense of security.



Obesity risk in women of childbearing age in New Zealand: a nationally representative cross-sectional study

Abstract

Objectives

To investigate risk factors for obesity among women of childbearing age.

Methods

A cross-sectional survey of New Zealand women (15–49 years) with measured height and weight was used [unweighted (n = 3,625) and weighted analytical sample (n = 1,098,372)] alongside sociodemographic-, behavioural- and environmental-level predictors. Multilevel logistic regression weighted for non-response of height and weight data was used.

Results

Meeting physical activity guidelines (adjusted odds ratio [AOR] 0.66, 95% CI 0.54–0.80), Asian (AOR 0.15, 95% CI 0.10–0.23) and European/other ethnicity (AOR 0.46, 95% CI 0.36–0.58), and an increased availability of public greenspace (Q4 AOR 0.55, 95% CI 0.41–0.75) were related to decreased obesity risk. Older age (45–49 years: AOR 3.01, 95% CI 2.17–4.16), Pacific ethnicity (AOR 2.81, 95% CI 1.87–4.22), and residing in deprived areas (AOR 1.65, 95% CI 1.16–2.35) or secondary urban areas (AOR 1.49, 95% CI 1.03–2.18) were related to increased obesity risk. When examined by rural/urban classification, private greenspace was only related to increased obesity risk in main urban areas.

Conclusions

This study highlights factors, including but not limited to public greenspace, that can inform obesity interventions for women of childbearing age in New Zealand.



Source-country individualism, cultural shock, and depression among immigrants

Abstract

Objectives

To determine whether there is a relationship between source-country individualism and depression among different immigrant groups.

Methods

Pooled data from the 2009–2014 waves of the Canadian Community Health Survey (CCHS) were used. The CCHS is a cross-sectional, nationally representative household survey. A sample of 4,347 immigrants in Canada, representing 101 source countries, was studied.

Results

Multi-level logistic regression analysis showed a curvilinear relationship between source-country individualism and depression. A positive relationship was found among immigrants from countries with mid- to high levels of individualism. However, an inverse relationship was observed among immigrants from countries with low to mid-levels of individualism. Depression was significantly associated with the linear form of the source-country individualism measure [odds ratio (OR) 0.950; 95% confidence interval (CI) 0.915–0.987] and its squared term (OR 1.063; 95% CI 1.026–1.102).
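The curvilinear pattern can be read directly off the two odds ratios: on the log-odds scale the fit is a quadratic, and its vertex marks the individualism level at which depression risk is lowest. The computation below is simple arithmetic on the reported estimates; the units of the turning point are those of the individualism measure used in the model, which is an assumption here.

import math

b1 = math.log(0.950)      # linear term on the log-odds scale, about -0.051
b2 = math.log(1.063)      # squared term, about +0.061
turning_point = -b1 / (2 * b2)
# turning_point is about 0.42: depression risk falls with individualism
# up to this level and rises beyond it, matching the curvilinear shape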

Conclusions

A high level of source-country individualism tends to increase the prevalence of depression among immigrants. There is also a cultural shock effect: the prevalence of depression was higher in the initial years after immigration for those who migrated from countries with low levels of individualism.





Pediatric Critical Care Medicine
Antibiotic Prophylaxis for Open Chest Management After Pediatric Cardiac Surgery
Objectives: Although open chest management optimizes hemodynamics after cardiac surgery, it increases postoperative infections and leads to increased mortality. Despite the importance of antibiotic prophylaxis during open chest management, no specific recommendations exist. We aimed to compare the occurrence rates of bloodstream infection and surgical site infection between different prophylactic antibiotic regimens for open chest management after pediatric cardiac surgery. Design: Retrospective, single-center, observational study. Setting: PICU at a tertiary children's hospital. Patients: Consecutive patients less than or equal to 18 years old with open chest management after cardiac surgery followed by delayed sternal closure, between January 2012 and June 2018. Interventions: None. Measurements and Main Results: We compared the composite occurrence rate of postoperative bloodstream infection and surgical site infection within 30 days after cardiac surgery between three prophylactic antibiotic regimens: 1) cefazolin, 2) cefazolin + vancomycin, and 3) vancomycin + meropenem. In 63 pediatric cardiac surgeries with open chest management, 17 bloodstream infections and 12 surgical site infections were identified postoperatively. The composite occurrence rates of bloodstream infection and surgical site infection were 10 of 15 (67%), 10 of 19 (53%), and nine of 29 (31%) in the cefazolin, cefazolin + vancomycin, and vancomycin + meropenem regimens, respectively (p = 0.07). After adjusting for age, open chest management duration, extracorporeal membrane oxygenation use, and nasal methicillin-resistant Staphylococcus aureus colonization in multivariable analysis, there was no significant difference between the cefazolin and the cefazolin + vancomycin regimens (p = 0.19), while the vancomycin + meropenem regimen had a lower occurrence rate of bloodstream infection and surgical site infection than the cefazolin regimen (odds ratio, 0.0885; 95% CI, 0.0176–0.446; p = 0.003). Conclusions: In this study, a lower occurrence rate of postoperative bloodstream infection and surgical site infection was observed among patients receiving a broad-spectrum antibiotic regimen after pediatric cardiac surgery with open chest management. Further studies, ideally randomized controlled studies investigating the efficacy of broad-spectrum antibiotics and their complications, are warranted before routine implementation of a broad-spectrum prophylactic antibiotic regimen.

Noninvasive Determination of Blood Pressure by Heart Sound Analysis Compared With Intra-Arterial Monitoring in Critically Ill Children—A Pilot Study of a Novel Approach
Objectives: To develop a novel device to predict systolic and diastolic blood pressure based on measured heart sound signals and evaluate its accuracy in comparison to intra-arterial blood pressure readings. Study Design: Prospective, observational pilot study. Setting: PICU. Patients: Critically ill children (0–18 yr) undergoing continuous blood pressure monitoring via radial artery intra-arterial catheters were enrolled in the study after informed consent. The study included medical, cardiac, and surgical PICU patients. Interventions: Along with intra-arterial blood pressure, patients' heart sounds were recorded simultaneously by a highly sensitive sensor taped to the chest. Additional hardware included a data acquisition unit and a laptop computer. Subsequently, advanced signal processing technologies were used to minimize random interfering signals and to extract and separate S1 and S2 signals. A computerized model was then developed using artificial neural network systems to estimate blood pressure from the extracted heart sound analysis. Measurements and Main Outcomes: We found a statistically significant correlation for systolic (r = 0.964; R2 = 0.928) and diastolic (r = 0.935; R2 = 0.868) blood pressure readings (n = 491) estimated by the novel heart-sound signal-based method and those recorded by intra-arterial catheters. The mean difference of the individually paired determinations of blood pressure between the heart-sound-based method and intra-arterial catheters was 0.6 ± 7 mm Hg for systolic blood pressure and –0.06 ± 5 mm Hg for diastolic blood pressure, which was within the recommended range of 5 ± 8 mm Hg for any new blood pressure device. Conclusions: Our findings provide proof of concept that the heart-sound signal-based method can provide accurate, noninvasive blood pressure monitoring.
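The modeling step can be pictured with a small sketch: features extracted from the separated S1/S2 heart-sound signals feed a neural network that outputs systolic and diastolic pressure, and agreement is judged as the mean and SD of paired differences against the intra-arterial reference. The feature set, network size, and library choice below are assumptions; the study's actual signal processing pipeline is not reproduced here.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def train_bp_model(features: np.ndarray, bp: np.ndarray, seed=0):
    # features: (n_beats, n_features) derived from S1/S2 analysis;
    # bp: (n_beats, 2) intra-arterial [systolic, diastolic] labels.
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, bp, test_size=0.2, random_state=seed)
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                         random_state=seed).fit(X_tr, y_tr)
    diff = model.predict(X_te) - y_te      # paired differences
    return model, diff.mean(axis=0), diff.std(axis=0)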

Relationship Between Diaphragmatic Electrical Activity and Esophageal Pressure Monitoring in Children
Objectives: Mechanical ventilation is an essential life-support technology, but it is associated with side effects in cases of over- or under-assistance. Monitoring respiratory effort may facilitate titration of the support. The gold standard for measuring respiratory effort is esophageal pressure monitoring, a technology not commonly available at the bedside. Diaphragmatic electrical activity can be routinely monitored in clinical practice and reflects the output of the respiratory centers. We hypothesized that changes in diaphragmatic electrical activity accurately reflect changes in mechanical effort. The objective of this study was to characterize the relationship between diaphragmatic electrical activity and esophageal pressure. Design: Prospective crossover study. Setting: Esophageal pressure and diaphragmatic electrical activity were recorded simultaneously using a specific nasogastric tube under three conditions: during pressure support ventilation and neurally adjusted ventilatory assist, in random order, and then after extubation. Patients: Children in the weaning phase of mechanical ventilation. Interventions: The maximal swing in esophageal pressure, the esophageal pressure-time product, maximum diaphragmatic electrical activity, and the inspiratory diaphragmatic electrical activity integral were calculated from 100 consecutive breaths. Neuroventilatory efficiency was estimated as the ratio of tidal volume to maximum diaphragmatic electrical activity. Measurements and Main Results: Sixteen patients, with a median age of 4 months (interquartile range, 0.5–13 mo) and a median weight of 5.8 kg (interquartile range, 4.1–8 kg), were included. A strong linear correlation was observed in all ventilatory conditions between maximum diaphragmatic electrical activity and the maximal swing in esophageal pressure (r² > 0.95), and between the inspiratory diaphragmatic electrical activity integral and the esophageal pressure-time product (r² > 0.71). This correlation was not modified by the type of ventilatory support. Conclusions: On a short-term basis, changes in diaphragmatic electrical activity are strongly correlated with changes in esophageal pressure. In clinical practice, monitoring diaphragmatic electrical activity may help inform clinicians about changes in respiratory effort. Dr. Baudin received funding from Maquet Critical Care (speaking fees and nonfinancial support). Dr. Beck received funding from Maquet Critical Care (she and her husband have made patented inventions related to neural control of mechanical ventilation; the patents are assigned to the academic institutions where the inventions were made; the license for these patents belongs to Maquet Critical Care, and future commercial uses of this technology may provide financial benefit to them through royalties) and from Neurovent Research Inc. (NVR) (she and her husband own 50% of NVR, a research and development company that builds the equipment and catheters for research studies; NVR has a consulting agreement with Maquet Critical Care). Dr. Jouvet's institution received funding from Air Liquide Santé (grant and lecture), and he received salary and grant funding from the Ministry of Health of Quebec, Sainte-Justine Hospital, and the Public Research Agency of Quebec. Dr. Emeriaud's institution received funding from Fonds de Recherche du Québec—Santé and Maquet Critical Care (currently supporting a feasibility study in neonatal ventilation that Dr. Emeriaud is leading). The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: sandrine.essouri.hsj@ssss.gouv.qc.ca
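To make the reported correlation concrete, here is a minimal Python sketch of a breath-by-breath analysis of this kind. It is not the authors' code: the signals, units, and effect size are synthetic stand-ins, and only the quantities named in the abstract (peak EAdi, the swing in esophageal pressure, and neuroventilatory efficiency) are taken from the text.

```python
# Minimal sketch, not the study's code: all signals below are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_breaths = 100  # the study analyzed 100 consecutive breaths per condition

eadi_max = rng.uniform(5.0, 25.0, n_breaths)                  # hypothetical peak EAdi (uV)
pes_swing = 0.6 * eadi_max + rng.normal(0.0, 1.0, n_breaths)  # hypothetical max delta-Pes (cmH2O)

# Strength of the linear relationship, reported as r^2 in the abstract
fit = stats.linregress(eadi_max, pes_swing)
print(f"r^2 = {fit.rvalue ** 2:.3f}")

# Neuroventilatory efficiency as defined in the abstract: tidal volume / max EAdi
tidal_volume = rng.uniform(30.0, 60.0, n_breaths)             # hypothetical Vt (mL)
print(f"median NVE = {np.median(tidal_volume / eadi_max):.2f} mL/uV")
```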

International Study of the Epidemiology of Platelet Transfusions in Critically Ill Children With an Underlying Oncologic Diagnosis
Objectives: To describe the epidemiology of platelet transfusions in critically ill children with an underlying oncologic diagnosis and to examine the effects of prophylactic versus therapeutic transfusions. Design: Subgroup analysis of a prospective, observational study. Setting: Eighty-two PICUs in 16 countries. Patients: All children (3 d to 16 yr old) who received a platelet transfusion during one of the six predefined screening weeks and had received chemotherapy in the previous 6 months or had undergone hematopoietic stem cell transplantation in the last year. Interventions: None. Measurements and Main Results: Of the 548 patients enrolled in the parent study, 237 (43%) had an underlying oncologic diagnosis. In this population, 71% (168/237) of transfusions were given prophylactically, and 59% (139/237) were given at a total platelet count greater than 20 × 10⁹/L, higher than current recommendations. Patients with an underlying oncologic diagnosis were significantly older and received less support, including less mechanical ventilation, fewer medications that affect platelet function, and less use of extracorporeal life support, than those without an underlying oncologic diagnosis. In this subpopulation, there was no statistically significant difference in median (interquartile range) platelet transfusion thresholds between bleeding and nonbleeding patients (50 × 10⁹/L [10–50 × 10⁹/L] and 30 × 10⁹/L [10–50 × 10⁹/L], respectively; p = 0.166). The median (interquartile range) interval transfusion increment in children with an underlying oncologic diagnosis was 17 × 10⁹/L (6–52 × 10⁹/L). The presence of an underlying oncologic diagnosis was associated with a poor platelet increment response to transfusion in this cohort (adjusted odds ratio, 0.46; 95% CI, 0.22–0.95; p = 0.035). Conclusions: Children with an underlying oncologic diagnosis receive nearly half of the platelet transfusions prescribed by pediatric intensivists. Over half of these transfusions are prescribed at total platelet counts greater than current recommendations. Studies must be done to clarify appropriate indications for platelet transfusion in this vulnerable population. The Point Prevalence Study of Platelet Transfusions in Critically Ill Children (P3T) Investigators are listed in the Acknowledgments. Dr. Cushing received funding from Cerus Corporation, Octapharma, and Instrumentation Laboratory. Dr. Steiner's institution received funding from the National Institutes of Health and Boehringer Ingelheim, and she received funding from Cerus (travel for study design consultation regarding pathogen-inactivated red cells). The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: man9026@med.cornell.edu
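As an illustration of the kind of nonparametric threshold comparison reported above, here is a minimal Python sketch; the group sizes and count distributions are invented, not the study's data.

```python
# Minimal sketch with synthetic counts; group sizes and distributions are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bleeding = rng.lognormal(mean=3.4, sigma=0.8, size=60)       # pre-transfusion counts (x10^9/L)
nonbleeding = rng.lognormal(mean=3.2, sigma=0.8, size=120)

def median_iqr(x):
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return f"{med:.0f} ({q1:.0f}-{q3:.0f}) x10^9/L"

# Nonparametric comparison of transfusion thresholds between the two groups
u_stat, p = stats.mannwhitneyu(bleeding, nonbleeding, alternative="two-sided")
print("bleeding:   ", median_iqr(bleeding))
print("nonbleeding:", median_iqr(nonbleeding))
print(f"p = {p:.3f}")
```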

Parent Medical Traumatic Stress and Associated Family Outcomes After Pediatric Critical Illness: A Systematic Review
Objectives: To critically review, analyze, and synthesize the literature on parent medical traumatic stress from a child's critical illness requiring PICU admission and its association with outcomes of parent mental and physical health and family functioning. Data Sources: Systematic literature search of PubMed, Embase, CINAHL, and PsycINFO. Study Selection: Two reviewers identified peer-reviewed published articles meeting the following criteria: 1) published between January 1, 1980, and August 1, 2018; 2) published in English; 3) study population of parents of children with a PICU admission; and 4) quantitative studies examining factors associated with outcomes of parent mental health, parent physical health, or family functioning. Data Extraction: The literature search yielded 2,476 articles, of which 23 studies met inclusion criteria. Extracted data included study characteristics, descriptive statistics of parent outcomes after critical illness, and variables associated with parent and family outcomes. Data Synthesis: The studies examined numerous variables associated with parent and family outcomes and used multiple survey measures. These variables were categorized according to their phase in the Integrative Trajectory Model of Pediatric Medical Traumatic Stress, which comprises peri-trauma, acute medical care, and ongoing care or discharge from care. The majority of objective elements of a child's illness, such as severity of illness and length of hospitalization, did not have a clear relationship with parent and family outcomes. However, familial preexisting factors, a parent's subjective experience in the PICU, and family life stressors after discharge were often associated with parent and family outcomes. Conclusions: This systematic literature review suggests that parent and family outcomes after pediatric critical illness are influenced by familial preexisting factors, a parent's subjective experience in the PICU, and family life stressors after discharge. Developing parent interventions focused on modifying the parent's subjective experience in the PICU could be an effective approach to improving parent outcomes. This work was supported by the Department of Pediatrics at Children's Hospital of Michigan and the Department of Pediatrics at the University of Michigan. The authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: lyagiela@dmc.org

Severe Sepsis in Pediatric Liver Transplant Patients: The Emergence of Multidrug-Resistant Organisms
Objectives: To describe the characteristics of liver transplant patients with severe sepsis in the PICU. Design: Retrospective descriptive analysis. Setting: Tertiary children's hospital PICU. Patients: Liver transplant recipients admitted January 2010 to July 2016 for pediatric severe sepsis. Interventions: None. Measurements and Main Results: Between January 2010 and July 2016, 173 liver transplants were performed, and 36 of these patients (21%) were admitted with severe sepsis (54 episodes total). Median age at admission was 2 years (1–6.5 yr), and 47.2% were male. Bacterial infections were the most common (77.8%), followed by culture-negative (12.9%) and viral (7.4%) infections; fungal infections accounted for only 1.9%. Median time from transplant was 18 days (8.25–39.75 d) for viral infections and 25 days (9–41 d) for culture-negative infections, compared with 54.5 days (17–131.25 d) for bacterial infections. Bloodstream and intra-abdominal infections were the most common bacterial sites (45% and 22.5%, respectively). Multidrug-resistant organisms accounted for 47.6% of bacterial sepsis. Vancomycin-resistant Enterococcus and extended-spectrum beta-lactamase producers were the most frequently identified multidrug-resistant organisms. Patients with multidrug-resistant organism sepsis had higher admission Pediatric Logistic Organ Dysfunction scores (p = 0.043) and odds ratios of 3.8 and 3.6 for mechanical ventilation and multiple organ dysfunction syndrome, respectively (p = 0.047 and p = 0.044). Overall mortality was 5.5% (n = 2), with both deaths occurring in multidrug-resistant organism episodes. Conclusions: We report that multidrug-resistant organisms are increasingly being identified as causative pathogens for sepsis in pediatric liver transplant recipients and are associated with significantly higher odds of mechanical ventilation and greater organ failure. The emergence of multidrug-resistant organism infections in pediatric liver transplant patients has implications for patient outcomes, antibiotic stewardship, and infection prevention strategies. Supported, in part, by grants from the National Institutes of Health (NIH): T32-HD40686 (to Dr. Alcamo) and R01-GM108618 (to Dr. Carcillo). Dr. Alcamo's institution received funding from NIH T32-HD040686. Drs. Alcamo, Carcillo, and Aneja received support for article research from the NIH. Dr. Carcillo's institution received funding from the NIH/National Institute of General Medical Sciences. Dr. Michaels' institution received funding from Pfizer (unrelated study grant), and she received funding as an American Society of Transplantation board member (travel and room for meetings, no honoraria) and from the National Institute of Allergy and Infectious Diseases (honoraria and travel and room for Data and Safety Monitoring Board meetings). Dr. Aneja received royalties from UpToDate. The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: Alicia.Alcamo@chp.edu
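For readers who want the arithmetic behind an odds ratio like those quoted above, here is a minimal Python sketch computing a crude (unadjusted) odds ratio from an invented 2x2 table; the study's estimates of 3.8 and 3.6 came from its own data, not from these numbers.

```python
# Minimal sketch: a crude (unadjusted) odds ratio from an invented 2x2 table.
from scipy.stats import fisher_exact

#                  ventilated  not ventilated
table = [[15, 7],     # MDRO sepsis episodes (hypothetical counts)
         [12, 20]]    # non-MDRO sepsis episodes (hypothetical counts)

odds_ratio, p_value = fisher_exact(table)
print(f"crude OR = {odds_ratio:.1f}, p = {p_value:.3f}")
```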

State of the Unit: Physician Gender Diversity in Pediatric Critical Care Medicine Leadership
Gender disparities in leadership are receiving increased attention throughout medicine and medical subspecialties, yet little is known about such disparities in Pediatric Critical Care Medicine. In this piece, we explore gender disparities in Pediatric Critical Care Medicine physician leadership. We examine physician leadership in Accreditation Council for Graduate Medical Education fellowship programs, as well as a limited sample of major Pediatric Critical Care Medicine textbooks and societies. Overall, the gender composition of division directors is not significantly different from that of the workforce, although regional differences exist. More women than men lead fellowship programs, a higher ratio than in the workforce as a whole. However, greater gender disparities are present in editorial leadership in this limited analysis. We conclude by recommending potential paths forward for further study and intervention, such as tracking gender diversity and remaining cognizant of the unique challenges that women currently experience in professional advancement. Dr. Riley receives support from the Institute for Healthcare Improvement to develop a measurement framework and measures for the 100 Million Healthier Lives Initiative. Dr. Stalets received funding from Fisher & Paykel (hotel and flight accommodations to attend a conference). The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: Andrea.Maxwell@cchmc.org
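One way to operationalize the comparison of leadership and workforce composition described above is a two-sample test of proportions; the sketch below uses invented counts purely for illustration.

```python
# Minimal sketch with invented counts: comparing the proportion of women among
# division directors with the proportion of women in the wider PCCM workforce.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

women = np.array([30, 400])     # women: directors, workforce (hypothetical)
totals = np.array([100, 1200])  # totals: directors, workforce (hypothetical)

z_stat, p_value = proportions_ztest(women, totals)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```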

Association of Organ Dysfunction Scores and Functional Outcomes Following Pediatric Critical Illness
Objectives: Short-term and long-term morbidity and mortality are common following pediatric critical illness. Severe organ dysfunction is associated with significant in-hospital mortality in critically ill children; however, the performance of pediatric organ dysfunction scores as predictors of functional outcomes after critical illness has not previously been assessed. Design: Secondary analysis of a prospective observational cohort. Setting: A multidisciplinary, tertiary, academic PICU. Patients: Patients 18 years old or younger admitted between June 2012 and August 2012. Interventions: None. Measurements and Main Results: The maximum pediatric Sequential Organ Failure Assessment and Pediatric Logistic Organ Dysfunction-2 scores during admission were calculated. The Functional Status Scale score was obtained at baseline and at 6 months and 3 years following discharge. New morbidity was defined as a change in Functional Status Scale of 3 points or more from baseline. The performance of the organ dysfunction scores at discriminating new morbidity or mortality at 6 months and 3 years was measured using the area under the curve. Seventy-three patients met inclusion criteria. Fourteen percent had new morbidity or mortality at 6 months, and 23% at 3 years. The performance of the maximum pediatric Sequential Organ Failure Assessment and Pediatric Logistic Organ Dysfunction-2 scores at discriminating new morbidity or mortality was excellent at 6 months (areas under the curve, 0.90 and 0.88, respectively) and good at 3 years (0.82 and 0.79, respectively). Conclusions: Severity of organ dysfunction is associated with longitudinal change in functional status and with short-term and long-term development of new morbidity and mortality. Maximum pediatric Sequential Organ Failure Assessment and Pediatric Logistic Organ Dysfunction-2 scores during critical illness have good to excellent performance at predicting new morbidity or mortality up to 3 years after critical illness. These pediatric organ dysfunction scores may be helpful for prognostication of longitudinal functional outcomes in critically ill children. All authors conceptualized and designed the study, collected the data, analyzed the results, and drafted the article for important intellectual content. The authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: travis.matics@advocatehealth.com
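A minimal sketch of the discrimination analysis described above, computing an area under the ROC curve for a synthetic cohort of 73 patients; the scores and outcome assignments are invented, with only the cohort size and outcome rate echoing the abstract.

```python
# Minimal sketch with synthetic scores: discrimination of new morbidity/mortality
# by a maximum organ dysfunction score, summarized as area under the ROC curve.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 73                            # cohort size in the study
outcome = np.zeros(n, dtype=int)
outcome[:10] = 1                  # ~14% with new morbidity or mortality at 6 months

# Hypothetical maximum pSOFA scores, higher on average in those with the outcome
score = np.where(outcome == 1, rng.normal(9.0, 2.0, n), rng.normal(5.0, 2.0, n))

print(f"AUC = {roc_auc_score(outcome, score):.2f}")  # the study reported 0.90 at 6 months
```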

The Inadequate Oxygen Delivery Index and Low Cardiac Output Syndrome Score as Predictors of Adverse Events Associated With Low Cardiac Output Syndrome Early After Cardiac Bypass
Objectives: To evaluate the effectiveness of two scoring systems, the inadequate oxygen delivery index (a risk analytics algorithm; Etiometry, Boston, MA) and the Low Cardiac Output Syndrome Score, in predicting adverse events recognized as indicative of low cardiac output syndrome within 72 hours of surgery. Design: A retrospective, observational, pair-matched study. Setting: Tertiary pediatric cardiac ICU. Patients: Children undergoing cardiac bypass for congenital heart defects. Cases experienced an adverse event linked to low cardiac output syndrome in the 72 hours following surgery (extracorporeal membrane oxygenation, renal replacement therapy, cardiopulmonary resuscitation, or necrotizing enterocolitis) and were each matched, on criteria of procedure, diagnosis, and age, with a control patient who experienced no such event. Interventions: None. Measurements and Main Results: Of a total of 536 bypass operations in the study period, 38 patients experienced one of the defined events. Twenty-eight cases were included in the study after excluding patients who suffered an event after 72 hours or who had insufficient data. Clinical and laboratory data were collected to derive scores for the first 12 hours after surgery. The inadequate oxygen delivery index was calculated by Etiometry using vital signs and laboratory data. A modified Low Cardiac Output Syndrome Score was calculated from clinical and therapeutic markers. The mean inadequate oxygen delivery index and the mean modified Low Cardiac Output Syndrome Score were compared within each matched pair using the Wilcoxon signed-rank test. The inadequate oxygen delivery index correctly differentiated adverse events in 13 of 28 matched pairs, with no evidence that inadequate oxygen delivery was higher in cases (p = 0.71). The modified Low Cardiac Output Syndrome Score correctly differentiated adverse events in 23 of 28 matched pairs, with strong evidence of a raised score in low cardiac output syndrome cases (p < 0.01). Conclusions: Although the inadequate oxygen delivery index is a Food and Drug Administration-approved indicator of risk for low mixed venous oxygen saturation, early postoperative average values were not linked with medium-term adverse events. The indicators included in the modified Low Cardiac Output Syndrome Score had a much stronger association with the specified adverse events. This work was undertaken at Great Ormond Street Hospital/UCL Institute of Child Health, which received a proportion of funding from the Department of Health's National Institute for Health Research Biomedical Research Centre funding scheme. Drs. Ray's and Peters' institutions received funding from Great Ormond Street Hospital Children's Charity (GOSHCC). Dr. Peters received funding from Faron Pharmaceuticals (advisory board) and Therakind. Drs. Peters and Brown received support for article research from GOSHCC. Dr. Brown received other support from a GOSHCC PICU infrastructure grant supporting Libby Rogers. The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: samiran.ray@gosh.nhs.uk
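The paired design described above lends itself to a compact illustration: a Wilcoxon signed-rank test across the 28 matched pairs. The sketch below uses synthetic scores, not the study's data.

```python
# Minimal sketch with synthetic scores: paired comparison of a score between each
# event case and its matched control using the Wilcoxon signed-rank test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_pairs = 28
cases = rng.normal(6.0, 2.0, n_pairs)     # hypothetical mean mLCOS, event cases
controls = rng.normal(4.0, 2.0, n_pairs)  # hypothetical mean mLCOS, matched controls

w_stat, p_value = stats.wilcoxon(cases, controls)
print(f"W = {w_stat:.0f}, p = {p_value:.4f}")

# Pairs in which the case scored higher than its matched control
print(f"differentiated {np.sum(cases > controls)} of {n_pairs} pairs")
```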

Decision-Making About Intracranial Pressure Monitor Placement in Children With Traumatic Brain Injury
Objectives: Little is known about how clinicians make the complex decision of whether to place an intracranial pressure monitor in children with traumatic brain injury. The objective of this study was to identify the decisional needs of multidisciplinary clinician stakeholders. Design: Semi-structured qualitative interviews with clinicians who regularly care for children with traumatic brain injury. Setting: One U.S. level I pediatric trauma center. Subjects: Twenty-eight clinicians, including 17 ICU nurses, advanced practice providers, and physicians, and 11 pediatric surgeons and neurosurgeons, interviewed between August 2017 and February 2018. Interventions: None. Measurements and Main Results: Participants had a mean age of 43 years (range, 30–66 yr) and a mean of 10 years of experience (range, 0–30 yr); 46% (13/28) were female, and 96% (27/28) were white. A novel conceptual model emerged that related the difficulty of the decision about intracranial pressure monitor placement (y-axis) to the estimated outcome of the patient (x-axis). This model had a bimodal shape, with the most difficult decisions occurring for patients who 1) had a good opportunity for recovery but whose neurologic examination had not yet normalized, or 2) had a low but uncertain likelihood of neurologically functional recovery. Emergent themes included gaps in the medical knowledge and information available for decision-making, differences in perspective between clinical specialties, and the ethical implications of decision-making about intracranial pressure monitoring. Experienced clinicians described less difficulty with decision-making overall. Conclusions: Children with severe traumatic brain injury near perceived transition points along a spectrum of potential for recovery present challenges for decision-making about intracranial pressure monitor placement. Clinician experience and specialty discipline further influence decision-making. These findings will contribute to the design of a multidisciplinary clinical decision support tool for intracranial pressure monitor placement in children with traumatic brain injury. Dr. Bennett's institution received funding from the National Institutes of Health (NIH) Eunice Kennedy Shriver National Institute of Child Health and Human Development and the NIH/National Center for Advancing Translational Sciences. Drs. Bennett's and Rutebemberwa's institutions received funding from the Mindsource Brain Injury Network of the Colorado Department of Human Services. Ms. Marsh's and Dr. Maertens's institutions received funding from the Colorado Department of Human Services. Dr. Hankinson's institution received funding from the Colorado Traumatic Brain Injury Trust Fund. The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: tell.bennett@ucdenver.edu

Alexandros Sfakianakis
Anapafseos 5 . Agios Nikolaos
Crete.Greece.72100
2841026182
6948891480


Biological Theory
Toward a Macroevolutionary Theory of Human Evolution: The Social Protocell

Abstract

Despite remarkable empirical and methodological advances, our theoretical understanding of the evolutionary processes that made us human remains fragmented and contentious. Here, we make the radical proposition that the cultural communities within which Homo emerged may be understood as a novel, exotic form of organism. The argument begins from a deep congruence between robust features of Pan community life cycles and protocell models of the origins of life. We argue that if a cultural tradition, meeting certain requirements, arises in the context of such a "social protocell," the outcome will be an evolutionary transition in individuality whereby traditions and hominins coalesce into a macroscopic bio-socio-technical system, with an organismal organization that is culturally inherited through irreversible fission events at the community level. We refer to the resulting hypothetical evolutionary individual as a "sociont." The social protocell provides a preadapted source of alignment of fitness interests that addresses a number of open questions about the origins of shared adaptive cultural organization and of the derived (and highly unusual) genetic adaptations that support it. In addition, social cooperation between hominins no longer holds exclusive focus, since cooperation among traditions becomes salient in this model; this opens novel avenues for explanation. We go on to hypothesize that the fate of the hominin in such a setting would be mutualistic coadaptation into a part-whole relation with the sociont, and we propose that the unusual suite of derived features in Homo is consistent with this hypothesis.



Mating Markets: A Naturally Selected Sex Allocation Theory of Sexual Selection

Abstract

This article rests on three premises. (1) There are commonly ecologically oriented, naturally selected, specialized differences in frequency and/or quality between the sexes, as well as sexually selected differences. (2) Sex, in the sense of coming together and going apart (syngamy and meiosis in haploids) or going apart and coming together (meiosis and syngamy in diploids), is trade in these naturally selected differences; i.e., there is a mating market in sexual species. (3) While such trade is beneficial to the population as a whole, sexual competition and selection are conflict over the profits of that trade and can be detrimental to the population as a whole. These premises yield a naturally selected sex allocation theory of the possible directions and forms of sexual selection, i.e., one that includes costs as well as frequencies. The theory can explain conventional sex roles, sex-role reversal, inter- as well as intrasexual selection, and passive as well as active choice; any of these alternatives may be over mates, over gametes, or both. A hypothetical example based on density dependence relative to resources is provided, suggesting that centrioles may be a cytoplasmic resource in males of multicellular animals, the target of active choice and the mechanism of manipulation in passive female choice. As a whole, the approach yields a truly general theory of sexual selection, provides an alternative to the mechanism underlying Fisher's principle, and gives a theoretical explanation for Mayr's biological species definition.



Social-Ecological Theory of Maximization: Basic Concepts and Two Initial Models

Abstract

Considerable effort has been dedicated to understanding social-ecological systems, an important focus of ethnobiological studies. In particular, ethnobiological investigations over the last 30 years have found evidence for, and tested hypotheses about, the interactions between human groups and their environments, generating the need to formulate a theory of such systems. In this article, we propose the social-ecological theory of maximization to explain the construction and functioning of these systems over time, encompassing hypotheses and evidence from previous ethnobiological studies. In proposing the theory, we present definitions and two conceptual models: an environmental maximization model and a redundancy generation model. The first model seeks to address the selection of biota and its use by human populations. The second emphasizes how the system organizes itself from the elements that have been incorporated into it. Furthermore, we provide a theoretical scenario of plant selection and use from an evolutionary perspective, one that explicitly integrates the phylogenetic relationships of plants (or other living resources) and human beings.



Why Is There No Successful Whole Brain Simulation (Yet)?

Abstract

With the advent of powerful parallel computers, efforts have commenced to simulate complete mammalian brains. So far, however, none of these efforts has produced outcomes that come close to explaining even the behavioral complexities of animals. In this article, we suggest four challenges that underlie this shortcoming. First, we discuss the connection between hypothesis testing and simulations; typically, efforts to simulate complete mammalian brains lack a clear hypothesis. Second, we treat complications related to the lack of parameter constraints for large-scale simulations. To demonstrate the severity of this issue, we review work on two small-scale neural systems, the crustacean stomatogastric ganglion and the Caenorhabditis elegans nervous system. Both of these small nervous systems are very thoroughly, but not completely, understood, mainly owing to issues with variable and plastic parameters. Third, we discuss the hierarchical structure of neural systems as a principled obstacle to whole-brain simulations. Different organizational levels imply qualitative differences not only in structure but also in the choice and appropriateness of investigative technique and perspective. The challenge of reconciling different levels also undergirds the challenges of simulation and hypothesis testing, as modeling a system is not the same thing as simulating it. Fourth, we point out that animal brains are information-processing systems tailored very specifically to the ecological niches in which the respective animals live.



The Role of Assessor Teaching in Human Culture

Abstract

According to dual inheritance theory, cultural learning in our species is a biased and highly efficient process of transmitting cultural traits. Here we define a model of cultural learning in which social learning is integrated as a complementary element that facilitates the discovery of a specific behavior by an apprentice, rather than as a mechanism that works in opposition to individual learning. In that context, we propose that the emergence of the ability to approve or disapprove of offspring behavior, orienting their learning (a process we call assessor teaching), transformed primate social learning into a cultural transmission system like the one that characterizes our species. Assessor teaching facilitates the replication and/or reconstruction of behaviors that are difficult to imitate and helps determine which behaviors should be imitated. We also explore the way in which assessor teaching has conditioned the evolution of our ability to develop cultures in the hominin line, converting us into individuals equipped with what we call a suadens psychology. Our main point is to defend the hypothesis that suadens psychology determines the stability and dynamics that affect the trajectories of many cultural characters. We compare our proposal with other theories of cultural evolution, specifically dual inheritance theory and cultural attraction theory.



Multicellular Individuality: The Case of Bacteria

Abstract

Recent attention to complex group-level behavior among bacteria has led some to conceive of multicellular clusters of bacteria as individuals. In this article, I assess these recent claims by first drawing a distinction between two concepts of individuality: physiological and evolutionary. I then survey cases that are representative of three different modes of growth: myxobacteria (surface-attached agglomerative growth), Bacillus subtilis (agglomerative growth not attached to a surface), and cyanobacteria (filamentous growth). A closer look at these cases indicates that multicellular individuality among bacteria is remarkably complex. Physiologically, none of the three kinds of multicellular cluster forms a physiological individual. But matters are different when it comes to evolutionary individuality: although multicellular clusters that grow by agglomeration are not highly individuated, filament-forming groups achieve a relatively high degree of individuality. I also suggest that debates about bacterial multicellular individuality may have been obscured by a failure to see that selection on highly individuated groups is by no means the only mechanism able to bring about the complex group-level behaviors that have led some to view bacteria as multicellular individuals.



Are Verbal-Narrative Models More Suitable than Mathematical Models as Information Processing Devices for Some Behavioral (Biosemiotic) Problems?

Abstract

This article argues that many, if not most, descriptions and sequencings of behavior are in essence interpretations of signs, and that researchers evaluate them as sequences of signs. Thus narrative analysis, as developed by Barthes and others, seems better suited to behavioral/biosemiotic studies than mathematical modeling, and is very similar to some classic ethological methods. As our brain interprets behaviors as signs and attributes meaning to them, narrative analysis seems more suitable than mathematical modeling for describing and studying behavior. Mathematical models are on many occasions extremely reductionist and simplifying, because computational and/or numerical-representation limitations lead to errors and to straitjacketed interpretations of reality. Since actual animals (and our analyses of their behavior) are not as optimal in real life as our mathematical models assume, it is proposed here that we should consider logical/verbal models and semantic interpretations as equally well or even better suited for behavioral analysis, and refrain from enforcing mathematical modeling as the only (right) way to study and understand biological problems, especially those of a behavioral and biosemiotic nature.



Criteria for Holobionts from Community Genetics

Abstract

We address the controversy in the literature concerning the definition of holobionts and the apparent constraints on their evolution using concepts from community population genetics. The genetics of holobionts, consisting of a host and diverse microbial symbionts, has been neglected in many discussions of the topic, and, where it has been discussed, a gene-centric, species-centric view, based in genomic conflict, has been predominant. Because coevolution takes place between traits or genes in two or more species and not, strictly speaking, between species, it may affect some traits but not others in either host or symbiont. Moreover, when interacting species pairs are embedded in a larger community, indirect ecological effects can alter the expected pairwise dynamics. Mode of symbiont transmission and the degree of host inbreeding both affect the extent of microbial mixing across host lineages and thereby the degree to which selection on one trait of either partner affects other aspects of a holobiont phenotype. We discuss several potential defining criteria for holobionts using community genetics and population genetics models, suggesting their application and limitations. Using community genetics models, we show how conflict between genomes can be self-limiting, while cooperation and mutualism tend to be self-accelerating. It is likely that this bias in the evolutionary dynamics of interaction between hosts and symbionts is an important feature of holobionts. This bias in the evolutionary dynamic could contribute to explaining the absence of cheaters from natural mutualisms, although cheaters are predicted by gene-centered conflict theory to cause the evolutionary instability of mutualisms. Additionally, it may help explain the more frequent origin of mutualisms from parasitic than from free-living systems, an evolutionary trajectory opposite to that predicted by genome conflict theory.



Reflections on a Biometrics of Organismal Form

Abstract

Back in 1987, the physicist and theoretical biologist Walter Elsasser reviewed a range of philosophical issues at the foundation of organismal biology above the molecular level. Two of these are particularly relevant to quantifications of form: the concept of ordered heterogeneity and the principle of nonstructural memory, the truism that the forms of organisms typically and substantially resemble the forms of their ancestors. This essay attempts to weave Elsasser's principles together with morphometrics (the biometrics of organismal form) for one prominent data type, the representation of animal forms by the positions of landmark points. I introduce a new spectrum of pattern descriptors of the shape variation of landmark configurations, ranging from the most homogeneous (uniform shears), through growth gradients and focal features, to a model of totally disordered heterogeneity incompatible with the rhetoric of functional explanation. An investigation may end up reporting its findings by one of these descriptors or by several. These descriptors all derive from one shared diagrammatic device: a log-log plot of sample variance against one standard formalism of today's geometric morphometrics, the bending energies of the principal warps that represent all the scales of variability around the sample average shape. The new descriptors, which I demonstrate on a variety of contemporary morphometric examples, may help build the bridge we urgently need between the arithmetic of today's burgeoning image-based data resources and the rhetoric of biological explanations aligned with the principles of Elsasser and of an even earlier philosopher of biology, the Viennese visionary Hans Przibram.
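Since all of these descriptors derive from a single log-log diagram, a short Python sketch may help fix ideas. The bending energies and variances below are synthetic placeholders rather than morphometric data, and the slope reading in the comments follows one common interpretation of this descriptor spectrum.

```python
# Minimal sketch of the diagrammatic device described above: regress log sample
# variance on log bending energy across principal warps. All values are synthetic;
# real bending energies come from the thin-plate spline of the mean landmark shape.
import numpy as np

rng = np.random.default_rng(4)
bending_energy = np.sort(rng.uniform(0.1, 50.0, 30))           # per-warp bending energies
variance = 2.0 / bending_energy * rng.lognormal(0.0, 0.3, 30)  # per-warp sample variances

slope, intercept = np.polyfit(np.log(bending_energy), np.log(variance), 1)
print(f"log-log slope = {slope:.2f}")
# Reading the slope (on one common interpretation): near 0, variance independent of
# scale (disordered heterogeneity); strongly negative, variance concentrated in the
# large-scale, low-energy warps, as in growth gradients.
```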



How Did Language Evolve? Some Reflections on the Language Parasite Debate

Abstract

The language parasite approach (LPA) refers to the view that language, like a parasite, is an adaptive system that evolves to fit its human hosts. Supported by recent computer simulations, LPA proponents claim that humans can use languages with ease not because we have evolved genetically specified linguistic instincts, but because languages have adapted to the preexisting brain structures of humans. This article examines the LPA and argues that, while it has advantages over its rival, Chomskyan nativism, there are additional factors limiting linguistic variety that its insightful proponents have yet to identify. The article suggests abandoning the search for a single decisive cause of language capacity and argues that language evolution is more likely to arise from the balancing of multiple engineering constraints.



Alexandros Sfakianakis
Anapafseos 5 . Agios Nikolaos
Crete.Greece.72100
2841026182
6948891480

