Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant of the infiltration of ambient air pollution indoors. Population-based human exposure models that use probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, using housing characteristics and meteorological data with adjustment for window-opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX) using study-specific inputs. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AERs based on region-specific inputs were compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window-opening probabilities. The algorithm produced more spatially and temporally variable AERs than literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure.
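The LBL infiltration model referenced above combines stack- and wind-driven airflow through the building's leakage area into a single air exchange rate. A minimal sketch of that relationship follows; the function name, coefficient values, and inputs are illustrative assumptions, not the study's calibrated parameters:

```python
import math

def lbl_aer(leakage_area_m2, house_volume_m3, delta_t_k, wind_speed_ms,
            stack_coeff=0.000145, wind_coeff=0.000104):
    """Sketch of an LBL-style infiltration estimate: stack and wind
    driving pressures combined in quadrature to give an airflow, then
    converted to air changes per hour. Coefficient values depend on
    house geometry and wind shielding; the numbers here are placeholders."""
    # infiltration airflow (m^3/s) through the effective leakage area,
    # driven by the indoor-outdoor temperature difference and wind speed
    q = leakage_area_m2 * math.sqrt(
        stack_coeff * abs(delta_t_k) + wind_coeff * wind_speed_ms ** 2)
    return q / house_volume_m3 * 3600.0  # air changes per hour (h^-1)

# illustrative inputs: 0.05 m^2 leakage area, 400 m^3 house,
# 10 K indoor-outdoor temperature difference, 3 m/s wind
aer = lbl_aer(leakage_area_m2=0.05, house_volume_m3=400.0,
              delta_t_k=10.0, wind_speed_ms=3.0)
```

A larger temperature difference or higher wind speed increases the predicted AER, which is the qualitative behavior the probabilistic algorithm exploits when sampling meteorology.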
An overview of the ENEA activities in the field of coupled codes NPP simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisi, C.; Negrenti, E.; Sepielli, M.
2012-07-01
In the framework of the nuclear research activities in the fields of safety, training and education, ENEA (the Italian National Agency for New Technologies, Energy and Sustainable Economic Development) is in charge of defining and pursuing all the necessary steps for the development of an NPP engineering simulator at the 'Casaccia' Research Center near Rome. A summary of the activities in the field of nuclear power plant simulation by coupled codes is presented here, together with the long-term strategy for the engineering simulator development. Specifically, results from participation in international benchmarking activities such as the OECD/NEA 'Kalinin-3' benchmark and the 'AER-DYN-002' benchmark, together with simulations of relevant events such as the Fukushima accident, are reported. The ultimate goal of these activities, performed using state-of-the-art technology, is the re-establishment of top-level competencies in the NPP simulation field in order to facilitate the development of Enhanced Engineering Simulators and to upgrade competencies for supporting national energy strategy decisions, the national nuclear safety authority, and R&D activities on NPP designs. (authors)
Different regulation of limb development by p63 transcript variants.
Kawata, Manabu; Taniguchi, Yuki; Mori, Daisuke; Yano, Fumiko; Ohba, Shinsuke; Chung, Ung-Il; Shimogori, Tomomi; Mills, Alea A; Tanaka, Sakae; Saito, Taku
2017-01-01
The apical ectodermal ridge (AER), located at the distal end of each limb bud, is a key signaling center that controls outgrowth and patterning of the proximal-distal axis of the limb through secretion of various molecules. Fibroblast growth factors (FGFs), particularly Fgf8 and Fgf4, are representative molecules produced by AER cells and are essential for maintaining the AER and cell proliferation in the underlying mesenchyme, whereas the Jag2-Notch pathway negatively regulates the AER and limb development. p63, a transcription factor of the p53 family, is expressed in the AER and is indispensable for limb formation. However, the underlying mechanisms and the specific roles of p63 variants are unknown. Here, we quantified the expression of p63 variants in mouse limbs from embryonic day (E) 10.5 to E12.5, and found that ΔNp63γ was strongly expressed in limbs at all stages, while TAp63γ expression rapidly increased at later stages. Fluorescence-activated cell sorting analysis of limb bud cells from reporter mouse embryos at E11.5 revealed that all variants were abundantly expressed in AER cells, whereas their expression was very low in mesenchymal cells. We then generated AER-specific p63 knockout mice by mating mice carrying a null and a floxed allele of p63 with Msx2-Cre mice (Msx2-Cre;p63Δ/fl). Msx2-Cre;p63Δ/fl neonates showed limb malformation that was more obvious in distal elements. Expression of various AER-related genes was decreased in Msx2-Cre;p63Δ/fl limb buds and in embryoid bodies formed by p63-knockdown induced pluripotent stem cells. Promoter analyses and chromatin immunoprecipitation assays identified Fgf8 and Fgf4 as transcriptional targets of ΔNp63γ, and Jag2 as a target of TAp63γ. Furthermore, TAp63γ overexpression exacerbated the phenotype of Msx2-Cre;p63Δ/fl mice. These data indicate that ΔNp63 and TAp63 control limb development through transcriptional regulation of different target molecules with different roles in the AER.
Our findings contribute to further understanding of the molecular network of limb development.
Amusia and protolanguage impairments in schizophrenia
Kantrowitz, J. T.; Scaramello, N.; Jakubovitz, A.; Lehrfeld, J. M.; Laukka, P.; Elfenbein, H. A.; Silipo, G.; Javitt, D. C.
2017-01-01
Background: Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed. Method: Musical ability was assessed in 31 schizophrenia/schizoaffective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition. Results: Highly significant deficits were seen between patients and controls across auditory tasks (p < 0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition. Discussion: This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia. PMID:25066878
Development of the Test Of Astronomy STandards (TOAST) Assessment Instrument
NASA Astrophysics Data System (ADS)
Slater, Timothy F.; Slater, S. J.
2008-05-01
Considerable effort in the astronomy education research (AER) community over the past several years has focused on developing assessment tools in the form of multiple-choice conceptual diagnostics and content knowledge surveys. This has been critically important in advancing the AER discipline, allowing researchers to establish the initial knowledge state of students as well as to attempt to measure some of the impacts of innovative instructional interventions. Unfortunately, few of the existing instruments were constructed upon a solid list of clearly articulated and widely agreed-upon learning objectives. This was not an oversight, but rather a result of the relative youth of AER as a discipline. Now that several important science education reform documents exist and are generally accepted by the AER community, we are in a position to develop, validate, and disseminate a new assessment instrument tightly aligned to the consensus learning goals stated by the American Astronomical Society Chair's Conference on ASTRO 101, the American Association for the Advancement of Science's Project 2061 Benchmarks, and the National Research Council's National Science Education Standards. In response, researchers from the Cognition in Astronomy, Physics and Earth sciences Research (CAPER) Team at the University of Wyoming's Science & Math Teaching Center (UWYO SMTC) have designed a criterion-referenced assessment tool, called the Test Of Astronomy STandards (TOAST). Through iterative development, this instrument has achieved a high degree of reliability and validity for instructors and researchers needing information on students' initial knowledge state at the beginning of a course, and can be used, in aggregate, to help measure the impact of course-length instructional strategies for courses with learning goals tightly aligned to the consensus goals of our community.
Breen, Michael S; Burke, Janet M; Batterman, Stuart A; Vette, Alan F; Godwin, Christopher; Croghan, Carry W; Schultz, Bradley D; Long, Thomas C
2014-11-07
Air pollution health studies often use outdoor concentrations as exposure surrogates. Failure to account for variability of residential infiltration of outdoor pollutants can induce exposure errors and lead to bias and incorrect confidence intervals in health effect estimates. The residential air exchange rate (AER), which is the rate of exchange of indoor air with outdoor air, is an important determinant of house-to-house (spatial) and temporal variations in air pollution infiltration. Our goal was to evaluate and apply mechanistic models to predict AERs for 213 homes in the Near-Road Exposures and Effects of Urban Air Pollutants Study (NEXUS), a cohort study of traffic-related air pollution exposures and respiratory effects in asthmatic children living near major roads in Detroit, Michigan. We used a previously developed model (LBL), which predicts AER from meteorology and questionnaire data on building characteristics related to air leakage, and an extended version of this model (LBLX) that includes natural ventilation from open windows. As a critical and novel aspect of our AER modeling approach, we performed a cross validation, which included both parameter estimation (i.e., model calibration) and model evaluation, based on daily AER measurements from a subset of 24 study homes on five consecutive days during two seasons. The measured AER varied between 0.09 and 3.48 h⁻¹ with a median of 0.64 h⁻¹. For the individual model-predicted and measured AERs, the median absolute difference was 29% (0.19 h⁻¹) for both the LBL and LBLX models. The LBL and LBLX models predicted 59% and 61% of the variance in the AER, respectively. Daily AER predictions for all 213 homes during the three-year study (2010-2012) showed considerable house-to-house variation from building leakage differences, and temporal variation from outdoor temperature and wind speed fluctuations.
Using this novel approach, NEXUS will be one of the first epidemiology studies to apply calibrated, home-specific AER models and to incorporate the spatial and temporal variations of AER for over 200 individual homes across multiple years into an exposure assessment, in support of improved risk estimates.
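The model-evaluation statistic quoted above (a median absolute difference between paired predicted and measured AERs) can be sketched as follows; the data values below are invented for illustration, not the study's measurements:

```python
import statistics

def median_abs_pct_diff(measured, predicted):
    """Median absolute difference between paired model predictions and
    measurements, expressed as a percentage of each measured value."""
    pct = [abs(p - m) / m * 100.0 for m, p in zip(measured, predicted)]
    return statistics.median(pct)

# illustrative daily AER pairs (h^-1), one per home-day
measured = [0.64, 0.32, 1.10, 0.45]
predicted = [0.50, 0.40, 0.90, 0.47]
err = median_abs_pct_diff(measured, predicted)
```

Using a median rather than a mean keeps the summary robust to the occasional home-day where the model misses badly, which matters with heavy-tailed AER distributions.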
Tabatabai, Reza; Baptista, Sheryl; Tiozzo, Caterina; Carraro, Gianni; Wheeler, Matthew; Barreto, Guillermo; Braun, Thomas; Li, Xiaokun; Hajihosseini, Mohammad K.; Bellusci, Saverio
2013-01-01
The vertebrate limbs develop through a coordinated series of inductive, growth and patterning events. Fibroblast Growth Factor receptor 2b (FGFR2b) signaling controls the induction of the Apical Ectodermal Ridge (AER), but its putative roles in limb outgrowth and patterning, as well as in AER morphology and cell behavior, have remained unclear. We have investigated these roles through graded and reversible expression of soluble dominant-negative FGFR2b molecules at various times during mouse limb development, using a doxycycline/transactivator/tet(O)-responsive system. Transient attenuation (≤24 hours) of FGFR2b-ligand signaling at E8.5, prior to limb bud induction, leads mostly to the loss or truncation of proximal skeletal elements with less severe impact on distal elements. Attenuation from E9.5 onwards, however, has an irreversible effect on the stability of the AER, resulting in a progressive loss of distal limb skeletal elements. The primary consequences of FGFR2b-ligand attenuation are a transient loss of cell adhesion and down-regulation of p63, β1-integrin and E-cadherin, and a permanent loss of cellular β-catenin organization and WNT signaling within the AER. Combined, these effects lead to the progressive transformation of the AER cells from pluristratified to squamous epithelial-like cells within 24 hours of doxycycline administration. These findings show that FGFR2b-ligand signaling has critical stage-specific roles in maintaining the AER during limb development. PMID:24167544
2007-01-01
In this Evaluation, we examine whether the Steris Reliance EPS, a flexible endoscope reprocessing system recently introduced to the U.S. market, offers meaningful advantages over "traditional" automated endoscope reprocessors (AERs). Most AERs on the market function similarly to one another. The Reliance EPS, however, includes some unique features that distinguish it from other AERs. For example, it incorporates a "boot" technology for loading endoscopes into the unit without requiring many endoscope-specific connectors, and it dispenses the germicide used to disinfect the endoscopes from a single-use container. This Evaluation looks at whether the unique features of this model make it a better choice than traditional AERs for reprocessing flexible endoscopes. Our study focuses on whether the Reliance EPS is any more likely to be used correctly (thereby reducing the likelihood that an endoscope will be reprocessed inadequately) and whether the unit possesses any design flaws that could lead to reprocessing failures. We detail the unit's advantages and disadvantages compared with other AERs, and we describe what current users have to say. Our conclusions will help facilities determine whether to select the Reliance EPS.
Swab culture monitoring of automated endoscope reprocessors after high-level disinfection
Lu, Lung-Sheng; Wu, Keng-Liang; Chiu, Yi-Chun; Lin, Ming-Tzung; Hu, Tsung-Hui; Chiu, King-Wah
2012-01-01
AIM: To conduct a bacterial culture study for monitoring decontamination of automated endoscope reprocessors (AERs) after high-level disinfection (HLD). METHODS: From February 2006 to January 2011, the authors conducted randomized consecutive sampling each month for 7 AERs. A total of 420 swab cultures were collected, including 300 cultures from 5 gastroscope AERs and 120 cultures from 2 colonoscope AERs. Swab cultures were obtained from the residual water in the AERs after a full reprocessing cycle. Samples were cultured to test for aerobic bacteria, anaerobic bacteria, and Mycobacterium tuberculosis. RESULTS: The positive culture rate was 2.0% (6/300) for gastroscope AERs and 0.8% (1/120) for colonoscope AERs. All the positive cultures, including 6 from gastroscope and 1 from colonoscope AERs, showed monofloral colonization. Of the positive gastroscope AER samples, 50% (3/6) were colonized by aerobic bacteria and 50% (3/6) by fungi. CONCLUSION: A full reprocessing cycle of an AER with HLD is adequate for disinfection of the machine. Swab culture is a useful method for monitoring AER decontamination after each reprocessing cycle. Fungal contamination of AERs after reprocessing should also be kept in mind. PMID:22529696
Ellis, Shmuel; Ganzach, Yoav; Castle, Evan; Sekely, Gal
2010-01-01
In the current study, we compared the effect of personal and filmed after-event reviews (AERs) on performance, and the role that self-efficacy plays in moderating and mediating the effects of these 2 types of AER on performance. The setting was one in which 49 men and 63 women participated twice in a simulated business decision-making task. In between, participants received a personal AER, watched a filmed AER, or had a break. We found that individuals who participated in an AER, whether personal or filmed, improved their performance significantly more than those who did not participate in a review. Furthermore, there was no significant difference in performance improvement between the personal and the filmed AER, which suggests that the 2 are quite similar in their effect. We also found that the differences in performance improvement between the personal AER group and the control group were somewhat greater than those found in the filmed AER group. Self-efficacy mediated the effect of AER on performance improvement in both types of AER. In addition, the effect of AER on performance improvement was moderated by initial self-efficacy in the personal but not in the filmed AER: the higher the initial self-efficacy, the more effective the personal AER.
A deep azygoesophageal recess may increase the risk of secondary spontaneous pneumothorax.
Takahashi, Tsuyoshi; Kawashima, Mitsuaki; Kuwano, Hideki; Nagayama, Kazuhiro; Nitadori, Jyunichi; Anraku, Masaki; Sato, Masaaki; Murakawa, Tomohiro; Nakajima, Jun
2017-09-01
The azygoesophageal recess (AER) is known as a possible cause of bulla formation in patients with spontaneous pneumothorax. However, there has been little focus on the depth of the AER. We evaluated the relationship between the depth of the AER and pneumothorax development in a retrospective study of 80 spontaneous pneumothorax patients who underwent surgery at our institution, assessing the depth of the AER on preoperative computed tomography scans. Ruptured bullae at the AER were found in 12 patients (52.2%) with secondary spontaneous pneumothorax (SSP) and 8 patients (14.0%) with primary spontaneous pneumothorax (PSP) (p < 0.001). Among patients with ruptured bullae at the AER, 10 SSP patients (83.3%) had a deep AER, while only 2 PSP patients (25%) had a deep AER (p = 0.015). A deep AER was more frequently associated with SSP than with PSP and may contribute to bulla formation and rupture in SSP patients.
Tobe, Russell H; Corcoran, Cheryl M; Breland, Melissa; MacKay-Brandt, Anna; Klim, Casimir; Colcombe, Stanley J; Leventhal, Bennett L; Javitt, Daniel C
2016-08-01
Impairment in social cognition, including emotion recognition, has been extensively studied in both Autism Spectrum Disorders (ASD) and Schizophrenia (SZ). However, the relative patterns of deficit between the disorders have been studied to a lesser degree. Here, we applied a social cognition battery incorporating both auditory (AER) and visual (VER) emotion recognition measures to a group of 19 high-functioning individuals with ASD relative to 92 individuals with SZ and 73 healthy adult control participants. We examined group differences and correlates of basic auditory processing and processing speed. Individuals with SZ were impaired in both AER and VER, while individuals with ASD were impaired in VER only. In contrast to SZ participants, those with ASD showed intact basic auditory function. Our finding of a dissociation between AER and VER deficits in ASD relative to SZ supports modality-specific theories of emotion recognition dysfunction. Future studies should focus on visual system-specific contributions to social cognitive impairment in ASD.
Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.
Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C
2015-11-04
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. 
Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia.
Vlaeminck, Siegfried E.; Terada, Akihiko; Smets, Barth F.; De Clippeleir, Haydée; Schaubroeck, Thomas; Bolca, Selin; Demeestere, Lien; Mast, Jan; Boon, Nico; Carballa, Marta; Verstraete, Willy
2010-01-01
Aerobic ammonium-oxidizing bacteria (AerAOB) and anoxic ammonium-oxidizing bacteria (AnAOB) cooperate in partial nitritation/anammox systems to remove ammonium from wastewater. In this process, large granular microbial aggregates enhance the performance, but little is known about granulation so far. In this study, three suspended-growth oxygen-limited autotrophic nitrification-denitrification (OLAND) reactors with different inoculation and operation (mixing and aeration) conditions, designated reactors A, B, and C, were used. The test objectives were (i) to quantify the AerAOB and AnAOB abundance and the activity balance for the different aggregate sizes and (ii) to relate aggregate morphology, size distribution, and architecture putatively to the inoculation and operation of the three reactors. A nitrite accumulation rate ratio (NARR) was defined as the net aerobic nitrite production rate divided by the anoxic nitrite consumption rate. The smallest reactor A, B, and C aggregates were nitrite sources (NARR, >1.7). Large reactor A and C aggregates were granules capable of autonomous nitrogen removal (NARR, 0.6 to 1.1) with internal AnAOB zones surrounded by an AerAOB rim. Around 50% of the autotrophic space in these granules consisted of AerAOB- and AnAOB-specific extracellular polymeric substances. Large reactor B aggregates were thin film-like nitrite sinks (NARR, <0.5) in which AnAOB were not shielded by an AerAOB layer. Voids and channels occupied 13 to 17% of the anoxic zone of AnAOB-rich aggregates (reactors B and C). The hypothesized granulation pathways include granule replication by division and budding and are driven by growth and/or decay based on species-specific physiology and by hydrodynamic shear and mixing. PMID:19948857
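The nitrite accumulation rate ratio (NARR) defined above is a simple rate ratio used to classify aggregates as nitrite sources, sinks, or autonomous nitrogen removers. A sketch using the threshold ranges quoted in the abstract; the function names and the handling of ratios falling between the quoted ranges are assumptions:

```python
def narr(aerobic_no2_production, anoxic_no2_consumption):
    """Nitrite accumulation rate ratio: net aerobic nitrite production
    divided by anoxic nitrite consumption (both in the same rate units)."""
    return aerobic_no2_production / anoxic_no2_consumption

def classify_aggregate(ratio):
    # thresholds follow the ranges quoted in the abstract;
    # ratios between the quoted bands are labeled "intermediate" here
    if ratio > 1.7:
        return "nitrite source"
    if ratio < 0.5:
        return "nitrite sink"
    if 0.6 <= ratio <= 1.1:
        return "autonomous nitrogen removal"
    return "intermediate"

# illustrative rates: aerobic production twice the anoxic consumption
label = classify_aggregate(narr(2.0, 1.0))
```

In this framing, the large reactor A and C granules (NARR 0.6 to 1.1) are the aggregates capable of complete, self-contained nitrogen removal.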
Scholze, Stefan; Schiefer, Stefan; Partzsch, Johannes; Hartmann, Stephan; Mayr, Christian Georg; Höppner, Sebastian; Eisenreich, Holger; Henker, Stephan; Vogginger, Bernhard; Schüffny, Rene
2011-01-01
State-of-the-art large-scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a wafer-scale neuromorphic system, based on application-specific neuromorphic communication ICs in a field-programmable gate array (FPGA)-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing the behavior of neuromorphic benchmarks. The specialized, dedicated address-event-representation communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the wafer-scale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a factor of 25–50 higher event transmission rate than other current neuromorphic communication infrastructures. PMID:22016720
Van Ryswyk, K; Wallace, L; Fugler, D; MacNeill, M; Héroux, M È; Gibson, M D; Guernsey, J R; Kindzierski, W; Wheeler, A J
2015-12-01
Residential air exchange rates (AERs) are vital in understanding the temporal and spatial drivers of indoor air quality (IAQ). Several methods to quantify AERs have been used in IAQ research, often with the assumption that the home is a single, well-mixed air zone. Since 2005, Health Canada has conducted IAQ studies across Canada in which AERs were measured using the perfluorocarbon tracer (PFT) gas method. Emitters and detectors of a single PFT gas were placed on the main floor to estimate a single-zone AER (AER(1z)). In three of these studies, a second set of emitters and detectors were deployed in the basement or second floor in approximately 10% of homes for a two-zone AER estimate (AER(2z)). In total, 287 daily pairs of AER(2z) and AER(1z) estimates were made from 35 homes across three cities. In 87% of the cases, AER(2z) was higher than AER(1z). Overall, the AER(1z) estimates underestimated AER(2z) by approximately 16% (IQR: 5-32%). This underestimate occurred in all cities and seasons and varied in magnitude seasonally, between homes, and daily, indicating that when measuring residential air exchange using a single PFT gas, the assumption of a single well-mixed air zone very likely results in an under prediction of the AER. The results of this study suggest that the long-standing assumption that a home represents a single well-mixed air zone may result in a substantial negative bias in air exchange estimates. Indoor air quality professionals should take this finding into consideration when developing study designs or making decisions related to the recommendation and installation of residential ventilation systems.
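The ~16% single-zone underestimate reported above is a per-pair percent difference between the two-zone and one-zone estimates, summarized over all home-days. A sketch with invented daily values (the data and the exact summary statistic are illustrative assumptions):

```python
import statistics

def single_zone_bias(aer_1z, aer_2z):
    """Percent by which each single-zone estimate falls below the
    corresponding two-zone estimate; negative values mean the
    single-zone estimate was higher."""
    return [(a2 - a1) / a2 * 100.0 for a1, a2 in zip(aer_1z, aer_2z)]

# illustrative daily AER pairs (h^-1) for one home
aer_1z = [0.50, 0.30, 0.80, 0.45]
aer_2z = [0.60, 0.40, 0.85, 0.44]
biases = single_zone_bias(aer_1z, aer_2z)
median_bias = statistics.median(biases)
```

A positive median bias reproduces the paper's qualitative finding: the main-floor-only tracer placement systematically misses air exchange occurring in the other zone.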
Darby, Stephen E; Leyland, Julian; Kummu, Matti; Räsänen, Timo A; Lauri, Hannu
2013-04-01
We evaluate links between climate and simulated river bank erosion for one of the world's largest rivers, the Mekong. We employ a process-based model to reconstruct multidecadal time series of bank erosion at study sites within the Mekong's two main hydrological response zones, defining a new parameter, accumulated excess runoff (AER), pertinent to bank erosion. We employ a hydrological model to isolate how snowmelt, tropical storms and monsoon precipitation each contribute to AER and thus modeled bank erosion. Our results show that melt (23.9% at the upstream study site, declining to 11.1% downstream) and tropical cyclones (17.5% and 26.4% at the upstream and downstream sites, respectively) both force significant fractions of bank erosion on the Mekong. We also show (i) small but significant declines in AER, and hence inferred bank erosion, during the 20th century, and (ii) that significant correlations exist between AER and the Indian Ocean Dipole (IOD) and El Niño Southern Oscillation (ENSO). Of these modes of climate variability, we find that IOD events exert a greater control on simulated bank erosion than ENSO events, but the influences of both ENSO and IOD when averaged over several decades are found to be relatively weak. However, importantly, relationships between ENSO, IOD, and AER and hence inferred river bank erosion are not time invariant. Specifically, we show that there is an intense and prolonged epoch of strong coherence between ENSO and AER from the early 1980s to the present, such that in recent decades derived Mekong River bank erosion has been more strongly affected by ENSO.
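An "accumulated excess runoff" style parameter sums only the flow above some threshold below which no bank erosion is assumed to occur. The sketch below illustrates that general idea; the threshold form, units, and values are assumptions for illustration, as the paper derives its own definition from the process-based erosion model:

```python
def accumulated_excess_runoff(daily_runoff, threshold):
    """Sum the runoff in excess of an erosion threshold: days at or
    below the threshold contribute nothing, days above it contribute
    their excess. Threshold and units are illustrative."""
    return sum(max(q - threshold, 0.0) for q in daily_runoff)

# illustrative daily runoff series (e.g. mm/day over part of a season)
runoff = [2.0, 5.5, 9.0, 3.0, 7.5]
aer_param = accumulated_excess_runoff(runoff, threshold=4.0)
```

Because only above-threshold days count, such a parameter is naturally sensitive to the storm- and melt-driven flow peaks whose contributions the study quantifies.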
Samanta, Dipanjan; Widom, Joanne; Borbat, Peter P; Freed, Jack H; Crane, Brian R
2016-12-09
Flagellated bacteria modulate their swimming behavior in response to environmental cues through the CheA/CheY signaling pathway. In addition to responding to external chemicals, bacteria also monitor internal conditions that reflect the availability of oxygen, light, and reducing equivalents, in a process termed "energy taxis." In Escherichia coli, the transmembrane receptor Aer is the primary energy sensor for motility. Genetic and physiological data suggest that Aer monitors the electron transport chain through the redox state of its FAD cofactor. However, direct biochemical data correlating FAD redox chemistry with CheA kinase activity have been lacking. Here, we test this hypothesis via functional reconstitution of Aer into nanodiscs. As purified, Aer contains fully oxidized FAD, which can be chemically reduced to the anionic semiquinone (ASQ). Oxidized Aer activates CheA, whereas ASQ Aer reversibly inhibits CheA. Under these conditions, Aer cannot be further reduced to the hydroquinone, in contrast to the proposed Aer signaling model. Pulse ESR spectroscopy of the ASQ corroborates a potential mechanism for signaling in that the resulting distance between the two flavin-binding PAS (Per-Arnt-Sim) domains implies that they tightly sandwich the signal-transducing HAMP domain in the kinase-off state. Aer appears to follow oligomerization patterns observed for related chemoreceptors, as higher loading of Aer dimers into nanodiscs increases kinase activity. These results provide a new methodological platform to study Aer function along with new mechanistic details into its signal transduction process. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Measuring and modeling air exchange rates inside taxi cabs in Los Angeles, California
NASA Astrophysics Data System (ADS)
Shu, Shi; Yu, Nu; Wang, Yueyan; Zhu, Yifang
2015-12-01
Air exchange rates (AERs) have a direct impact on traffic-related air pollutant (TRAP) levels inside vehicles. Taxi drivers are occupationally exposed to TRAP on a daily basis, yet there is limited measurement of AERs in taxi cabs. To fill this gap, AERs were quantified in 22 representative Los Angeles taxi cabs including 10 Prius, 5 Crown Victoria, 3 Camry, 3 Caravan, and 1 Uplander under realistic driving (RD) conditions. To further study the impacts of window position and ventilation settings on taxi AERs, additional tests were conducted on 14 taxis with windows closed (WC) and on the other 8 taxis with not only windows closed but also medium fan speed (WC-MFS) under outdoor air mode. Under RD conditions, the AERs in all 22 cabs had a mean of 63 h-1 with a median of 38 h-1. Similar AERs were observed under WC condition when compared to those measured under RD condition. Under WC-MFS condition, AERs were significantly increased in all taxi cabs, when compared with those measured under RD condition. A General Estimating Equation (GEE) model was developed and the modeling results showed that vehicle model was a significant factor in determining the AERs in taxi cabs under RD condition. Driving speed and car age were positively associated with AERs but not statistically significant. Overall, AERs measured in taxi cabs were much higher than typical AERs people usually encounter in indoor environments such as homes, offices, and even regular passenger vehicles.
AER synthetic generation in hardware for bio-inspired spiking systems
NASA Astrophysics Data System (ADS)
Linares-Barranco, Alejandro; Linares-Barranco, Bernabe; Jimenez-Moreno, Gabriel; Civit-Balcells, Anton
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between a huge number of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate 'events' according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging of complex AER systems. This paper addresses the problem of converting, in a computer, a conventional frame-based video stream into the spike-event-based AER representation. Several software methods have been proposed for synthetic generation of AER for bio-inspired systems. This paper presents a hardware implementation of one such method, based on Linear-Feedback-Shift-Register (LFSR) pseudo-random number generation. The sequence of events generated by this hardware, which follows a Poisson distribution like a biological neuron, has been reconstructed using two AER integrator cells. The reconstruction error for a set of images producing different event traffic loads on the AER bus is used as the evaluation criterion. A VHDL description of the method, which includes the Xilinx PCI Core, has been implemented and tested using a general-purpose PCI-AER board.
This PCI-AER board has been developed by the authors and uses a Spartan II 200 FPGA. This system for AER synthetic generation is capable of transforming frames of 64x64 pixels, received through a standard computer PCI bus, at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second.
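The LFSR-based synthesis idea described above can be modelled in a few lines of software. The following is a hypothetical Python sketch (the 16-bit Galois LFSR taps, the seed, and the tiny "frame" are illustrative assumptions, not the paper's VHDL design): on each clock tick the LFSR produces a pseudo-random value, and a pixel emits an address event whenever its intensity exceeds that value, so each pixel's mean event rate is proportional to its intensity.

```python
def lfsr16(seed=0xACE1):
    """16-bit Galois LFSR (taps 16,14,13,11); yields pseudo-random words."""
    state = seed
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        yield state

def synthesize_aer(frame, ticks=1000, seed=0xACE1):
    """Turn an 8-bit grayscale 'frame' (a flat list of intensities) into a
    list of (tick, address) events. Each tick, a pixel fires iff its
    intensity exceeds the current LFSR sample scaled to 0..255, so the
    mean event rate is proportional to intensity."""
    rng = lfsr16(seed)
    events = []
    for t in range(ticks):
        sample = next(rng) >> 8          # scale 16-bit word down to 0..255
        for addr, intensity in enumerate(frame):
            if intensity > sample:
                events.append((t, addr))
    return events

frame = [0, 64, 128, 255]                # four "pixels" of rising intensity
events = synthesize_aer(frame, ticks=1000)
```

In the real hardware the comparison and address arbitration happen in parallel on the FPGA; this serial loop only illustrates the rate-coding principle.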
del Carmen Burón-Barral, Maria; Gosink, Khoosheh K.; Parkinson, John S.
2006-01-01
The Escherichia coli Aer protein contains an N-terminal PAS domain that binds flavin adenine dinucleotide (FAD), senses aerotactic stimuli, and communicates with the output signaling domain. To explore the roles of the intervening F1 and HAMP segments in Aer signaling, we isolated plasmid-borne aerotaxis-defective mutations in a host strain lacking all chemoreceptors of the methyl-accepting chemotaxis protein (MCP) family. Under these conditions, Aer alone established the cell's run/tumble swimming pattern and modulated that behavior in response to oxygen gradients. We found two classes of Aer mutants: null and clockwise (CW) biased. Most mutant proteins exhibited the null phenotype: failure to elicit CW flagellar rotation, no aerosensing behavior in MCP-containing hosts, and no apparent FAD-binding ability. However, null mutants had low Aer expression levels caused by rapid degradation of apparently nonnative subunits. Their functional defects probably reflect the absence of a protein product. In contrast, CW-biased mutant proteins exhibited normal expression levels, wild-type FAD binding, and robust aerosensing behavior in MCP-containing hosts. The CW lesions evidently shift unstimulated Aer output to the CW signaling state but do not block the Aer input-output pathway. The distribution and properties of null and CW-biased mutations suggest that the Aer PAS domain may engage in two different interactions with HAMP and the HAMP-proximal signaling domain: one needed for Aer maturation and another for promoting CW output from the Aer signaling domain. Most aerotaxis-defective null mutations in these regions seemed to affect maturation only, indicating that these two interactions involve structurally distinct determinants. PMID:16672601
NASA Astrophysics Data System (ADS)
Gómez-Rodríguez, F.; Linares-Barranco, A.; Paz, R.; Miró-Amarante, L.; Jiménez, G.; Civit, A.
2007-05-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity among a huge number of neurons located on different chips.[1] By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate "events" according to their activity levels. That is, more active neurons generate more events per unit time and access the interchip communication channel more frequently than neurons with low activity. In neuromorphic system development, AER brings several advantages for real-time image processing systems: (1) AER represents the information as a time-continuous stream rather than as frames; (2) AER sends the most important information first (although this depends on the sender); (3) AER allows information to be processed as soon as it is received. When AER is used in the artificial vision field, each pixel is treated as a neuron, so a pixel's intensity is represented as a sequence of events; by modifying the number and frequency of these events, it is possible to perform image filtering. In this paper we present four image filters using AER: (a) noise addition and suppression, (b) brightness modification, (c) single moving object tracking and (d) geometrical transformations (rotation, translation, reduction and magnification). For testing and debugging, we use the USB-AER board developed by the Robotic and Technology of Computers Applied to Rehabilitation (RTCAR) research group. This board is based on an FPGA devoted to managing the AER functionality, and also includes a microcontroller for USB communication, 2 MB of RAM and 2 AER ports (one for input and one for output).
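Of the four filters, brightness modification is the easiest to illustrate: since a pixel's intensity is encoded as event frequency, dimming amounts to thinning each pixel's event train by a fixed factor. The sketch below is a hypothetical software analogue of that idea (not the FPGA implementation described in the paper), keeping every n-th event per address.

```python
def dim_stream(events, keep_ratio=0.5):
    """Reduce apparent brightness by thinning each pixel's event train.
    Keeps roughly `keep_ratio` of the events per address deterministically
    (every n-th event), so each pixel's event frequency -- and hence its
    reconstructed intensity -- scales down by the same factor."""
    period = max(1, round(1 / keep_ratio))
    seen = {}                       # events observed so far, per address
    kept = []
    for t, addr in events:
        n = seen.get(addr, 0)
        if n % period == 0:         # keep the 1st, (period+1)-th, ... event
            kept.append((t, addr))
        seen[addr] = n + 1
    return kept

# Two "pixels": address 0 fires 100 times, address 1 fires 50 times.
stream = [(t, 0) for t in range(100)] + [(t, 1) for t in range(50)]
dimmed = dim_stream(stream, keep_ratio=0.5)   # halves both event rates
```

Brightening would work the other way around, by inserting extra events between the existing ones.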
Breen, Michael S; Breen, Miyuki; Williams, Ronald W; Schultz, Bradley D
2010-12-15
A critical aspect of air pollution exposure models is the estimation of the air exchange rate (AER) of individual homes, where people spend most of their time. The AER, which is the airflow into and out of a building, is a primary mechanism for entry of outdoor air pollutants and removal of indoor source emissions. The mechanistic Lawrence Berkeley Laboratory (LBL) AER model was linked to a leakage area model to predict AER from questionnaires and meteorology. The LBL model was also extended to include natural ventilation (LBLX). Using literature-reported parameter values, AER predictions from LBL and LBLX models were compared to data from 642 daily AER measurements across 31 detached homes in central North Carolina, with corresponding questionnaires and meteorological observations. Data was collected on seven consecutive days during each of four consecutive seasons. For the individual model-predicted and measured AER, the median absolute difference was 43% (0.17 h(-1)) and 40% (0.17 h(-1)) for the LBL and LBLX models, respectively. Additionally, a literature-reported empirical scale factor (SF) AER model was evaluated, which showed a median absolute difference of 50% (0.25 h(-1)). The capability of the LBL, LBLX, and SF models could help reduce the AER uncertainty in air pollution exposure models used to develop exposure metrics for health studies.
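The LBL model evaluated above combines stack-driven (temperature difference) and wind-driven infiltration in quadrature through the home's leakage area. A minimal Python sketch of that structure, with illustrative stack and wind factors rather than the terrain- and shielding-specific values tabulated for the model:

```python
import math

def lbl_aer(leakage_area_m2, volume_m3, delta_t_k, wind_m_s,
            stack_factor=0.12, wind_factor=0.13):
    """Sketch of the LBL infiltration model: airflow through the building
    envelope driven by the stack effect and wind, combined in quadrature.
        Q   = L * sqrt(fs^2 * |dT| + fw^2 * U^2)   [m^3/s]
        AER = 3600 * Q / V                          [1/h]
    fs and fw here are illustrative placeholders; the published model
    tabulates them by building height, terrain, and wind shielding."""
    q = leakage_area_m2 * math.sqrt(stack_factor ** 2 * abs(delta_t_k)
                                    + wind_factor ** 2 * wind_m_s ** 2)
    return 3600.0 * q / volume_m3

# Hypothetical home: 0.05 m^2 effective leakage area, 300 m^3 volume,
# 10 K indoor-outdoor temperature difference, 3 m/s wind.
aer = lbl_aer(0.05, 300.0, 10.0, 3.0)
```

The LBLX extension in the abstract adds a natural-ventilation (open-window) airflow term on top of this leakage flow.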
Phetrak, Athit; Lohwacharin, Jenyuk; Sakai, Hiroshi; Murakami, Michio; Oguma, Kumiko; Takizawa, Satoshi
2014-06-01
Anion exchange resins (AERs) with different properties were evaluated for their ability to remove dissolved organic matter (DOM) and bromide, and to reduce disinfection by-product (DBP) formation potentials of water collected from a eutrophic surface water source in Japan. DOM and bromide were simultaneously removed by all selected AERs in batch adsorption experiments. A polyacrylic magnetic ion exchange resin (MIEX®) showed faster dissolved organic carbon (DOC) removal than other AERs because it had the smallest resin bead size. Aromatic DOM fractions with molecular weight larger than 1600 Da and fluorescent organic fractions of fulvic acid- and humic acid-like compounds were efficiently removed by all AERs. Polystyrene AERs were more effective in bromide removal than polyacrylic AERs. This result implied that the properties of AERs, i.e. material and resin size, influenced not only DOM removal but also bromide removal efficiency. MIEX® showed significant chlorinated DBP removal because it had the highest DOC removal within 30 min, whereas polystyrene AERs efficiently removed brominated DBPs, especially brominated trihalomethane species. The results suggested that, depending on source water DOM and bromide concentration, selecting a suitable AER is a key factor in effective control of chlorinated and brominated DBPs in drinking water. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
Long-term risks of subsequent primary neoplasms among survivors of childhood cancer.
Reulen, Raoul C; Frobisher, Clare; Winter, David L; Kelly, Julie; Lancashire, Emma R; Stiller, Charles A; Pritchard-Jones, Kathryn; Jenkinson, Helen C; Hawkins, Michael M
2011-06-08
Survivors of childhood cancer are at excess risk of developing subsequent primary neoplasms, but the long-term risks are uncertain. To investigate long-term risks of subsequent primary neoplasms in survivors of childhood cancer, to identify the types that contribute most to long-term excess risk, and to identify subgroups of survivors at substantially increased risk of particular subsequent primary neoplasms that may require specific interventions. British Childhood Cancer Survivor Study--a population-based cohort of 17,981 5-year survivors of childhood cancer diagnosed with cancer at younger than 15 years between 1940 and 1991 in Great Britain, followed up through December 2006. Standardized incidence ratios (SIRs), absolute excess risks (AERs), and cumulative incidence of subsequent primary neoplasms. After a median follow-up time of 24.3 years (mean = 25.6 years), 1354 subsequent primary neoplasms were ascertained; the most frequently observed were central nervous system (n = 344), nonmelanoma skin cancer (n = 278), digestive (n = 105), genitourinary (n = 100), breast (n = 97), and bone (n = 94). Overall, subsequent primary neoplasms occurred at almost 4 times the expected rate (SIR, 3.9; 95% confidence interval [CI], 3.6-4.2; AER, 16.8 per 10,000 person-years). The AER at older than 40 years was highest for digestive and genitourinary subsequent primary neoplasms (AER, 5.9 [95% CI, 2.5-9.3]; and AER, 6.0 [95% CI, 2.3-9.6] per 10,000 person-years, respectively); 36% of the total AER was attributable to these 2 subsequent primary neoplasm sites. The cumulative incidence of colorectal cancer for survivors treated with direct abdominopelvic irradiation was 1.4% (95% CI, 0.7%-2.6%) by age 50 years, comparable with the 1.2% risk in individuals with at least 2 first-degree relatives affected by colorectal cancer. Among a cohort of British childhood cancer survivors, the greatest excess risk associated with subsequent primary neoplasms at older than 40 years was for digestive and genitourinary neoplasms.
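The two summary statistics used throughout this abstract have simple definitions: the SIR is the ratio of observed to expected cases, and the AER is the observed-minus-expected count per unit of person-time. A small sketch with hypothetical numbers (not the study's data):

```python
def sir_and_aer(observed, expected, person_years, per=10_000):
    """Standardized incidence ratio and absolute excess risk.
        SIR = observed / expected
        AER = (observed - expected) / person-years, scaled per 10,000."""
    sir = observed / expected
    aer = (observed - expected) / person_years * per
    return sir, aer

# Hypothetical cohort: 120 subsequent neoplasms observed where 30 were
# expected over 50,000 person-years of follow-up.
sir, aer = sir_and_aer(observed=120, expected=30, person_years=50_000)
# sir = 4.0; aer = 18.0 per 10,000 person-years
```

The SIR captures relative risk, while the AER captures absolute burden; that is why the abstract can report a modest AER alongside a large SIR for rare tumour types.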
Fgf16 is essential for pectoral fin bud formation in zebrafish
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, Ryohei; Kamei, Eriko; Hotta, Yuuhei
2006-08-18
Zebrafish pectoral fin bud formation is an excellent model for studying morphogenesis. Fibroblast growth factors (Fgfs) and sonic hedgehog (shh) are essential for pectoral fin bud formation. We found that Fgf16 was expressed in the apical ectodermal ridge (AER) of fin buds. A knockdown of Fgf16 function resulted in no fin bud outgrowth. Fgf16 is required for cell proliferation and differentiation in the mesenchyme and the AER of the fin buds, respectively. Fgf16 functions downstream of Fgf10, a mesenchymal factor, signaling to induce the expression of Fgf4 and Fgf8 in the AER. Fgf16 in the AER and shh in the zone of polarizing activity (ZPA) interact to induce and/or maintain each other's expression. These findings have revealed that Fgf16, a newly identified AER factor, plays a crucial role in pectoral fin bud outgrowth by mediating the interactions of AER-mesenchyme and AER-ZPA.
Chloride Ion Adsorption Capacity of Anion Exchange Resin in Cement Mortar.
Lee, Yunsu; Lee, Hanseung; Jung, Dohyun; Chen, Zhengxin; Lim, Seungmin
2018-04-05
This paper presents the effect of anion exchange resin (AER) on the adsorption of chloride ions in cement mortar. The kinetic and equilibrium behaviors of AER were investigated in distilled water and Ca(OH)₂ saturated solutions, and then the adsorption of chloride ions by the AER in the mortar specimen was determined. The AER was used as a partial replacement for sand in the mortar specimen. The mortar specimen was coated with epoxy, except for an exposed surface, and then immersed in a NaCl solution for 140 days. The chloride content in the mortar specimen was characterized by energy dispersive X-ray fluorescence analysis and electron probe microanalysis. The results showed that the AER could adsorb the chloride ions from the solution rapidly but had a relatively low performance when the pH of its surrounding environment increased. When the AER was mixed in the cement mortar, its chloride content was higher than that of the cement matrix around it, which confirms the chloride ion adsorption capacity of the AER.
Gong, Yu-Xin; Zhu, Bin; Liu, Guang-Lu; Liu, Lei; Ling, Fei; Wang, Gao-Xue; Xu, Xin-Gang
2015-01-01
To reduce the economic losses caused by diseases in the aquaculture industry, more efficient and economic prophylactic measures should be urgently investigated. In this research, the effects of novel functionalized single-walled carbon nanotubes (SWCNTs) applied as a delivery vehicle for recombinant Aeromonas hydrophila vaccine administration via bath or injection in juvenile grass carp were studied. The results showed that SWCNTs, as a vector for the recombinant protein aerA, augmented the production of specific antibodies, markedly stimulated the induction of immune-related genes, and induced a higher survival rate compared with the free aerA subunit vaccine. Furthermore, we compared bath and intramuscular injection immunization routes for the SWCNTs-aerA vaccine, and found that similar antibody levels induced by SWCNTs-aerA were observed for both routes. Meanwhile, a similar relative percentage survival (approximately 80%) was found in both a 40 mg/L bath immunization group and a 20 μg injection group. The results indicate that functionalized SWCNTs could be a promising delivery vehicle to potentiate the immune response of recombinant vaccines, and might be used to vaccinate juvenile fish by the bath administration method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Measurement of air exchange rates in different indoor environments using continuous CO2 sensors.
You, Yan; Niu, Can; Zhou, Jian; Liu, Yating; Bai, Zhipeng; Zhang, Jiefeng; He, Fei; Zhang, Nan
2012-01-01
A new air exchange rate (AER) monitoring method using continuous CO2 sensors was developed and validated through both laboratory experiments and field studies. Controlled laboratory simulation tests were conducted in a 1-m3 environmental chamber at different AERs (0.1-10.0 hr(-1)). AERs were determined using the decay method based on box model assumptions. Field tests were conducted in classrooms, dormitories, meeting rooms and apartments during 2-5 weekdays using CO2 sensors coupled with data loggers. Indoor temperature, relative humidity (RH), and CO2 concentrations were continuously monitored while outdoor parameters combined with on-site climate conditions were recorded. Statistical results indicated that good laboratory performance was achieved: duplicate precision was within 10%, and the measured AERs were 90%-120% of the real AERs. Average AERs were 1.22, 1.37, 1.10, 1.91 and 0.73 hr(-1) in dormitories, air-conditioned classrooms, classrooms with an air circulation cooling system, reading rooms, and meeting rooms, respectively. In a particulate matter exposure study of elderly residents, the homes had AER values ranging from 0.29 to 3.46 hr(-1) in fall and 0.12 to 1.39 hr(-1) in winter, with a median AER of 1.15 hr(-1).
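The decay method referred to above follows from the single-zone box model: with indoor CO2 sources off, the indoor excess concentration decays exponentially at the AER. A minimal sketch with illustrative concentrations:

```python
import math

def aer_from_decay(c0_ppm, ct_ppm, c_out_ppm, hours):
    """Tracer-decay estimate of the air exchange rate under the single-zone
    box-model assumption. With sources off, indoor excess CO2 decays as
        C(t) - Cout = (C0 - Cout) * exp(-AER * t),
    so  AER = ln((C0 - Cout) / (Ct - Cout)) / t   [1/h]."""
    return math.log((c0_ppm - c_out_ppm) / (ct_ppm - c_out_ppm)) / hours

# Illustrative numbers: indoor CO2 falls from 1400 to 800 ppm over 2 h
# with 400 ppm outdoors.
aer = aer_from_decay(c0_ppm=1400, ct_ppm=800, c_out_ppm=400, hours=2.0)
```

In practice the decay slope is fitted over many logged readings rather than two points, which is what the continuous sensors in the study make possible.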
A review of air exchange rate models for air pollution exposure assessments.
Breen, Michael S; Schultz, Bradley D; Sohn, Michael D; Long, Thomas; Langstaff, John; Williams, Ronald; Isaacs, Kristin; Meng, Qing Yu; Stallings, Casson; Smith, Luther
2014-11-01
A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pollutants and for removal of indoor-emitted air pollutants. This paper presents an overview and critical analysis of the scientific literature on empirical and physically based AER models for residential and commercial buildings; the models highlighted here are feasible for exposure assessments as extensive inputs are not required. Models are included for the three types of airflows that can occur across building envelopes: leakage, natural ventilation, and mechanical ventilation. Guidance is provided to select the preferable AER model based on available data, desired temporal resolution, types of airflows, and types of buildings included in the exposure assessment. For exposure assessments with some limited building leakage or AER measurements, strategies are described to reduce AER model uncertainty. This review will facilitate the selection of AER models in support of air pollution exposure assessments.
An introduction to analytical methods for the postmarketing surveillance of veterinary vaccines.
Siev, D
1999-01-01
Any analysis of spontaneous AER data must consider the many biases inherent in the observation and reporting of vaccine adverse events. The absence of a clear probability structure requires statistical procedures to be used in a spirit of exploratory description rather than definitive confirmation. The extent of such descriptions should be temperate, without the implication that they extend to parent populations. It is important to recognize the presence of overdispersion in selecting methods and constructing models. Important stochastic or systematic features of the data may always be unknown. Our attempts to delineate what constitutes an AER have not eliminated all the fuzziness in its definition. Some count every event in a report as a separate AER. Besides confusing the role of event and report, this introduces a complex correlational structure, since multiple event descriptions received in a single report can hardly be considered independent. The many events described by one reporter would then become inordinately weighted. The alternative is to record an AER once, regardless of how many event descriptions it includes. As a practical compromise, many regard the simultaneous submission of several report forms by one reporter as a single AER, and the next submission by that reporter as another AER. This method is reasonable when reporters submit AERs very infrequently. When individual reporters make frequent reports, it becomes difficult to justify the inconsistency of counting multiple events as a single AER when they are submitted together, but as separate AERs when they are reported at different times. While either choice is imperfect, the latter approach is currently used by the USDA and its licensed manufacturers in developing a mandatory postmarketing surveillance system for veterinary immunobiologicals in the United States. 
Under the proposed system, summaries of an estimated 10,000 AERs received annually by the manufacturers would be submitted to the USDA. In quantitative summaries, AERs received from lay consumers are usually weighted equally with those received from veterinary health professionals, although arguments have been advanced for separate classifications. The emphasis on AER rate estimation differentiates the surveillance of veterinary vaccines by the USDA CVB from the surveillance of veterinary drugs as practiced by the Food and Drug Administration (FDA) Center for Veterinary Medicine (CVM). The FDA CVM does, in fact, perform a retrodictive causality assessment for individual AERs (Parkhie et al., 1995). This distinction reflects the differences between vaccines and drugs, as well as the difference in regulatory philosophy between the FDA and the USDA. The modified Kramer algorithm (Kramer et al., 1979) used by the FDA relies on features more appropriate to drug therapy than vaccination, such as an ongoing treatment regimen which allows evaluation of the response to dechallenge and rechallenge. In tracking AERs, the FDA has emphasized the inclusion of clinical manifestations on labels and inserts, while the USDA has been reluctant to have such information appear in product literature or to use postmarketing data for this purpose. The potential for the misuse of spontaneous AER data is great. Disinformation is likely when the nature of this type of data is misunderstood and inappropriate analytical methods blindly employed. A greater danger lies in the glib transformation of AER data into something else entirely. Since approval before publication is not required, advertisements for veterinary vaccines appear with claims such as "over 3 million doses, 99.9905% satisfaction rating," or "11,500,000 doses, 99.98% reaction free." These claims, presumably based on spontaneous AERs, are almost fraudulent in their deceptiveness. 
Are we to suppose that 11.5 million vaccinations were observed for reactions? In comparing the two advertisements, we find the second presumed AER rate is double the first. (ABSTRACT TRUNCATED)
Aerobic and Strength Training in Concomitant Metabolic Syndrome and Type 2 Diabetes
Earnest, Conrad P.; Johannsen, Neil M.; Swift, Damon L.; Gillison, Fiona B.; Mikus, Catherine R.; Lucia, Alejandro; Kramer, Kimberly; Lavie, Carl J.; Church, Timothy S.
2014-01-01
Purpose Concomitant type 2 diabetes (T2D) and metabolic syndrome exacerbates mortality risk; yet, few studies have examined the effect of combining aerobic (AER) and resistance (RES) training (AER+RES) for individuals with T2D and metabolic syndrome. Methods We examined AER, RES, and AER+RES training (9 months) commensurate with physical activity guidelines in individuals with T2D (N=262, 63% female, 44% black). Primary outcomes were change in, and prevalence of, metabolic syndrome score at follow-up (mean, 95% CI). Secondary outcomes included maximal cardiorespiratory fitness (VO2peak, and estimated METs from time-to-exhaustion (TTE)) and exercise efficiency, calculated as the slope of the line between ventilatory threshold, respiratory compensation, and maximal fitness. General linear models and bootstrapped Spearman correlations were used to examine changes in metabolic syndrome associated with training and with the primary and secondary outcome variables. Results We observed a significant decrease in metabolic syndrome scores (P-for-trend, 0.003) for AER (−0.59; 95% CI, −1.00, −0.21) and AER+RES (−0.79; 95% CI, −1.40, −0.35), both being significant (P < 0.02) vs. Control (0.26; 95% CI, −0.58, 0.40) and RES (−0.13; 95% CI, −1.00, 0.24). This led to a reduction in metabolic syndrome prevalence for the AER (56% vs. 43%) and AER+RES (55% vs. 46%) groups between baseline and follow-up. The observed decrease in metabolic syndrome was mediated by significant improvements in exercise efficiency for the AER and AER+RES training groups (P<0.05), which was more strongly related to TTE (25-30%; r = −0.38; 95% CI: −0.55, −0.19) than VO2peak (5-6%; r = −0.24; 95% CI: −0.45, −0.01). Conclusion Aerobic and AER+RES training significantly improve metabolic syndrome scores and prevalence in patients with T2D. These improvements appear to be associated with improved exercise efficiency and are more strongly related to improved TTE than to VO2peak. PMID:24389523
Verification of a neutronic code for transient analysis in reactors with Hex-z geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez-Pintor, S.; Verdu, G.; Ginestar, D.
Due to the geometry of the fuel bundles, to simulate reactors such as VVER reactors it is necessary to develop methods that can deal with hexagonal prisms as basic elements of the spatial discretization. The main features of a code based on a high order finite element method for the spatial discretization of the neutron diffusion equation and an implicit difference method for the time discretization of this equation are presented, and the performance of the code is tested by solving the first exercise of the AER transient benchmark. The obtained results are compared with the reference results of the benchmark and with the results provided by the PARCS code. (authors)
Development of a Graphics Based Automated Emergency Response System (AERS) for Rail Transit Systems
DOT National Transportation Integrated Search
1989-05-01
This report presents an overview of the second generation Automated Emergency Response System (AERS2). Developed to assist transit systems in responding effectively to emergency situations, AERS2 is a microcomputer-based information retrieval system ...
Funk, S E; Reaven, N L
2014-04-01
The use of flexible endoscopes is growing rapidly around the world. Dominant approaches to high-level disinfection among resource-constrained countries include fully manual cleaning and disinfection and the use of automated endoscope reprocessors (AERs). Suboptimal reprocessing at any step can potentially lead to contamination, with consequences for patients and healthcare systems. To compare the potential results of guideline-recommended AERs with manual disinfection along three dimensions: productivity, need for endoscope repair, and infection transmission risk in India, China, and Russia. Financial modelling using data from peer-reviewed published literature and country-specific market research. In countries where revenue can be gained through productivity improvements, conversion to automated reprocessing has a positive direct impact on financial performance, paying back the capital investment within 14 months in China and seven months in Russia. In India, AER-generated savings and revenue offset nearly all of the additional operating costs needed to support automated reprocessing. Among endoscopy facilities in India and China, current survey-reported practices in endoscope reprocessing using manual soaking may place patients at risk of exposure to pathogens leading to infections. Conversion from manual soaking to the use of AERs, as recommended by the World Gastroenterology Organization, may generate cost and revenue offsets that could produce direct financial gains for some endoscopy units in Russia and China. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Zhao, Bo; Ding, Ruoxi; Chen, Shoushun; Linares-Barranco, Bernabe; Tang, Huajin
2015-09-01
This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%.
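The tempotron classifier described above is built from leaky integrate-and-fire neurons driven by address-event spike times. A minimal sketch of that dynamic: the membrane potential accumulates synaptic kicks at the input spike times, leaks exponentially between them, and the pattern is classified by whether the potential ever crosses threshold. All parameters (tau, threshold, weights) are illustrative, not from the paper:

```python
import math

def lif_fires(spike_times, weights, tau=10.0, threshold=1.0,
              t_end=50.0, dt=0.1):
    """True if a leaky integrate-and-fire neuron driven by the given
    (spike time, synaptic weight) inputs crosses threshold by t_end."""
    events = sorted(zip(spike_times, weights))
    t, v, i = 0.0, 0.0, 0
    while t < t_end:
        v *= math.exp(-dt / tau)              # membrane leak
        while i < len(events) and events[i][0] <= t:
            v += events[i][1]                 # synaptic kick
            i += 1
        if v >= threshold:
            return True                       # output spike: class "match"
        t += dt
    return False

print(lif_fires([1.0, 2.0, 3.0], [0.5, 0.4, 0.4]))    # True: spikes cluster in time
print(lif_fires([1.0, 20.0, 40.0], [0.5, 0.4, 0.4]))  # False: potential leaks away
```

Tempotron learning would adjust the weights so that target patterns fire and others do not; only the inference dynamic is sketched here.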
Time-recovering PCI-AER interface for bio-inspired spiking systems
NASA Astrophysics Data System (ADS)
Paz-Vicente, R.; Linares-Barranco, A.; Cascado, D.; Vicente, S.; Jimenez, G.; Civit, A.
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate 'events' according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on screen, and (b) injecting a sequence of events at some point of the AER structure. This is necessary for testing and debugging complex AER systems. This paper presents a PCI to AER interface that dispatches a sequence of events received from the PCI bus, with embedded timing information establishing when each event will be delivered. A set of specialized state machines has been introduced to recover from the time delays introduced by the asynchronous AER bus. On the input channel, the interface captures events, assigns each a timestamp, and delivers them through the PCI bus to MATLAB applications. It has been implemented in real-time hardware using VHDL and tested on a PCI-AER board, developed by the authors, that includes a Spartan II 200 FPGA. The demonstration hardware is currently capable of sending and receiving events at a peak rate of 8.3 Mev/sec and a typical rate of 1 Mev/sec.
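The dispatch side described above — events with embedded timestamps, delivered when their timestamps come due, with recovery when the asynchronous bus has already made an event late — can be sketched in software. This is a hypothetical illustration of the scheduling logic, not the board's VHDL:

```python
# Sketch: replay timestamped address events in arrival order, emitting
# each when its timestamp is reached on a monotonic clock, and flagging
# events whose timestamp is already in the past (delays the interface's
# state machines must recover from). Timestamps are arbitrary ticks.
def dispatch(events):
    """events: list of (timestamp, address) in arrival order.
    Returns (delivery_time, address, was_late) per event."""
    clock = 0
    delivered = []
    for ts, addr in events:
        late = ts < clock              # bus delay pushed us past this timestamp
        clock = max(clock, ts)         # wait until due, or deliver immediately
        delivered.append((clock, addr, late))
    return delivered

out = dispatch([(3, 0x07), (5, 0x12), (4, 0x2A)])
# -> [(3, 7, False), (5, 18, False), (5, 42, True)]
```

The last event is late because the preceding one held the channel until tick 5; the hardware analogue is absorbing such slips without losing event order.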
Reichman, Rivka; Shirazi, Elham; Colliver, Donald G; Pennell, Kelly G
2017-02-22
Vapor intrusion (VI) is well known to be difficult to characterize because indoor air (IA) concentrations exhibit considerable temporal and spatial variability in homes throughout impacted communities. To overcome this and other limitations, most VI science has focused on subsurface processes; however, there is a need to understand the role of aboveground processes, especially building operation, in the context of VI exposure risks. This tutorial review focuses on building air exchange rates (AERs) and reviews the literature relating building AERs to decision making at VI sites. Commonly referenced AER values used by VI regulators and practitioners do not account for the variability in AER values that have been published in indoor air quality studies. The information presented herein highlights that seasonal differences, short-term weather conditions, home age and air conditioning status, which are well known to influence AERs, are also likely to influence IA concentrations at VI sites. Results of a 3D VI model in combination with relevant AER values reveal that IA concentrations can vary by more than one order of magnitude due to air conditioning status and by one order of magnitude due to house age. Collectively, the data presented strongly support the need to consider AERs when making decisions at VI sites.
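Why AER drives indoor concentrations can be seen from the standard single-zone mass balance, dC/dt = S/V + AER·(C_out − C), whose steady state for an indoor source S is C = C_out + S/(AER·V). A hedged sketch with entirely hypothetical numbers (not the 3D model or values from the review):

```python
# Single-zone steady-state mass balance: an indoor vapor source S (mg/h)
# diluted by air exchange at rate aer (1/h) in a building of volume V (m^3).
# All numbers are illustrative only.
def steady_state_indoor(source_mg_h, volume_m3, aer_per_h, c_out=0.0):
    """Steady-state indoor concentration in mg/m^3."""
    return c_out + source_mg_h / (aer_per_h * volume_m3)

V, S = 400.0, 2.0
for aer in (0.18, 0.45, 1.26):   # tight, typical, leaky (illustrative span)
    print(f"AER {aer}/h -> {steady_state_indoor(S, V, aer):.4f} mg/m^3")
```

Halving the AER doubles the steady-state indoor concentration, which is consistent with the order-of-magnitude swings in IA concentration the review attributes to AER differences.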
Appropriate prediction of residential air exchange rate (AER) is important for estimating human exposures in the residential microenvironment, as AER drives the infiltration of outdoor-generated air pollutants indoors. AER differences among homes may result from a number of fact...
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, M. K. W.; Weisenstein, Debra; Shia, Run-Li; Sze, N. D.
1997-01-01
This is the first semi-annual report for NAS5-97039 summarizing work performed for January 1997 through June 1997. Work in this project is related to NAS1-20666, also funded by NASA ACMAP. The work funded in this project also benefits from work at AER associated with the AER three-dimensional isentropic transport model funded by NASA AEAP and the AER two-dimensional climate-chemistry model (co-funded by Department of Energy). The overall objective of this project is to improve the understanding of coupling processes between atmospheric chemistry and climate. Model predictions of the future distributions of trace gases in the atmosphere constitute an important component of the input necessary for quantitative assessments of global change. We will concentrate on the changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The key modeling tools for this work are the AER two-dimensional chemistry-transport model, the AER two-dimensional stratospheric sulfate model, and the AER three-wave interactive model with full chemistry.
Clemente-Suárez, Vicente Javier; Dalamitros, Athanasios; Ribeiro, João; Sousa, Ana; Fernandes, Ricardo J; Vilas-Boas, J Paulo
2017-05-01
This study analysed the effects of two different periodization strategies on physiological parameters at various exercise intensities in competitive swimmers. Seventeen athletes of both sexes were divided into two groups, the traditional periodization group (TPG, n = 7) and the reverse periodization group (RPG, n = 10). Each group followed a 10-week training period based on the two different periodization strategies. Before and after training, swimming velocity (SV), energy expenditure (EE), energy cost (EC) and the percentages of aerobic (%Aer) and anaerobic (%An) energy contribution at the swimming intensities corresponding to the aerobic threshold (AerT), the anaerobic threshold (AnT) and the velocity at maximal oxygen uptake (vVO2max) were measured. Both groups increased the %An at the AerT and AnT intensities (P ≤ .05). In contrast, at the AnT intensity, EE and EC increased only in the TPG. In addition, %Aer, %An, EE and EC at vVO2max did not change in either group (P > .05), and no changes were observed in SV in the TPG or RPG at any of the three intensities. These results indicate that the two periodization schemes confer almost analogous adaptations in specific physiological parameters in competitive swimmers. However, given the large difference in total training volume between the two groups, it is suggested that the implementation of the reverse periodization model is an effective and time-efficient strategy to improve performance, mainly for swimming events where the AnT is an important performance indicator.
2012-01-01
Background The instrument channels of gastrointestinal (GI) endoscopes may be heavily contaminated with bacteria even after high-level disinfection (HLD). The British Society of Gastroenterology guidelines emphasize the benefits of manually brushing endoscope channels and using automated endoscope reprocessors (AERs) for disinfecting endoscopes. In this study, we aimed to assess the effectiveness of decontamination using reprocessors after HLD by comparing cultured samples obtained from the biopsy channels (BCs) of GI endoscopes and the internal surfaces of AERs. Methods We conducted a 5-year prospective study. Every month, random consecutive sampling was carried out after a complete reprocessing cycle; 420 rinse and swab samples were collected from the BCs and the internal surfaces of AERs, respectively. Of the 420 rinse samples collected from the BCs of the GI endoscopes, 300 were obtained from the BCs of gastroscopes and 120 from the BCs of colonoscopes. Samples were collected by flushing the BCs with sterile distilled water, and by swabbing the residual water from the AERs after reprocessing. These samples were cultured to detect the presence of aerobic and anaerobic bacteria and mycobacteria. Results The number of culture-positive samples obtained from BCs (13.6%, 57/420) was significantly higher than that obtained from AERs (1.7%, 7/420). In addition, the numbers of culture-positive samples obtained from the BCs of gastroscopes (10.7%, 32/300) and colonoscopes (20.8%, 25/120) were significantly higher than those obtained from the AERs used to reprocess gastroscopes (2.0%, 6/300) and colonoscopes (0.8%, 1/120). Conclusions Culturing rinse samples obtained from BCs provides a better indication of the effectiveness of the decontamination of GI endoscopes after HLD than culturing the swab samples obtained from the inner surfaces of AERs, as the swab samples only indicate whether the AERs themselves are free from microbial contamination. PMID:22943739
Chiu, King-Wah; Tsai, Ming-Chao; Wu, Keng-Liang; Chiu, Yi-Chun; Lin, Ming-Tzung; Hu, Tsung-Hui
2012-09-03
A polishing hybrid AER/UF membrane process for the treatment of a high DOC content surface water.
Humbert, H; Gallard, H; Croué, J-P
2012-03-15
The efficacy of a combined AER/UF (Anion Exchange Resin/Ultrafiltration) process for the polishing treatment of a high-DOC (Dissolved Organic Carbon) content (>8 mgC/L) surface water was investigated at lab scale using a strong base AER. Both resin dose and bead size had a significant impact on the kinetics of DOC removal for short contact times (i.e. <15 min). For resin doses higher than 700 mg/L and median bead sizes below 250 μm, DOC removal remained constant after 30 min of contact time, with very high removal rates (80%). Optimum AER treatment conditions were applied in combination with UF membrane filtration on water previously treated by coagulation-flocculation (i.e. 3 mgC/L). More severe fouling was observed for each filtration run in the presence of AER. This fouling was shown to be mainly reversible and caused by the progressive attrition of the AER through the centrifugal pump, leading to the production of resin particles below 50 μm in diameter. More importantly, the presence of AER significantly lowered the irreversible fouling (loss of permeability recorded after backwash) and reduced the DOC content of the clarified water to 1.8 mgC/L (40% removal rate), a concentration that remained almost constant throughout the experiment. Copyright © 2011 Elsevier Ltd. All rights reserved.
Felicidade, I; Lima, J D; Pesarini, J R; Monreal, A C D; Mantovani, M S; Ribeiro, L R; Oliveira, R J
2014-11-28
Polyphenolic compounds present in rosemary have been found to have antioxidant properties and anticarcinogenic activity, and to increase the detoxification of pro-carcinogens. The aim of this study was to determine the effect of the aqueous extract of rosemary (AER) on mutagenicity induced by methylmethane sulfonate in meristematic cells of Allium cepa, as well as to describe its mode of action. Anti-mutagenicity experiments were carried out with 3 different concentrations of AER, which alone showed no mutagenic effects. In the antimutagenicity experiments, AER showed chemopreventive activity in cultured meristematic cells of A. cepa against exposure to methylmethane sulfonate; post-treatment and simultaneous treatment using pre-incubation protocols were the most effective. Evaluation of the different protocols and the percent reduction in DNA damage indicated bioantimutagenic as well as desmutagenic modes of action for AER. AER may be chemopreventive and antimutagenic.
Growth of Aeromonas species on increasing concentrations of sodium chloride.
Delamare, A P; Costa, S O; Da Silveira, M M; Echeverrigaray, S
2000-01-01
The growth of 16 strains of Aeromonas, representing 12 species of the genus, was examined at different salt levels (0-1.71 M NaCl). All the strains grew on media with 0.34 M NaCl, and nine on media with 0.68 M. Two strains, Aer. enteropelogenes and Aer. trota, were able to grow on media with 0.85 M and 1.02 M NaCl, respectively. Comparison of the growth curves of Aer. hydrophila ATCC 7966 and Aer. trota ATCC 49657 at four concentrations of NaCl (0.08, 0.34, 0.68 and 1.02 M) confirms the high tolerance of Aer. trota, and indicates that high concentrations of salt increase the lag time and decrease the maximum growth rate. However, both strains were able to grow, slowly, in at least 0.68 M NaCl, a sodium chloride concentration currently used as a food preservative.
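For reference, the molar NaCl concentrations above convert to the more familiar % w/v via the molar mass of NaCl (about 58.44 g/mol); a quick sketch:

```python
# Convert NaCl molarity to % w/v (grams per 100 mL).
M_NACL = 58.44  # g/mol, molar mass of NaCl

def nacl_percent_wv(molar):
    g_per_l = molar * M_NACL
    return g_per_l / 10.0   # g/L -> g per 100 mL

for m in (0.34, 0.68, 1.02, 1.71):
    print(f"{m} M -> {nacl_percent_wv(m):.1f}% w/v")
```

So 0.34 M is roughly 2% w/v and the 0.68 M preservative level cited above is roughly 4% w/v.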
Webb, D J; Newman, D J; Chaturvedi, N; Fuller, J H
1996-03-01
In IDDM, microalbuminuria (a urinary albumin excretion rate (AER) of 20-200 micrograms/min) is a predictor of persistent proteinuria and diabetic nephropathy. Early intervention may prevent or reduce the rate of progression of renal complications. The Micral-Test strip can be used to establish a semi-quantitative estimate of AER. We assessed the field performance of the Micral-Test strip in detecting microalbuminuria in the EUCLID study, a Europe-wide, 18-centre study of 530 IDDM participants aged 20 to 59 years. People with macroalbuminuria were excluded. On entry, all participants had albumin concentrations from two overnight urine collections measured by a central laboratory, and the corresponding Micral-Test performed on the two collections locally. A cut-off of > or = mg/l albumin from the first Micral-Test, used to detect a centrally measured albumin concentration > or = 20 mg/l, yielded 29 (5.8%) false negative results and 58 (11.6%) false positive results (sensitivity 70%, specificity 87%). The mean AER from the two collections was compared with the corresponding 'pooled' Micral-Test results (the mean of the two readings). Receiver Operating Characteristic (ROC) curves were used to assess whether there was a suitable 'pooled' Micral-Test result for screening for microalbuminuria. A 'pooled' Micral-Test result (> or = 15 mg/l) was used to detect a mean AER > or = 20 micrograms/min (sensitivity 78%, specificity 77%). This 'pooled' cut-off had already been used for screening onto the study and led to an over-estimate (154 vs. 77) of the true number of microalbuminuric participants in the study. In conclusion, our findings suggest that the Micral-Test strip is not an effective screening tool for microalbuminuria; using the 'pooled' result from two measurements did not improve the sensitivity of the test.
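The sensitivity and specificity figures above follow from the standard confusion-matrix definitions. A small sketch with hypothetical counts (not the EUCLID numbers):

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening counts, for illustration only:
sens, spec = sens_spec(tp=70, fp=58, fn=30, tn=342)
print(sens, spec)  # 0.70 sensitivity, 0.855 specificity
```

A low sensitivity like this means roughly three in ten truly microalbuminuric patients would be missed by the screen, which is the core of the conclusion above.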
Co-Production of Quality in the Applied Education Research Scheme
ERIC Educational Resources Information Center
Ozga, Jenny
2007-01-01
This contribution looks at the ways in which research quality is defined and addressed in the Applied Education Research Scheme (AERS), particularly within the network on Schools and Social Capital, which is one of the four areas of work within the overall AERS scheme. AERS is a five-year programme, funded jointly by the Scottish Executive and the…
Anion-exchange resins (AERs) separate As(V) and As(III) in solution by retaining As(V) and allowing As(III) to pass through. AERs offer several advantages including portability, ease of use, and affordability (relative to other As speciation methods). The use of AERs for the inst...
Serrano-Gotarredona, Rafael; Oster, Matthias; Lichtsteiner, Patrick; Linares-Barranco, Alejandro; Paz-Vicente, Rafael; Gomez-Rodriguez, Francisco; Camunas-Mesa, Luis; Berner, Raphael; Rivas-Perez, Manuel; Delbruck, Tobi; Liu, Shih-Chii; Douglas, Rodney; Hafliger, Philipp; Jimenez-Moreno, Gabriel; Civit Ballcels, Anton; Serrano-Gotarredona, Teresa; Acosta-Jimenez, Antonio J; Linares-Barranco, Bernabé
2009-09-01
This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
Address-event-based platform for bioinspired spiking systems
NASA Astrophysics Data System (ADS)
Jiménez-Fernández, A.; Luján, C. D.; Linares-Barranco, A.; Gómez-Rodríguez, F.; Rivas, M.; Jiménez, G.; Civit, A.
2007-05-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate "events" according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems, it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on the screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging complex AER systems. On the other hand, the use of a commercial personal computer implies depending on software tools and operating systems that can make the system slower and less robust. This paper addresses the problem of communicating several AER-based chips to compose a powerful processing system. The problem was discussed in the Neuromorphic Engineering Workshop of 2006. The platform is based on an embedded computer, a powerful FPGA, and serial links, making the system faster and stand-alone (independent of a PC). A new platform is presented that allows connecting up to eight AER-based chips to a Spartan 3 4000 FPGA. The FPGA is responsible for the address-event network communication and, at the same time, for mapping and transforming the address space of the traffic to implement pre-processing.
An MMU microprocessor (Intel XScale 400 MHz Gumstix Connex computer) is also connected to the FPGA, allowing the platform to implement event-based algorithms that interact with the AER system, such as control algorithms, network connectivity, USB support, etc. The LVDS transceiver allows a bandwidth of up to 1.32 Gbps, around 66 mega events per second (Mevps).
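As a consistency check on the figures above, a 1.32 Gbps link carrying about 66 Mevps implies roughly 20 bits on the wire per serialized event — an inferred figure, since the event word size is not stated in the text:

```python
# Event throughput implied by link bandwidth and serialized event size.
def events_per_second(bandwidth_bps, bits_per_event):
    return bandwidth_bps / bits_per_event

# 1.32 Gbps at an assumed ~20 bits/event (inferred, not stated):
mevps = events_per_second(1.32e9, 20) / 1e6
print(mevps)  # 66.0
```

A larger event word (address plus framing or timestamp bits) would proportionally lower the achievable event rate.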
NASA Astrophysics Data System (ADS)
Wolff, S.; Fraknoi, A.; Hockey, T.; Biemesderfer, C.; Johnson, J.
2010-08-01
Astronomy Education Review (AER) is an online journal and magazine, covering astronomy and space science education and outreach. Founded in 2001 by Andrew Fraknoi and Sidney Wolff, and published until recently by National Optical Astronomy Observatories (NOAO), the journal is now a proud part of the journals operation of the American Astronomical Society (AAS) found online at http://aer.aip.org. If you are presenting at this conference, or reading the conference proceedings, you may be an ideal candidate to publish in AER. Later in this paper, we present some encouraging hints and guidelines for publishing in the journal.
AerChemMIP: Quantifying the effects of chemistry and aerosols in CMIP6
Collins, William J.; Lamarque, Jean-François; Schulz, Michael; ...
2017-02-09
The Aerosol Chemistry Model Intercomparison Project (AerChemMIP) is endorsed by the Coupled-Model Intercomparison Project 6 (CMIP6) and is designed to quantify the climate and air quality impacts of aerosols and chemically reactive gases. These are specifically near-term climate forcers (NTCFs: methane, tropospheric ozone and aerosols, and their precursors), nitrous oxide and ozone-depleting halocarbons. The aim of AerChemMIP is to answer four scientific questions. 1. How have anthropogenic emissions contributed to global radiative forcing and affected regional climate over the historical period? 2. How might future policies (on climate, air quality and land use) affect the abundances of NTCFs and their climate impacts? 3. How do uncertainties in historical NTCF emissions affect radiative forcing estimates? 4. How important are climate feedbacks to natural NTCF emissions, atmospheric composition, and radiative effects? These questions will be addressed through targeted simulations with CMIP6 climate models that include an interactive representation of tropospheric aerosols and atmospheric chemistry. These simulations build on the CMIP6 Diagnostic, Evaluation and Characterization of Klima (DECK) experiments, the CMIP6 historical simulations, and future projections performed elsewhere in CMIP6, allowing the contributions from aerosols and/or chemistry to be quantified. As a result, specific diagnostics are requested as part of the CMIP6 data request to highlight the chemical composition of the atmosphere, to evaluate the performance of the models, and to understand differences in behaviour between them.
Vågstrand, Karin; Lindroos, Anna Karin; Linné, Yvonne
2009-02-01
To describe the differences in socio-economic characteristics and body measurements between low, adequate and high energy reporting (LER, AER and HER) teenagers; furthermore, to investigate the relationship to misreporting mothers. Cross-sectional study. Habitual dietary intake was reported in a questionnaire. Classification into LER, AER and HER used the Goldberg equation within three activity groups based on a physical activity questionnaire and calculated BMR. Stockholm, Sweden. Four hundred and forty-one 16-17-year-old teenagers (57% girls) and their mothers. Of the teenagers, 17-19% were classified as HER and 13-16% as LER. There was a highly significant trend from HER to LER in BMI (P < 0.001) and body fat % (P < 0.001). There was also a trend in the number of working hours of the mother (P = 0.01), family income (P = 0.008) and number of siblings (among boys only) (P = 0.02), but not in the educational level of either father or mother. HER teenagers were lean, had mothers working fewer hours with lower income, and had siblings. It was more likely that an LER girl had an LER mother than an AER mother (OR = 3.32; P = 0.002). The reasons for the high number of over-reporters could be many: misclassification due to growth, the lack of an established eating pattern due to young age, or method-specific factors. Nevertheless, the inverse characteristics of HER compared to LER indicate that this is a specific group, worth further investigation.
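The Goldberg-style classification used above compares the ratio of reported energy intake to calculated BMR against activity-dependent cut-offs. A hedged sketch of that scheme — the cut-off values and inputs below are hypothetical placeholders, not the study's:

```python
# Classify a reporter as LER / AER / HER from the ratio of reported
# energy intake (EI) to basal metabolic rate (BMR), using lower/upper
# Goldberg-style cut-offs per activity group. Cut-offs are illustrative.
CUTOFFS = {"low": (1.1, 2.0), "moderate": (1.3, 2.3), "high": (1.5, 2.6)}

def classify(ei, bmr, activity):
    lo, hi = CUTOFFS[activity]
    ratio = ei / bmr
    if ratio < lo:
        return "LER"   # low energy reporter: implausibly low intake
    if ratio > hi:
        return "HER"   # high energy reporter: implausibly high intake
    return "AER"       # adequate energy reporter

print(classify(ei=7.0, bmr=6.5, activity="moderate"))   # ratio ~1.08 -> LER
print(classify(ei=16.0, bmr=6.5, activity="moderate"))  # ratio ~2.46 -> HER
```

Stratifying the cut-offs by activity group, as the study does, avoids misclassifying genuinely high-expenditure teenagers as over-reporters.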
NASA Astrophysics Data System (ADS)
Yamada, A.; Saitoh, N.; Nonogaki, R.; Imasu, R.; Shiomi, K.; Kuze, A.
2016-12-01
The thermal infrared (TIR) band of the Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) onboard the Greenhouse Gases Observing Satellite (GOSAT) observes CH4 profiles in the wavenumber range from 1210 cm-1 to 1360 cm-1, including the CH4 ν4 band. The current retrieval algorithm (V1.0) uses LBLRTM V12.1 with the AER V3.1 line database to calculate optical depth. LBLRTM V12.1 includes the MT_CKD 2.5.2 model to calculate continuum absorption. The continuum absorption has large uncertainty, especially in the temperature-dependent coefficient, between the BPS and MT_CKD models in the wavenumber region of 1210-1250 cm-1 (Paynter and Ramaswamy, 2014). The purpose of this study is to assess the impact of the line parameter databases and of the uncertainty in the continuum absorption on CH4 retrieval. We used the AER V1.0 database, the HITRAN2004 database, the HITRAN2008 database, the AER V3.2 database, and the HITRAN2012 database (Rothman et al. 2005, 2009, and 2013; Clough et al., 2005). The AER V1.0 database is based on HITRAN2000. The CH4 line parameters of the AER V3.1 and V3.2 databases are developed from HITRAN2008, including updates until May 2009, with line mixing parameters. We compared the retrieved CH4 with the HIPPO CH4 observations (Wofsy et al., 2012). The difference for AER V3.2 was the smallest, at 24.1 ± 45.9 ppbv. The differences for AER V1.0, HITRAN2004, HITRAN2008, and HITRAN2012 were 35.6 ± 46.5 ppbv, 37.6 ± 46.3 ppbv, 32.1 ± 46.1 ppbv, and 35.2 ± 46.0 ppbv, respectively. Comparing the AER V3.2 case to the HITRAN2008 case, the line coupling effect reduced the difference by 8.0 ppbv. Median values of the residual difference from HITRAN2008 to AER V1.0, HITRAN2004, AER V3.2, and HITRAN2012 were 0.6 K, 0.1 K, -0.08 K, and 0.08 K, respectively, while median values of the transmittance difference were less than 0.0003 and the transmittance differences showed little wavenumber dependence.
We also discuss the retrieval error arising from the uncertainty in the continuum absorption, the test of a full grid configuration for retrieval, and the retrieval results using GOSAT TIR L1B V203203, which are sample products for evaluating the next level 1B algorithm.
Panduru, Nicolae M.; Forsblom, Carol; Saraheimo, Markku; Thorn, Lena; Bierhaus, Angelika; Humpert, Per M.; Groop, Per-Henrik
2013-01-01
OBJECTIVE Diabetic nephropathy (DN) has mainly been considered a glomerular disease, although tubular dysfunction may also play a role. This study assessed the predictive value for progression of a tubular marker, urinary liver-type fatty acid–binding protein (L-FABP), at all stages of DN. RESEARCH DESIGN AND METHODS At baseline, 1,549 patients with type 1 diabetes had an albumin excretion rate (AER) within normal reference ranges, 334 had microalbuminuria, and 363 had macroalbuminuria. Patients were monitored for a median of 5.8 years (95% CI 5.7–5.9). In addition, 208 nondiabetic subjects were studied. L-FABP was measured by ELISA and normalized to urinary creatinine. Different Cox proportional hazards models for progression at every stage of DN were used to evaluate the predictive value of L-FABP. The potential benefit of using L-FABP alone or together with AER was assessed by receiver operating characteristic curve analyses. RESULTS L-FABP was an independent predictor of progression at all stages of DN. As would be expected, the areas under the receiver operating characteristic curves for the prediction of progression were significantly larger for AER than for L-FABP, except in patients with baseline macroalbuminuria, in whom the areas were similar. Adding L-FABP to AER in the models did not significantly improve risk prediction of progression in favor of the combination of L-FABP plus AER compared with AER alone. CONCLUSIONS L-FABP is an independent predictor of progression of DN irrespective of disease stage. L-FABP used alone or together with AER may not improve the risk prediction of DN progression in patients with type 1 diabetes, but further studies are needed in this regard. PMID:23378622
Robertson, Boakai K; Harden, Carol; Selvaraju, Suresh B; Pradhan, Suman; Yadav, Jagjit S
2014-01-01
Aeromonas is ubiquitous in aquatic environments and has been associated with a number of extra-gastrointestinal and gastrointestinal illnesses. This warrants monitoring of raw and processed water sources for pathogenic and toxigenic species of this human pathogen. In this study, a total of 17 different water samples [9 raw and 8 treated samples, including 4 basin water (partial sand filtration) and 4 finished water samples] were screened for Aeromonas using selective culturing and a genus-specific real-time quantitative PCR assay. The selective culturing yielded Aeromonas counts ranging from 0 to 2 × 10³ CFU/ml and 15 Aeromonas isolates from both raw and treated water samples. The qPCR analysis indicated the presence of a considerable nonculturable population (3.4 × 10¹ to 2.4 × 10⁴ cells/ml) of Aeromonas in drinking water samples. Virulence potential of the Aeromonas isolates was assessed by multiplex/singleplex PCR-based profiling of the hemolysin and enterotoxin genes, viz. the cytotoxic heat-labile enterotoxin (act), heat-labile cytotonic enterotoxin (alt), heat-stable cytotonic enterotoxin (ast), and aerolysin (aerA) genes. The water isolates yielded five distinct toxigenicity profiles, viz. act, alt, act+alt, aerA+alt, and aerA+alt+act. The alt gene showed the highest frequency of occurrence (40%), followed by the aerA (20%), act (13%), and ast (0%) genes. Taken together, the study demonstrated the occurrence of a considerable population of nonculturable Aeromonads in water and the prevalence of toxigenic Aeromonas spp. potentially pathogenic to humans. This emphasizes the importance of routine monitoring of both source and drinking water for this human pathogen and the role of the developed molecular approaches in improving the Aeromonas monitoring scheme for water. PMID:24949108
Efficacy of low-dose oral sulodexide in the management of diabetic nephropathy.
Blouza, Samira; Dakhli, Sabeur; Abid, Hafaoua; Aissaoui, Mohamed; Ardhaoui, Ilhem; Ben Abdallah, Nejib; Ben Brahim, Samir; Ben Ghorbel, Imed; Ben Salem, Nabila; Beji, Soumaya; Chamakhi, Said; Derbel, Adnene; Derouiche, Fethi; Djait, Faycal; Doghri, Taieb; Fourti, Yamina; Gharbi, Faycel; Jellouli, Kamel; Jellazi, Nabil; Kamoun, Kamel; Khedher, Adel; Letaief, Amel; Limam, Ridha; Mekaouer, Awatef; Miledi, Riadh; Nagati, Khemaies; Naouar, Meriem; Sellem, Sami; Tarzi, Hichem; Turki, Selma; Zidi, Borni; Achour, Abdellatif
2010-01-01
Diabetic nephropathy (DN) is the single greatest cause of end-stage renal disease (ESRD). Without specific interventions, microalbuminuria (incipient nephropathy) gradually progresses to macroalbuminuria (overt nephropathy) within 10-15 years in about 80% of type 1 and 30% of type 2 diabetic patients, and to ESRD within a further 20 years in about 75% and 20%, respectively. A primary alteration in DN consists of a decreased concentration of glycosaminoglycans (GAGs) in the glomerular extracellular matrix. This evidence has prompted interest in using exogenous GAGs, and specifically sulodexide, in DN treatment. In this uncontrolled multicenter study, diabetic patients with albumin excretion rate (AER) >or=30 mg/24 hours were treated with oral sulodexide 50 mg/day for 6 months, while receiving concomitant medication as required. Two hundred thirty-seven patients (54% males and 46% females, mean age 55 years, mean diabetes duration 11 years) were evaluated; 89% had type 2 and 11% type 1 diabetes mellitus, 67% microalbuminuria and 33% macroalbuminuria. AER was significantly and progressively reduced during sulodexide treatment (p<0.0001): the geometric mean after 3 and 6 months was 63.7% (95% confidence interval [95% CI], 59.3%-68.4%) and 42.7% (95% CI, 37.8%-48.2%) of baseline, respectively. The reduction was similar in type 1 and type 2 diabetes and was slightly greater in macroalbuminuric than in microalbuminuric patients. Blood pressure was slightly lowered, while fasting glucose and glycosylated hemoglobin were moderately reduced. Adverse effects were observed in 5.5% of patients, including gastrointestinal effects in 3.8%. Sulodexide therapy was shown to reduce AER in patients with DN.
Atmospheric Science Data Center
2018-04-23
DSCOVR_EPIC_L2_AER_01 The Aerosol UV product provides aerosol and UV products in three tiers. Tier 1 products include Absorbing Aerosol Index (AAI) and above-cloud-aerosol optical depth (ACAOD). Tier 2 ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiferlein, Katherine E.
The Annual Energy Review (AER) presents the Energy Information Administration’s historical energy statistics. For many series, statistics are given for every year from 1949 through 2002. The statistics, expressed in either physical units or British thermal units, cover all major energy activities, including consumption, production, trade, stocks, and prices, for all major energy commodities, including fossil fuels, electricity, and renewable energy sources. Publication of this report is required under Public Law 95–91 (Department of Energy Organization Act), Section 205(c), and is in keeping with responsibilities given to the Energy Information Administration (EIA) under Section 205(a)(2), which states: “The Administrator shall be responsible for carrying out a central, comprehensive, and unified energy data and information program which will collect, evaluate, assemble, analyze, and disseminate data and information....” The AER is intended for use by Members of Congress, Federal and State agencies, energy analysts, and the general public. EIA welcomes suggestions from readers regarding data series in the AER and in other EIA publications. Related Publication: Readers of the AER may also be interested in EIA’s Monthly Energy Review, which presents monthly updates of many of the data in the AER. Contact our National Energy Information Center for more information.
Peng, Hui; Ma, Guofu; Sun, Kanjun; Mu, Jingjing; Zhang, Zhe; Lei, Ziqiang
2014-12-10
Two-dimensional mesoporous carbon nanosheets (CNSs) have been prepared via a simultaneous activation and catalytic carbonization route using macroporous anion-exchange resin (AER) as carbon precursor and ZnCl2 and FeCl3 as activating agent and catalyst, respectively. The iron catalyst in the skeleton of the AER may lead to carburization, forming a sheetlike structure during the carbonization process. The obtained CNSs have a large number of mesopores, a maximum specific surface area of 1764.9 m² g⁻¹, and a large pore volume of 1.38 cm³ g⁻¹. As an electrode material for supercapacitor applications, the CNSs electrode possesses a large specific capacitance of 283 F g⁻¹ at 0.5 A g⁻¹ and excellent rate capability (64% retention even at 50 A g⁻¹) in 6 mol L⁻¹ KOH. Furthermore, a CNSs symmetric supercapacitor exhibits a specific energy of 17.2 W h kg⁻¹ at a power density of 224 W kg⁻¹ operated in the voltage range of 0-1.8 V in 0.5 mol L⁻¹ Na2SO4 aqueous electrolyte, and outstanding cyclability (retaining about 96% of initial capacitance after 5000 cycles).
A quasi-experimental study of after-event reviews and leadership development.
Derue, D Scott; Nahrgang, Jennifer D; Hollenbeck, John R; Workman, Kristina
2012-09-01
We examine how structured reflection through after-event reviews (AERs) promotes experience-based leadership development and how people's prior experiences and personality attributes influence the impact of AERs on leadership development. We test our hypotheses in a time-lagged, quasi-experimental study that followed 173 research participants for 9 months and across 4 distinct developmental experiences. Findings indicate that AERs have a positive effect on leadership development, and this effect is accentuated when people are conscientious, open to experience, and emotionally stable and have a rich base of prior developmental experiences.
Experimental evaluation of the power balance model of speed skating.
de Koning, Jos J; Foster, Carl; Lampen, Joanne; Hettinga, Floor; Bobbert, Maarten F
2005-01-01
Prediction of speed skating performance with a power balance model requires assumptions about the kinetics of energy production, skating efficiency, and skating technique. The purpose of this study was to evaluate these parameters during competitive imitations for the purpose of improving model predictions. Elite speed skaters (n = 8) performed races and submaximal efficiency tests. External power output (Po) was calculated from movement analysis, aerodynamic models, and ice friction measurements. Aerobic kinetics was calculated from breath-by-breath oxygen uptake (VO2). Aerobic power (Paer) was calculated from measured skating efficiency. Anaerobic power (Pan) kinetics was determined by subtracting Paer from Po. We found gross skating efficiency to be 15.8% (1.8%). In the 1,500-m event, the kinetics of Pan was characterized by a first-order system as Pan = 88 + 556e^(-0.0494t) (in W, where t is time). The rate constant for the increase in Paer was -0.153 s⁻¹, the time delay was 8.7 s, and the peak Paer was 234 W; Paer was equal to 234[1 - e^(-0.153(t-8.7))] (in W). Skating position changed with preextension knee angle increasing and trunk angle decreasing throughout the event. We concluded the pattern of Paer to be quite similar to that reported during other competitive imitations, with the exception that the increase in Paer was more rapid. The pattern of Pan does not appear to fit an "all-out" pattern, with near zero values during the last portion of the event, as assumed in our previous model (De Koning JJ, de Groot G, and van Ingen Schenau GJ. J Biomech 25: 573-580, 1992). Skating position changed in ways different from those assumed in our previous model. In addition to allowing improved predictions, the results demonstrate the importance of observations in unique subjects to the process of model construction.
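The two fitted power equations above can be evaluated directly. A minimal sketch in Python, using the abstract's fitted constants; treating Paer as zero before the 8.7-s time delay is an assumption, since the abstract does not state its value there:

```python
import math

def p_an(t):
    """Anaerobic power (W): first-order decay fitted for the 1,500-m event."""
    return 88 + 556 * math.exp(-0.0494 * t)

def p_aer(t, t_delay=8.7, rate=0.153, peak=234):
    """Aerobic power (W): mono-exponential rise toward peak after a time delay."""
    if t <= t_delay:
        return 0.0  # assumption: no modeled aerobic contribution before the delay
    return peak * (1 - math.exp(-rate * (t - t_delay)))

def p_total(t):
    """Total external power output as the sum of the two energy sources."""
    return p_an(t) + p_aer(t)
```

At the start of the race (t = 0) the model gives the anaerobic term alone, 88 + 556 = 644 W, which then decays while the aerobic term rises toward its 234-W plateau.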
Combined anaerobic and aerobic digestion for increased solids reduction and nitrogen removal.
Novak, John T; Banjade, Sarita; Murthy, Sudhir N
2011-01-01
A unique sludge digestion system consisting of anaerobic digestion followed by aerobic digestion and then a recycle step where thickened sludge from the aerobic digester was recirculated back to the anaerobic unit was studied to determine the impact on volatile solids (VS) reduction and nitrogen removal. It was found that the combined anaerobic/aerobic/anaerobic (ANA/AER/ANA) system provided 70% VS reduction compared to 50% for conventional mesophilic anaerobic digestion with a 20 day SRT and 62% for combined anaerobic/aerobic (ANA/AER) digestion with a 15 day anaerobic and a 5 day aerobic SRT. Total Kjeldahl nitrogen (TKN) removal for the ANA/AER/ANA system was 70% for sludge wasted from the aerobic unit and 43.7% when wasted from the anaerobic unit. TKN removal was 64.5% for the ANA/AER system. Copyright © 2010 Elsevier Ltd. All rights reserved.
Edwards, Jessica C.; Johnson, Mark S.; Taylor, Barry L.
2007-01-01
SUMMARY Aerotaxis (oxygen-seeking) behavior in Escherichia coli is a response to changes in the electron transport system and not oxygen per se. Because changes in proton motive force (PMF) are coupled to respiratory electron transport, it is difficult to differentiate between PMF, electron transport, and redox, all primary candidates for the signal sensed by the aerotaxis receptors, Aer and Tsr. We constructed electron transport mutants that produced different respiratory H⁺/e⁻ stoichiometries. These strains expressed binary combinations of one NADH dehydrogenase and one quinol oxidase. We then introduced either an aer or tsr mutation into each mutant to create two sets of electron transport mutants. In vivo H⁺/e⁻ ratios for strains grown in glycerol medium ranged from 1.46 ± 0.18 to 3.04 ± 0.47, but rates of respiration and growth were similar. The PMF jump in response to oxygen was proportional to the H⁺/e⁻ ratio in each set of mutants (r² = 0.986 to 0.996). The length of Tsr-mediated aerotaxis responses increased with the PMF jump (r² = 0.988), but Aer-mediated responses did not correlate with either PMF changes (r² = 0.297) or the rate of electron transport (r² = 0.066). Aer-mediated responses were linked to NADH dehydrogenase I, although there was no absolute requirement. The data indicate that Tsr responds to changes in PMF, but strong Aer responses to oxygen are associated with redox changes in NADH dehydrogenase I. PMID:16995896
Landry, Kelly A; Sun, Peizhe; Huang, Ching-Hua; Boyer, Treavor H
2015-01-01
This research advances the knowledge of ion-exchange of four non-steroidal anti-inflammatory drugs (NSAIDs) - diclofenac (DCF), ibuprofen (IBP), ketoprofen (KTP), and naproxen (NPX) - and one analgesic drug, paracetamol (PCM), by strong-base anion exchange resin (AER) in synthetic ureolyzed urine. Freundlich, Langmuir, Dubinin-Astakhov, and Dubinin-Radushkevich isotherm models were fit to experimental equilibrium data using a nonlinear least-squares method. Favorable ion-exchange was observed for DCF, KTP, and NPX, whereas unfavorable ion-exchange was observed for IBP and PCM. The ion-exchange selectivity of the AER was enhanced by van der Waals interactions between the pharmaceutical and the AER as well as by the hydrophobicity of the pharmaceutical. For instance, the high selectivity of the AER for DCF was due to the combination of Coulombic interactions between the quaternary ammonium functional group of the resin and the carboxylate functional group of DCF, van der Waals interactions between the polystyrene resin matrix and the benzene rings of DCF, and possibly hydrogen bonding between the dimethylethanol amine functional group side chain and the carboxylate and amine functional groups of DCF. Based on analysis of covariance, the presence of multiple pharmaceuticals did not have a significant effect on ion-exchange removal when the NSAIDs were combined in solution. The AER reached saturation of the pharmaceuticals in a continuous-flow column at varying bed volumes, following the decreasing order DCF > NPX ≈ KTP > IBP. Complete regeneration of the column was achieved using a 5% (m/m) NaCl, equal-volume water-methanol solution. Results from multiple treatment and regeneration cycles provide insight into the practical application of pharmaceutical ion-exchange in ureolyzed urine using AER.
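As an illustration of the isotherm-fitting step, the Freundlich model q = K_F · C^(1/n) can be fit by log-linearization and ordinary least squares. This is only a sketch on hypothetical synthetic data; the study fit this and the other models by direct nonlinear least squares, which weights errors differently:

```python
import math

def fit_freundlich(c, q):
    """Fit q = K_F * C**(1/n) by linearizing to
    log q = log K_F + (1/n) * log C, then ordinary least squares."""
    xs = [math.log(ci) for ci in c]
    ys = [math.log(qi) for qi in q]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), 1.0 / slope  # K_F and n

# hypothetical equilibrium data generated from K_F = 2.0, n = 2.5
c = [0.5, 1.0, 2.0, 5.0, 10.0]          # equilibrium concentrations
q = [2.0 * ci ** 0.4 for ci in c]        # sorbed amounts
kf, n_f = fit_freundlich(c, q)           # recovers K_F ≈ 2.0, n ≈ 2.5
```

With noise-free data the linearized fit recovers the generating parameters exactly; with real equilibrium data the two fitting approaches can give noticeably different parameter estimates.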
The Involvement of Lipid Peroxide-Derived Aldehydes in Aluminum Toxicity of Tobacco Roots
Yin, Lina; Mano, Jun'ichi; Wang, Shiwen; Tsuji, Wataru; Tanaka, Kiyoshi
2010-01-01
Oxidative injury of the root elongation zone is a primary event in aluminum (Al) toxicity in plants, but the injuring species remain unidentified. We verified the hypothesis that lipid peroxide-derived aldehydes, especially highly electrophilic α,β-unsaturated aldehydes (2-alkenals), participate in Al toxicity. Transgenic tobacco (Nicotiana tabacum) overexpressing Arabidopsis (Arabidopsis thaliana) 2-alkenal reductase (AER-OE plants), wild-type SR1, and an empty vector-transformed control line (SR-Vec) were exposed to AlCl3 on their roots. Compared with the two controls, AER-OE plants suffered less retardation of root elongation under AlCl3 treatment and showed more rapid regrowth of roots upon Al removal. Under AlCl3 treatment, the roots of AER-OE plants accumulated Al and H2O2 to the same levels as did the sensitive controls, while they accumulated lower levels of aldehydes and suffered less cell death than SR1 and SR-Vec roots. In SR1 roots, AlCl3 treatment markedly increased the contents of the highly reactive 2-alkenals acrolein, 4-hydroxy-(E)-2-hexenal, and 4-hydroxy-(E)-2-nonenal and of other aldehydes such as malondialdehyde and formaldehyde. In AER-OE roots, accumulation of these aldehydes was significantly less. Growth of the roots exposed to 4-hydroxy-(E)-2-nonenal and (E)-2-hexenal was retarded more in SR1 than in AER-OE plants. Thus, the lipid peroxide-derived aldehydes, formed downstream of reactive oxygen species, injured root cells directly. Their suppression by AER provides a new defense mechanism against Al toxicity. PMID:20023145
2011-01-01
Background Reprocessing of endoscopes generally requires labour-intensive manual cleaning followed by high-level disinfection in an automated endoscope reprocessor (AER). EVOTECH Endoscope Cleaner and Reprocessor (ECR) is approved for fully automated cleaning and disinfection whereas AERs require manual cleaning prior to the high-level disinfection procedure. The purpose of this economic evaluation was to determine the cost-efficiency of the ECR versus AER methods of endoscopy reprocessing in an actual practice setting. Methods A time and motion study was conducted at a Canadian hospital to collect data on the personnel resources and consumable supplies costs associated with the use of EVOTECH ECR versus manual cleaning followed by AER with Medivators DSD-201. Reprocessing of all endoscopes was observed and timed for both reprocessor types over three days. Laboratory staff members were interviewed regarding the consumption and cost of all disposable supplies and equipment. Exact Wilcoxon rank sum test was used for assessing differences in total cycle reprocessing time. Results Endoscope reprocessing was significantly shorter with the ECR than with manual cleaning followed by AER. The differences in median time were 12.46 minutes per colonoscope (p < 0.0001), 6.31 minutes per gastroscope (p < 0.0001), and 5.66 minutes per bronchoscope (p = 0.0040). Almost 2 hours of direct labour time was saved daily with the ECR. The total per cycle cost of consumables and labour for maintenance was slightly higher for EVOTECH ECR versus manual cleaning followed by AER ($8.91 versus $8.31, respectively). Including the cost of direct labour time consumed in reprocessing scopes, the per cycle and annual costs of using the EVOTECH ECR was less than the cost of manual cleaning followed by AER disinfection ($11.50 versus $11.88). Conclusions The EVOTECH ECR was more efficient and less costly to use for the reprocessing of endoscopes than manual cleaning followed by AER disinfection. 
Although the cost of consumable supplies required to reprocess endoscopes with EVOTECH ECR was slightly higher, the value of the labour time saved with EVOTECH ECR more than offset the additional consumables cost. The increased efficiency with EVOTECH ECR could lead to even further cost-savings by shifting endoscopy laboratory personnel responsibilities but further study is required. PMID:21967345
Forte, Lindy; Shum, Cynthia
2011-10-03
Incipient and overt diabetic nephropathy in African Americans with NIDDM.
Dasmahapatra, A; Bale, A; Raghuwanshi, M P; Reddi, A; Byrne, W; Suarez, S; Nash, F; Varagiannis, E; Skurnick, J H
1994-04-01
OBJECTIVE--To determine the prevalence of incipient and overt nephropathy in African-American subjects with non-insulin-dependent diabetes mellitus (NIDDM) attending a hospital clinic. Contributory factors, such as blood pressure (BP), duration and age at onset of diabetes, hyperglycemia, hyperlipidemia, and body mass index (BMI), also were evaluated. RESEARCH DESIGN AND METHODS--We recruited 116 African-American subjects with NIDDM for this cross-sectional, descriptive, and analytical study. BP, BMI, 24-h urine albumin excretion, creatinine clearance, serum creatinine, lipids, and GHb levels were measured. Albumin excretion rate (AER) was calculated, and subjects were divided into three groups: no nephropathy (AER < 20 micrograms/min), incipient nephropathy (AER 20-200 micrograms/min), and overt nephropathy (AER > 200 micrograms/min). Frequency of hypertension and nephropathy was analyzed by χ² testing, group means were compared using analysis of variance, and linear correlations were performed between AER and other variables. Multiple regression analysis was used to examine the association of these variables while controlling for the effects of other variables. RESULTS--Increased AER was present in 50% of our subjects; 31% had incipient and 19% had overt nephropathy. Hypertension was present in 72.4%; nephropathy, particularly overt nephropathy, was significantly more prevalent in the hypertensive group. Mean BP and diastolic blood pressure (dBP) were higher in the groups with incipient and overt nephropathy, and systolic blood pressure (sBP) was increased in overt nephropathy. Men with either form of nephropathy had higher sBP, dBP, and mean BP, whereas only women with overt nephropathy had increased sBP and mean BP. Subjects with incipient or overt nephropathy had a longer duration of diabetes, and those with overt nephropathy had a younger age at onset of diabetes.
By multiple regression analysis, AER correlated with younger age at diabetes onset, but not with diabetes duration. No correlation with age, lipid levels, or GHb was noted. BMI correlated with AER. CONCLUSIONS--Incipient and overt nephropathy were observed frequently in these African-American subjects with NIDDM. Albuminuria correlated with BP, younger age at diabetes onset, and BMI. Association of albuminuria and increased cardiovascular mortality may place 50% of inner-city African-American patients with NIDDM at risk for developing cardiovascular complications.
Structure of CARB-4 and AER-1 Carbenicillin-Hydrolyzing β-Lactamases
Sanschagrin, François; Bejaoui, Noureddine; Levesque, Roger C.
1998-01-01
We determined the nucleotide sequences of blaCARB-4 encoding CARB-4 and deduced a polypeptide of 288 amino acids. The gene was characterized as a variant of group 2c carbenicillin-hydrolyzing β-lactamases such as PSE-4, PSE-1, and CARB-3. The level of DNA homology between the bla genes for these β-lactamases varied from 98.7 to 99.9%, while that between these genes and blaCARB-4 encoding CARB-4 was 86.3%. The blaCARB-4 gene was acquired from some other source because it has a G+C content of 39.1%, compared to a G+C content of 67% for typical Pseudomonas aeruginosa genes. DNA sequencing revealed that blaAER-1 shared 60.8% DNA identity with blaPSE-3 encoding PSE-3. The deduced AER-1 β-lactamase peptide was compared to class A, B, C, and D enzymes and had 57.6% identity with PSE-3, including an STHK tetrad at the active site. For CARB-4 and AER-1, conserved canonical amino acid boxes typical of class A β-lactamases were identified in a multiple alignment. Analysis of the DNA sequences flanking blaCARB-4 and blaAER-1 confirmed the importance of gene cassettes acquired via integrons in bla gene distribution. PMID:9687391
Zamarreno-Ramos, C; Linares-Barranco, A; Serrano-Gotarredona, T; Linares-Barranco, B
2013-02-01
This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that depending on traffic conditions and network topologies either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA RocketIO interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10³ neurons and almost 32 million synapses.
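A router that forwards events by destination module label can be sketched as dimension-order (X-then-Y) routing on the 2D mesh. This policy is an assumption for illustration only; the paper's destination- and source-label routers may apply different rules depending on traffic and topology:

```python
def route(current, dest):
    """Return the output port ('east'/'west'/'north'/'south') toward the
    destination module label, or 'local' when the event has arrived.
    Routes fully in X before Y (dimension-order routing)."""
    cx, cy = current  # this module's (x, y) label in the mesh
    dx, dy = dest     # destination module label carried by the address event
    if dx > cx:
        return "east"
    if dx < cx:
        return "west"
    if dy > cy:
        return "north"
    if dy < cy:
        return "south"
    return "local"
```

Because each hop strictly reduces the remaining X or Y distance, an event reaches its destination module in a bounded number of hops regardless of where it was injected.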
Indoor environmental quality in French dwellings and building characteristics
NASA Astrophysics Data System (ADS)
Langer, Sarka; Ramalho, Olivier; Derbez, Mickaël; Ribéron, Jacques; Kirchner, Severine; Mandin, Corinne
2016-03-01
A national survey on indoor environmental quality covering 567 residences in mainland France was performed during 2003-2005. The measured parameters were temperature, relative humidity, CO2, and the indoor air pollutants: fourteen individual volatile organic compounds (VOC), four aldehydes and particulate matter PM10 and PM2.5. The measured indoor concentrations were analyzed for correlations with the building characteristics: type of dwelling, period of construction, dwelling location, type of ventilation system, building material, attached garage and retrofitting. The median night time air exchange rate (AER) for all dwellings was 0.44 h⁻¹. The night time AER was higher in apartments (median = 0.49 h⁻¹) than in single-family houses (median = 0.41 h⁻¹). Concentration of formaldehyde was approximately 30% higher in dwellings built after 1990 compared with older ones; it was higher in dwellings with mechanical ventilation and in concrete buildings. The VOC concentrations depended on the building characteristics to various extents. The sampling season influenced the majority of the indoor climate parameters and the concentrations of the air pollutants to a higher degree than the building characteristics. Multivariate linear regression models revealed that the indoor-outdoor difference in specific humidity, a proxy for number of occupants and their indoor activities, remained a significant predictor for most gaseous and particulate air pollutants. The other strong predictors were outdoor concentration, smoking, attached garage and AER (in descending order).
Ekinci, Elif I; Thomas, Georgina; Thomas, David; Johnson, Cameron; Macisaac, Richard J; Houlihan, Christine A; Finch, Sue; Panagiotopoulos, Sianna; O'Callaghan, Chris; Jerums, George
2009-08-01
OBJECTIVE This prospective randomized double-blind placebo-controlled crossover study examined the effects of sodium chloride (NaCl) supplementation on the antialbuminuric action of telmisartan with or without hydrochlorothiazide (HCT) in hypertensive patients with type 2 diabetes, increased albumin excretion rate (AER), and habitual low dietary salt intake (LDS; <100 mmol sodium/24 h on two of three consecutive occasions) or high dietary salt intake (HDS; >200 mmol sodium/24 h on two of three consecutive occasions). RESEARCH DESIGN AND METHODS Following a washout period, subjects (n = 32) received 40 mg/day telmisartan for 4 weeks followed by 40 mg telmisartan plus 12.5 mg/day HCT for 4 weeks. For the last 2 weeks of each treatment period, patients received either 100 mmol/day NaCl or placebo capsules. After a second washout, the regimen was repeated with supplements in reverse order. AER and ambulatory blood pressure were measured at weeks 0, 4, 8, 14, 18, and 22. RESULTS In LDS, NaCl supplementation reduced the anti-albuminuric effect of telmisartan with or without HCT from 42.3% (placebo) to 9.5% (P = 0.004). By contrast, in HDS, NaCl supplementation did not reduce the AER response to telmisartan with or without HCT (placebo 30.9%, NaCl 28.1%, P = 0.7). Changes in AER were independent of changes in blood pressure. CONCLUSIONS The AER response to telmisartan with or without HCT under habitual low salt intake can be blunted by NaCl supplementation. By contrast, when there is already a suppressed renin angiotensin aldosterone system under habitual high dietary salt intake, the additional NaCl does not alter the AER response.
Burrows, Jill E.; Cravotta, Charles A.; Peters, Stephen C.
2017-01-01
Net-alkaline, anoxic coal-mine drainage containing ∼20 mg/L FeII and ∼0.05 mg/L Al and Zn was subjected to parallel batch experiments: control, aeration (Aer 1, 12.6 mL/s; Aer 2, 16.8 mL/s; Aer 3, 25.0 mL/s), and hydrogen peroxide (H2O2), to test the hypothesis that aeration increases pH, FeII oxidation, hydrous FeIII oxide (HFO) formation, and trace-metal removal through adsorption and coprecipitation with HFO. During 5.5-hr field experiments, pH increased from 6.4 to 6.7, 7.1, 7.6, and 8.1 for the control, Aer 1, Aer 2, and Aer 3, respectively, but decreased to 6.3 for the H2O2 treatment. Aeration accelerated removal of dissolved CO2, Fe, Al, and Zn. In Aer 3, dissolved Al was completely removed within 1 h, but increased to ∼20% of the initial concentration after 2.5 h when pH exceeded 7.5. H2O2 promoted rapid removal of all dissolved Fe and Al, and 13% of dissolved Zn. Kinetic modeling with PHREEQC simulated effects of aeration on pH, CO2, Fe, Zn, and Al. Aeration enhanced Zn adsorption by increasing pH and HFO formation while decreasing aqueous CO2 available to form ZnCO3(0) and Zn(CO3)2(2−) at high pH. Al concentrations were inconsistent with solubility control by Al minerals or Al-containing HFO, but could be simulated by adsorption on HFO at pH < 7.5 and desorption at higher pH where Al(OH)4− was predominant. Thus, aeration or chemical oxidation with pH adjustment to ∼7.5 could be effective for treating high-Fe and moderate-Zn concentrations, whereas chemical oxidation without pH adjustment may be effective for treating high-Fe and moderate-Al concentrations.
Res-E Support Policies in the Baltic States: Electricity Price Aspect (Part II)
NASA Astrophysics Data System (ADS)
Bobinaite, V.; Priedite, I.
2015-04-01
Increasing volumes of electricity derived from renewable energy sources (RES-E) affect the electricity market prices and the prices for final electricity consumers in the Baltic States. The results of a multivariate regression analysis show that in 2013 the RES-E contributed to decreasing the electricity market prices in the Baltic States. However, the final electricity consumers pay for the promotion of RES-E through the approved RES-E component, which has a tendency to increase. It is estimated that in 2013 net benefits from wind electricity promotion were achieved in Lithuania and Latvia, while a net cost was incurred in Estonia. This suggests that the economic efficiency of the wind electricity support scheme based on the application of feed-in tariffs was higher than that based on the feed-in premium.
Penno, G; Chaturvedi, N; Talmud, P J; Cotroneo, P; Manto, A; Nannipieri, M; Luong, L A; Fuller, J H
1998-09-01
We examined whether the ACE gene insertion/deletion (I/D) polymorphism modulates renal disease progression in IDDM and how ACE inhibitors influence this relationship. The EURODIAB Controlled Trial of Lisinopril in IDDM is a multicenter randomized placebo-controlled trial in 530 nonhypertensive, mainly normoalbuminuric IDDM patients aged 20-59 years. Albumin excretion rate (AER) was measured every 6 months for 2 years. Genotype distribution was 15% II, 58% ID, and 27% DD. Between genotypes, there were no differences in baseline characteristics or in changes in blood pressure and glycemic control throughout the trial. There was a significant interaction between the II and DD genotype groups and treatment on change in AER (P = 0.05). Patients with the II genotype showed the fastest rate of AER progression on placebo but had an enhanced response to lisinopril. AER at 2 years (adjusted for baseline AER) was 51.3% lower on lisinopril than placebo in the II genotype patients (95% CI, 15.7 to 71.8; P = 0.01), 14.8% in the ID group (-7.8 to 32.7; P = 0.2), and 7.7% in the DD group (-36.6 to 37.6; P = 0.7). Absolute differences in AER between placebo and lisinopril at 2 years were 8.1, 1.7, and 0.8 microg/min in the II, ID, and DD groups, respectively. The significant beneficial effect of lisinopril on AER in the II group persisted when adjusted for center, blood pressure, and glycemic control, and also for diastolic blood pressure at 1 month into the study. Progression from normoalbuminuria to microalbuminuria (lisinopril versus placebo) was 0.27 (0.03-2.26; P = 0.2) in the II group, and 1.30 (0.33-5.17; P = 0.7) in the DD group (P = 0.6 for interaction). Knowledge of ACE genotype may be of value in determining the likely impact of ACE inhibitor treatment.
Impact of line parameter database and continuum absorption on GOSAT TIR methane retrieval
NASA Astrophysics Data System (ADS)
Yamada, A.; Saitoh, N.; Nonogaki, R.; Imasu, R.; Shiomi, K.; Kuze, A.
2017-12-01
The current methane retrieval algorithm (V1), covering the wavenumber range 1210 cm-1 to 1360 cm-1 including the CH4 ν4 band from the thermal infrared (TIR) band of the Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) onboard the Greenhouse Gases Observing Satellite (GOSAT), uses LBLRTM V12.1 with the AER V3.1 line database and the MT_CKD 2.5.2 continuum absorption model to calculate optical depth. Since line parameter databases have been updated and the continuum absorption may have large uncertainty, the purpose of this study is to assess the impact on CH4 retrieval of the choice of line parameter database and of the uncertainty of continuum absorption. We retrieved CH4 profiles with the line parameter database replaced by AER V1.0, HITRAN 2004, HITRAN 2008, AER V3.2, or HITRAN 2012 (Rothman et al., 2005, 2009, 2013; Clough et al., 2005). We also assumed 10% larger continuum absorption coefficients and a 50% larger temperature-dependence coefficient of continuum absorption, based on the report by Paynter and Ramaswamy (2014). We compared the retrieved CH4 with the HIPPO CH4 observations (Wofsy et al., 2012). The difference from the HIPPO observations was smallest for AER V3.2, at 24.1 ± 45.9 ppbv. The differences for AER V1.0, HITRAN 2004, HITRAN 2008, and HITRAN 2012 were 35.6 ± 46.5 ppbv, 37.6 ± 46.3 ppbv, 32.1 ± 46.1 ppbv, and 35.2 ± 46.0 ppbv, respectively. The maximum CH4 retrieval difference was -0.4 ppbv at the 314 hPa layer when we used 10% larger absorption coefficients for the H2O foreign continuum. Comparing the AER V3.2 case to the HITRAN 2008 case, the line coupling effect reduced the difference by 8.0 ppbv. Line coupling effects were important for GOSAT TIR CH4 retrieval, whereas effects from the uncertainty of continuum absorption were negligibly small.
NASA Astrophysics Data System (ADS)
Meng, Qing Yu; Spector, Dalia; Colome, Steven; Turpin, Barbara
2009-12-01
Effects of physical/environmental factors on fine particle (PM2.5) exposure, outdoor-to-indoor transport, and air exchange rate (AER) were examined. The fraction of ambient PM2.5 found indoors (F_INF) and the fraction to which people are exposed (α) modify personal exposure to ambient PM2.5. Because F_INF, α, and AER are infrequently measured, some have used air conditioning (AC) as a modifier of ambient PM2.5 exposure. We found no single variable that was a good predictor of AER. About 50% and 40% of the variation in F_INF and α, respectively, was explained by AER and other activity variables. AER alone explained 36% and 24% of the variation in F_INF and α, respectively. Every other predictor, including central AC operation, accounted for less than 4% of the variation. This highlights the importance of AER measurements for predicting F_INF and α. The evidence presented suggests that outdoor temperature and home ventilation features affect particle losses as well as AER, and that the effects differ. Total personal exposures to PM2.5 mass/species were reconstructed using personal-activity and microenvironmental methods, and compared to direct personal measurement. Outdoor concentration was the dominant predictor of (partial R² = 30-70%) and the largest contributor to (20-90%) indoor and personal exposures for PM2.5 mass and most species. Several activities had a dramatic impact on personal PM2.5 mass/species exposures for the few study participants exposed to or engaged in them, including smoking and woodworking. Incorporating personal activities (in addition to outdoor PM2.5) improved the predictive power of the personal-activity model for PM2.5 mass/species; more detailed information about personal activities and indoor sources is needed for further improvement (especially for Ca, K, OC).
Adequate accounting for particle penetration and persistence indoors and for exposure to non-ambient sources could potentially increase the power of epidemiological analyses linking health effects to particulate exposures.
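The dependence of the infiltration fraction on AER reported above follows from a steady-state indoor mass balance. A minimal sketch, with illustrative (not study-derived) penetration and deposition values:

```python
def infiltration_factor(aer, penetration=1.0, deposition=0.4):
    # Steady-state mass balance: dC_in/dt = P*a*C_out - (a + k)*C_in = 0
    # gives F_INF = P*a / (a + k); P and k (1/h) are illustrative values.
    return penetration * aer / (aer + deposition)

# F_INF rises with AER: a leakier home admits more ambient PM2.5
for a in (0.25, 0.5, 1.0, 2.0):
    print(a, round(infiltration_factor(a), 2))
```

This simple form also shows why AER alone explains a large share of the variation in F_INF: for fixed particle properties, F_INF is a monotone function of AER.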
Marcus, Robin L; Smith, Sheldon; Morrell, Glen; Addison, Odessa; Dibble, Leland E; Wahoff-Stice, Donna; LaStayo, Paul C
2008-01-01
Background and Purpose: The purpose of this study was to compare the outcomes between a diabetes exercise training program using combined aerobic and high-force eccentric resistance exercise and a program of aerobic exercise only. Subjects and Methods: Fifteen participants with type 2 diabetes mellitus (T2DM) participated in a 16-week supervised exercise training program: 7 (mean age=50.7 years, SD=6.9) in a combined aerobic and eccentric resistance exercise program (AE/RE group) and 8 (mean age=58.5 years, SD=6.2) in a program of aerobic exercise only (AE group). Outcome measures included thigh lean tissue and intramuscular fat (IMF), glycosylated hemoglobin, body mass index (BMI), and 6-minute walk distance. Results: Both groups experienced decreases in mean glycosylated hemoglobin after training (AE/RE group: −0.59% [95% confidence interval (CI)=−1.5 to 0.28]; AE group: −0.31% [95% CI=−0.60 to −0.03]), with no significant between-group differences. There was an interaction between group and time with respect to change in thigh lean tissue cross-sectional area, with the AE/RE group gaining more lean tissue (AE/RE group: 15.1 cm2 [95% CI=7.6 to 22.5]; AE group: −5.6 cm2 [95% CI=−10.4 to 0.76]). Both groups experienced decreases in mean thigh IMF cross-sectional area (AE/RE group: −1.2 cm2 [95% CI=−2.6 to 0.26]; AE group: −2.2 cm2 [95% CI=−3.5 to −0.84]) and increases in 6-minute walk distance (AE/RE group: 45.5 m [95% CI=7.5 to 83.6]; AE group: 29.9 m [95% CI=−7.7 to 67.5]) after training, with no between-group differences. There was an interaction between group and time with respect to change in BMI, with the AE/RE group experiencing a greater decrease in BMI. Discussion and Conclusion: Significant improvements in long-term glycemic control, thigh composition, and physical performance were demonstrated in both groups after participating in a 16-week exercise program. Subjects in the AE/RE group demonstrated additional improvements in thigh lean tissue and BMI. 
Improvements in thigh lean tissue may be important in this population as a means to increase resting metabolic rate, protein reserve, exercise tolerance, and functional mobility. PMID:18801851
Shirazi, Elham; Pennell, Kelly G
2017-12-13
Vapor intrusion (VI) exposure risks are difficult to characterize due to the role of atmospheric, building and subsurface processes. This study presents a three-dimensional VI model that extends the common subsurface fate and transport equations to incorporate wind and stack effects on indoor air pressure, building air exchange rate (AER) and indoor contaminant concentration to improve VI exposure risk estimates. The model incorporates three modeling programs: (1) COMSOL Multiphysics to model subsurface fate and transport processes, (2) CFD0 to model atmospheric air flow around the building, and (3) CONTAM to model indoor air quality. The combined VI model predicts AER values, zonal indoor air pressures and zonal indoor air contaminant concentrations as a function of wind speed, wind direction and outdoor and indoor temperature. Steady state modeling results for a single-story building with a basement demonstrate that wind speed, wind direction and opening locations in a building play important roles in changing the AER, indoor air pressure, and indoor air contaminant concentration. Calculated indoor air pressures ranged from approximately -10 Pa to +4 Pa depending on weather conditions and building characteristics. AER values, mass entry rates and indoor air concentrations vary depending on weather conditions and building characteristics. The presented modeling approach can be used to investigate the relationship between building features, AER, building pressures, soil gas concentrations, indoor air concentrations and VI exposure risks.
Review of Air Exchange Rate Models for Air Pollution Exposure Assessments
A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings, where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pol...
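AER is commonly measured by tracer-gas decay. A minimal sketch of that estimate (an illustration of the standard technique, not a method taken from the review itself):

```python
import math

def aer_from_decay(c0, ct, hours):
    # Tracer-gas decay: C(t) = C0 * exp(-a*t)  =>  a = ln(C0/Ct) / t
    # c0, ct are tracer concentrations at the start and after `hours`.
    return math.log(c0 / ct) / hours

# tracer concentration halves in one hour -> AER = ln(2) per hour
print(round(aer_from_decay(100.0, 50.0, 1.0), 3))  # 0.693
```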
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-19
... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] AER Energy Resources, Inc.; Alto Group Holdings, Inc.; Bizrocket.Com Inc.; Fox Petroleum, Inc.; Geopulse Explorations Inc.; Global Technologies... accuracy of press releases concerning the company's revenues. 4. Fox Petroleum, Inc. is a Nevada...
75 FR 66195 - Schedules of Controlled Substances: Placement of Propofol Into Schedule IV
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... published abuse liability studies of propofol in humans in which the reinforcement and reward effects have... reporting by the subject feeling ``high,'' relative to the placebo. The motivation for abuse of propofol is... Reporting System (AERS) DataMart database). In the AERS database, there are reports of propofol diversion...
Linking In-Vehicle Ultrafine Particle Exposures to On-Road Concentrations
Hudda, Neelakshi; Eckel, Sandrah P.; Knibbs, Luke D.; Sioutas, Constantinos; Delfino, Ralph J.; Fruin, Scott A.
2013-01-01
For traffic-related pollutants like ultrafine particles (UFP, Dp < 100 nm), a significant fraction of overall exposure occurs within or close to the transit microenvironment. Therefore, understanding exposure to these pollutants in such microenvironments is crucial to accurately assessing overall UFP exposure. The aim of this study was to develop models for predicting in-cabin UFP concentrations if roadway concentrations are known, taking into account vehicle characteristics, ventilation settings, driving conditions and air exchange rates (AER). Particle concentrations and AER were measured in 43 and 73 vehicles, respectively, under various ventilation settings and driving speeds. Multiple linear regression (MLR) and generalized estimating equation (GEE) regression models were used to identify and quantify the factors that determine inside-to-outside (I/O) UFP ratios and AERs across a full range of vehicle types and ages. AER was the most significant determinant of UFP I/O ratios, and was strongly influenced by ventilation setting (recirculation or outside air intake). Inclusion of ventilation fan speed, vehicle age or mileage, and driving speed explained greater than 79% of the variability in measured UFP I/O ratios. PMID:23888122
Yang, Na; Ren, Yueping; Li, Xiufen; Wang, Xinhua
2017-06-01
Anolyte acidification is a drawback restricting the electricity generation performance of buffer-free microbial fuel cells (MFC). In this paper, a small amount of alkali-treated anion exchange resin (AER) was placed in front of the anode in the KCl-mediated single-chamber MFC to slowly release hydroxyl ions (OH−) and neutralize the H+ ions that are generated by the anodic reaction in two running cycles. This short-term alkaline intervention to the KCl anolyte promoted the proliferation of electroactive Geobacter sp. and enhanced the self-buffering capacity of the KCl-AER-MFC. The pH of the KCl anolyte in the KCl-AER-MFC increased and became more stable in each running cycle compared with that of the KCl-MFC after the short-term alkaline intervention. The maximum power density (Pmax) of the KCl-AER-MFC increased from 307.5 mW·m−2 to 542.8 mW·m−2, slightly lower than that of the PBS-MFC (640.7 mW·m−2). The coulombic efficiency (CE) of the KCl-AER-MFC increased from 54.1% to 61.2%, which is already very close to that of the PBS-MFC (61.9%). The results in this paper indicate that short-term alkaline intervention to the anolyte is an effective strategy to further promote the performance of buffer-free MFCs. Copyright © 2017 Elsevier B.V. All rights reserved.
Abu-Elala, N; Abdelsalam, M; Marouf, Sh; Setta, A
2015-11-01
The nucleotide sequence analysis of the gyrB gene indicated that the fish Aeromonas spp. isolates could be identified as Aeromonas hydrophila and Aeromonas veronii biovar sobria, whereas the chicken Aeromonas spp. isolates were identified as Aeromonas caviae. PCR data revealed the presence of the Lip, Ser, Aer, ACT and CAI genes in fish Aer. hydrophila isolates; the ACT, CAI and Aer genes in fish Aer. veronii bv sobria isolates; and the Ser and CAI genes in chicken Aer. caviae isolates. All chicken isolates showed variable resistance against all 12 tested antibiotic discs except cefotaxime, nitrofurantoin, chloramphenicol and ciprofloxacin; only one isolate showed resistance to chloramphenicol and ciprofloxacin. Fish aeromonads were sensitive to all tested antibiotic discs except amoxicillin, ampicillin-sulbactam and streptomycin. Many integrated fish farms depend on the application of poultry droppings/litter, which serve as a direct feed for the fish and also act as pond fertilizers. The application of untreated poultry manure exerts an additional pressure on the microbial world of the fish's environment. Aeromonas species are among the common bacteria that infect both fish and chicken. The aim of this study was to compare the phenotypic traits and genetic relatedness of aeromonads isolated from two diverse hosts (terrestrial and aquatic), and to investigate whether untreated manure possibly enhances Aeromonas dissemination among cohabitant fish, with special reference to virulence genes and antibiotic resistance traits. © 2015 The Society for Applied Microbiology.
Conte, Daniele; Garaffo, Giulia; Lo Iacono, Nadia; Mantero, Stefano; Piccolo, Stefano; Cordenonsi, Michelangelo; Perez-Morga, David; Orecchia, Valeria; Poli, Valeria; Merlo, Giorgio R.
2016-01-01
The congenital malformation split hand/foot (SHFM) is characterized by missing central fingers and dysmorphology or fusion of the remaining ones. Type-1 SHFM is linked to deletions/rearrangements of the DLX5–DLX6 locus and point mutations in the DLX5 gene. The ectrodactyly phenotype is reproduced in mice by the double knockout (DKO) of Dlx5 and Dlx6. During limb development, the apical ectodermal ridge (AER) is a key signaling center responsible for early proximal–distal growth and patterning. In Dlx5;6 DKO hindlimbs, the central wedge of the AER loses multilayered organization and shows down-regulation of FGF8 and Dlx2. In search for the mechanism, we examined the non-canonical Wnt signaling, considering that Dwnt-5 is a target of distalless in Drosophila and the knockout of Wnt5, Ryk, Ror2 and Vangl2 in the mouse causes severe limb malformations. We found that in Dlx5;6 DKO limbs, the AER expresses lower levels of Wnt5a, shows scattered β-catenin responsive cells and altered basolateral and planar cell polarity (PCP). The addition of Wnt5a to cultured embryonic limbs restored the expression of AER markers and its stratification. Conversely, the inhibition of the PCP molecule c-jun N-terminal kinase caused a loss of AER marker expression. In vitro, the addition of Wnt5a on mixed primary cultures of embryonic ectoderm and mesenchyme was able to confer re-polarization. We conclude that the Dlx-related ectrodactyly defect is associated with the loss of basoapical and PCP, due to reduced Wnt5a expression, and that the restoration of the Wnt5a level is sufficient to partially revert AER misorganization and dysmorphology. PMID:26685160
Johnson, Ted; Myers, Jeffrey; Kelly, Thomas; Wisbith, Anthony; Ollison, Will
2004-01-01
A pilot study was conducted using an occupied, single-family test house in Columbus, OH, to determine whether a script-based protocol could be used to obtain data useful in identifying the key factors affecting air-exchange rate (AER) and the relationship between indoor and outdoor concentrations of selected traffic-related air pollutants. The test script called for hourly changes to elements of the test house considered likely to influence air flow and AER, including the position (open or closed) of each window and door and the operation (on/off) of the furnace, air conditioner, and ceiling fans. The script was implemented over a 3-day period (January 30-February 1, 2002) during which technicians collected hourly-average data for AER, indoor, and outdoor air concentrations for six pollutants (benzene, formaldehyde (HCHO), polycyclic aromatic hydrocarbons (PAH), carbon monoxide (CO), nitric oxide (NO), and nitrogen oxides (NOx)), and selected meteorological variables. Consistent with expectations, AER tended to increase with the number of open exterior windows and doors. The 39 AER values measured during the study when all exterior doors and windows were closed varied from 0.36 to 2.29 h−1 with a geometric mean (GM) of 0.77 h−1 and a geometric standard deviation (GSD) of 1.435. The 27 AER values measured when at least one exterior door or window was opened varied from 0.50 to 15.8 h−1 with a GM of 1.98 h−1 and a GSD of 1.902. AER was also affected by temperature and wind speed, most noticeably when exterior windows and doors were closed. Results of a series of stepwise linear regression analyses suggest that (1) outdoor pollutant concentration and (2) indoor pollutant concentration during the preceding hour were the "variables of choice" for predicting indoor pollutant concentration in the test house under the conditions of this study.
Depending on the pollutant and ventilation conditions, one or more of the following variables produced a small, but significant increase in the explained variance (R²-value) of the regression equations: AER, number and location of apertures, wind speed, air-conditioning operation, indoor temperature, outdoor temperature, and relative humidity. The indoor concentrations of CO, PAH, NO, and NOx were highly correlated with the corresponding outdoor concentrations. The indoor benzene concentrations showed only moderate correlation with outdoor benzene levels, possibly due to a weak indoor source. Indoor formaldehyde concentrations always exceeded outdoor levels, and the correlation between indoor and outdoor concentrations was not statistically significant, indicating the presence of a strong indoor source.
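The finding that outdoor concentration and the preceding hour's indoor concentration are the best predictors of indoor concentration is what a discrete-time mass balance would suggest. A minimal sketch, ignoring indoor sources and deposition (an illustrative simplification, not the study's regression model):

```python
def indoor_series(c_out, aer, dt=1.0):
    # Discrete-time mass balance behind the hour-to-hour regression:
    # C_in(t) = C_in(t-1) + dt * aer * (C_out(t) - C_in(t-1))
    # Indoor sources and particle deposition are ignored here.
    c_in, prev = [], 0.0
    for co in c_out:
        prev = prev + dt * aer * (co - prev)
        c_in.append(prev)
    return c_in

# constant 10-unit outdoor level, AER = 0.5/h: indoor level approaches outdoor
print(indoor_series([10, 10, 10, 10], aer=0.5))  # [5.0, 7.5, 8.75, 9.375]
```

Each hour's indoor value is a weighted mix of the previous indoor value and the current outdoor value, with AER setting the weights, which is why both terms carry predictive power in the regressions.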
Extreme sensitivity to ultraviolet light in the fungal pathogen causing white-nose syndrome of bats.
Palmer, Jonathan M; Drees, Kevin P; Foster, Jeffrey T; Lindner, Daniel L
2018-01-02
Bat white-nose syndrome (WNS), caused by the fungal pathogen Pseudogymnoascus destructans, has decimated North American hibernating bats since its emergence in 2006. Here, we utilize comparative genomics to examine the evolutionary history of this pathogen in comparison to six closely related nonpathogenic species. P. destructans displays a large reduction in carbohydrate-utilizing enzymes (CAZymes) and in the predicted secretome (~50%), and an increase in lineage-specific genes. The pathogen has lost a key enzyme, UVE1, in the alternate excision repair (AER) pathway, which is known to contribute to repair of DNA lesions induced by ultraviolet (UV) light. Consistent with a nonfunctional AER pathway, P. destructans is extremely sensitive to UV light, as well as the DNA alkylating agent methyl methanesulfonate (MMS). The differential susceptibility of P. destructans to UV light in comparison to other hibernacula-inhabiting fungi represents a potential "Achilles' heel" of P. destructans that might be exploited for treatment of bats with WNS.
ERIC Educational Resources Information Center
Vicknair, David; Wright, Jeffrey
2015-01-01
Evidence of confusion in intermediate accounting textbooks regarding the annual percentage rate (APR) and annual effective rate (AER) is presented. The APR and AER are briefly discussed in the context of a note payable, and correct formulas for computing each are provided. Representative examples of the types of confusion that we found are presented…
Nguyen, H V; Caruso, D; Lebrun, M; Nguyen, N T; Trinh, T T; Meile, J-C; Chu-Ky, S; Sarter, S
2016-08-01
The aims of this study were to characterize the antibacterial activity and the chemotype of Litsea cubeba leaf essential oil (EO) harvested in North Vietnam and to investigate the biological effects induced by the leaf powder on growth, nonspecific immunity and survival of common carp (Cyprinus carpio) challenged with Aeromonas hydrophila. The EO showed the prevalence of linalool (95%, n = 5). It was bactericidal against the majority of tested strains, with minimum inhibitory concentrations ranging from 0·72 to 2·89 mg ml−1 (Aer. hydrophila, Edwardsiella tarda, Vibrio furnissii, Vibrio parahaemolyticus, Streptococcus garvieae, Escherichia coli, Salmonella Typhimurium). The fish were fed with 0 (control), 2, 4 and 8% leaf powder supplementation diets for 21 days. Nonspecific immunity parameters (lysozyme, haemolytic and bactericidal activities of plasma) were assessed 21 days after the feeding period and before the experimental infection. Weight gain, specific growth rate and feed conversion ratio were improved by supplementation of L. cubeba in a dose-related manner, and a significant difference appeared at the highest dose (8%) when compared to the control. The increase in plasma lysozyme was significant for all the treated groups. Haemolysis activity was higher for the groups fed with 4 and 8% plant powder. Antibacterial activity increased significantly for the 8% dose only. Litsea cubeba leaf powder increased nonspecific immunity of carps in a dose-related manner. After infection with Aer. hydrophila, survival of fish fed with the 4 and 8% L. cubeba doses was significantly higher than of those fed with the 2% dose and the control. A range of 4-8% L. cubeba leaf powder supplementation diet (from the specific linalool-rich chemotype) can be used in aquaculture to reduce antibiotic burden and the impacts of diseases caused by Aer. hydrophila. © 2016 The Society for Applied Microbiology.
Zhang, Jing; Lipp, Ottmar V; Hu, Ping
2017-01-01
The current study investigated the interactive effects of individual differences in automatic emotion regulation (AER) and primed emotion regulation strategy on skin conductance level (SCL) and heart rate during provoked anger. The study was a 2 × 2 [AER tendency (expression vs. control) × priming (expression vs. control)] between-subjects design. Participants were assigned to two groups according to their performance on an emotion regulation-IAT (differentiating automatic emotion control tendency and automatic emotion expression tendency). Then participants of the two groups were randomly assigned to two emotion regulation priming conditions (emotion control priming or emotion expression priming). Anger was provoked by blaming participants for slow performance during a subsequent backward subtraction task. In anger provocation, SCL of individuals with automatic emotion control tendencies in the control priming condition was lower than of those with automatic emotion control tendencies in the expression priming condition. However, SCL of individuals with automatic emotion expression tendencies did not differ between the automatic emotion control priming and the automatic emotion expression priming conditions. Heart rate during anger provocation was higher in individuals with automatic emotion expression tendencies than in individuals with automatic emotion control tendencies regardless of priming condition. This pattern indicates an interactive effect of individual differences in AER and emotion regulation priming on SCL, which is an index of emotional arousal. Heart rate was only sensitive to the individual differences in AER, and did not reflect this interaction. This finding has implications for clinical studies of the use of emotion regulation strategy training, suggesting that different practices are optimal for individuals who differ in AER tendencies.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-07
...; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, Eagle Creek Land Resources, LLC; Notice... 24, 2012, AER NY-Gen, LLC (transferor), Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources.... Cherry, Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-14
... 9690-106] AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources, LLC; Eagle Creek... Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources, LLC (transferees) filed an.... Paul Ho, Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources...
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra; Shia, Run-Lie; Sze, N. D.
1998-01-01
The overall objective of this project is to improve the understanding of coupling processes between atmospheric chemistry and climate. Model predictions of the future distributions of trace gases in the atmosphere constitute an important component of the input necessary for quantitative assessments of global change. We will concentrate on the changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The key modeling tools for this work are the AER two-dimensional chemistry-transport model, the AER two-dimensional stratospheric sulfate model, and the AER three-wave interactive model with full chemistry. We will continue developing our three-wave model so that we can help NASA determine the strengths and weaknesses of the next generation of assessment models.
Lutale, Janet Joy Kachuchuru; Thordarson, Hrafnkell; Abbas, Zulfiqarali Gulam; Vetvik, Kåre
2007-01-01
Background The prevalence and risk factors of microalbuminuria are not fully described among black African diabetic patients. This study aimed at determining the prevalence of microalbuminuria among African diabetic patients in Dar es Salaam, Tanzania, and relating it to socio-demographic features as well as clinical parameters. Methods Cross-sectional study of 91 Type 1 and 153 Type 2 diabetic patients. Two overnight urine samples per patient were analysed. Albumin concentration was measured by an automated immunoturbidity assay. The average albumin excretion rate (AER) was used and categorised as normoalbuminuria (AER < 20 μg/min), microalbuminuria (AER 20–200 μg/min), and macroalbuminuria (AER > 200 μg/min). Information obtained also included age, diabetes duration, sex, body mass index, blood pressure, serum total cholesterol, high-density and low-density lipoprotein cholesterol, triglycerides, serum creatinine, and glycated hemoglobin A1c. Results Overall prevalence of microalbuminuria was 10.7% and of macroalbuminuria 4.9%. In Type 1 patients microalbuminuria was 12% and macroalbuminuria 1%. Among Type 2 patients, 9.8% had microalbuminuria and 7.2% had macroalbuminuria. Type 2 patients with abnormal albumin excretion rate had significantly longer diabetes duration, 7.5 (0.2–24) yrs, than those with normal albumin excretion rate, 3 (0–25) yrs, p < 0.001. Systolic and diastolic blood pressure among Type 2 patients with abnormal albumin excretion rate were significantly higher than in those with normal albumin excretion rate (p < 0.001). No significant differences in body mass index, glycaemic control, or cholesterol levels were found between patients with normal and those with elevated albumin excretion rate in either Type 1 or Type 2 patients.
A stepwise multiple linear regression analysis among Type 2 patients revealed AER (natural log AER) as the dependent variable to be predicted by [odds ratio (95% confidence interval)] diabetes duration 0.090 (0.049, 0.131), p < 0.0001, systolic blood pressure 0.012 (0.003, 0.021), p < 0.010, and serum creatinine 0.021 (0.012, 0.030). Conclusion The prevalence of micro- and macroalbuminuria is higher among African Type 1 patients with relatively short diabetes duration compared with prevalences among Caucasians. In Type 2 patients, the prevalence is in accordance with findings in Caucasians. The present study detects, however, a much lower prevalence than previously demonstrated in studies from sub-Saharan Africa. Abnormal AER was significantly related to diabetes duration and systolic blood pressure. PMID:17224056
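The AER cut-offs stated in the Methods section above can be expressed as a small classification helper (a minimal sketch; the function name and interface are illustrative, not from the study):

```python
def classify_aer(aer_ug_per_min: float) -> str:
    """Classify albumin excretion rate (AER, micrograms/min) using the
    study's cut-offs: normo < 20, micro 20-200, macro > 200."""
    if aer_ug_per_min < 20:
        return "normoalbuminuria"
    elif aer_ug_per_min <= 200:
        return "microalbuminuria"
    else:
        return "macroalbuminuria"
```

For example, the mean baseline AER of 8.0 μg/min reported in the lisinopril trial below would fall in the normoalbuminuric range under these cut-offs.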
Moskvin, Oleg V; Gilles-Gonzalez, Marie-Alda; Gomelsky, Mark
2010-10-01
The SCHIC domain of the B12-binding domain family present in the Rhodobacter sphaeroides AppA protein binds heme and senses oxygen. Here we show that the predicted SCHIC domain PpaA/AerR regulators also bind heme and respond to oxygen in vitro, despite their low sequence identity with AppA.
Kern, Elizabeth O; Erhard, Penny; Sun, Wanjie; Genuth, Saul; Weiss, Miriam F
2010-01-01
Background Urinary markers were tested as predictors of macroalbuminuria or microalbuminuria in type 1 diabetes. Study Design Nested case:control of participants in the Diabetes Control and Complications Trial (DCCT) Setting & Participants Eighty-seven cases of microalbuminuria were matched to 174 controls in a 1:2 ratio, while 4 cases were matched to 4 controls in a 1:1 ratio, resulting in 91 cases and 178 controls for microalbuminuria. Fifty-five cases of macroalbuminuria were matched to 110 controls in a 1:2 ratio. Controls were free of micro/macroalbuminuria when their matching case first developed micro/macroalbuminuria. Predictors Urinary N-acetyl-β-D-glucosaminidase, pentosidine, AGE fluorescence, albumin excretion rate (AER) Outcomes Incident microalbuminuria (two consecutive annual AER > 40 but <= 300 mg/day), or macroalbuminuria (AER > 300 mg/day) Measurements Stored urine samples from DCCT entry, and 1–9 years later when macroalbuminuria or microalbuminuria occurred, were measured for the lysosomal enzyme, N-acetyl-β-D-glucosaminidase, and the advanced glycosylation end-products (AGEs) pentosidine and AGE-fluorescence. AER and adjustor variables were obtained from the DCCT. Results Sub-microalbuminuric levels of AER at baseline independently predicted microalbuminuria (adjusted OR 1.83; p<.001) and macroalbuminuria (adjusted OR 1.82; p<.001). Baseline N-acetyl-β-D-glucosaminidase independently predicted macroalbuminuria (adjusted OR 2.26; p<.001), and microalbuminuria (adjusted OR 1.86; p<.001). Baseline pentosidine predicted macroalbuminuria (adjusted OR 6.89; p=.002). Baseline AGE fluorescence predicted microalbuminuria (adjusted OR 1.68; p=.02). However, adjusted for N-acetyl-β-D-glucosaminidase, pentosidine and AGE-fluorescence lost predictive association with macroalbuminuria and microalbuminuria, respectively. Limitations Use of angiotensin converting-enzyme inhibitors was not directly ascertained, although their use was proscribed during the DCCT. 
Conclusions Early in type 1 diabetes, repeated measurements of AER and urinary NAG may identify individuals susceptible to future diabetic nephropathy. Combining the two markers may yield a better predictive model than either one alone. Renal tubule stress may be more severe, reflecting abnormal renal tubule processing of AGE-modified proteins, among individuals susceptible to diabetic nephropathy. PMID:20138413
Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad
2015-06-01
Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Using AER to Improve Teacher Education
NASA Astrophysics Data System (ADS)
Ludwig, Randi R.
2013-06-01
In many ways, the astronomy education community is uniquely poised to influence pre-service and in-service teacher preparation. Astro101 courses are among those most commonly taken to satisfy general education requirements for non-science majors, including 9-25% education majors (Deming & Hufnagel, 2001; Rudolph et al. 2010). In addition, the astronomy community's numerous observatories and NASA centers engage in many efforts to satisfy demand for in-service teacher professional development (PD). These efforts represent a great laboratory in which we can apply conclusions from astronomy education research (AER) studies in particular and science education research (SER) in general. Foremost, we can work to align typical Astro101 and teacher PD content coverage to heavily hit topics in the Next Generation Science Standards (http://www.nextgenscience.org/) and utilize methods of teaching those topics that have been identified as successful in AER studies. Additionally, we can work to present teacher education using methodology that has been identified by the SER community as effective for lasting learning. In this presentation, I will highlight some of the big ideas from AER and SER that may be most useful in teacher education, many of which we implement at UT Austin in the Hands-on-Science program for pre-service teacher education and in-service teacher PD.
Novel Aeruginosin-865 from Nostoc sp. as a potent anti-inflammatory agent.
Kapuścik, Aleksandra; Hrouzek, Pavel; Kuzma, Marek; Bártová, Simona; Novák, Petr; Jokela, Jouni; Pflüger, Maren; Eger, Andreas; Hundsberger, Harald; Kopecký, Jiří
2013-11-25
Aeruginosin-865 (Aer-865), isolated from terrestrial cyanobacterium Nostoc sp. Lukešová 30/93, is the first aeruginosin-type peptide containing both a fatty acid and a carbohydrate moiety, and is the first aeruginosin to be found in the genus Nostoc. Mass spectrometry, chemical and spectroscopic analysis as well as one- and two-dimensional NMR and chiral HPLC analysis of Marfey derivatives were applied to determine the peptidic sequence: D-Hpla, D-Leu, 5-OH-Choi, Agma, with hexanoic and mannopyranosyl uronic acid moieties linked to Choi. We used an AlphaLISA assay to measure the levels of proinflammatory mediators IL-8 and ICAM-1 in hTNF-α-stimulated HLMVECs. Aer-865 showed significant reduction of both: with EC50 values of (3.5±1.5) μg mL(-1) ((4.0±1.7) μM) and (50.0±13.4) μg mL(-1) ((57.8±15.5) μM), respectively. Confocal laser scanning microscopy revealed that the anti-inflammatory effect of Aer-865 was directly associated with inhibition of NF-κB translocation to the nucleus. Moreover, Aer-865 did not show any cytotoxic effect. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yousefzadeh, Amirreza; Jablonski, Miroslaw; Iakymchuk, Taras; Linares-Barranco, Alejandro; Rosado, Alfredo; Plana, Luis A; Temple, Steve; Serrano-Gotarredona, Teresa; Furber, Steve B; Linares-Barranco, Bernabe
2017-10-01
Address event representation (AER) is a widely employed asynchronous technique for interchanging "neural spikes" between different hardware elements in neuromorphic systems. Each neuron or cell in a chip or a system is assigned an address (or ID), which is typically communicated through a high-speed digital bus, thus time-multiplexing a high number of neural connections. Conventional AER links use parallel physical wires together with a pair of handshaking signals (request and acknowledge). In this paper, we present a fully serial implementation using bidirectional SATA connectors with a pair of low-voltage differential signaling (LVDS) wires for each direction. The proposed implementation can multiplex a number of conventional parallel AER links for each physical LVDS connection. It uses flow control, clock correction, and byte alignment techniques to transmit 32-bit address events reliably over multiplexed serial connections. The setup has been tested using commercial Spartan6 FPGAs attaining a maximum event transmission speed of 75 Meps (Mega events per second) for 32-bit events at a line rate of 3.0 Gbps. Full HDL codes (vhdl/verilog) and example demonstration codes for the SpiNNaker platform will be made available.
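The core idea of multiplexing several parallel AER links over one serial stream of 32-bit words can be sketched as follows. The word layout here (4-bit link id, 28-bit neuron address) is hypothetical and chosen only for illustration; the paper's actual framing, flow control, and alignment scheme are not reproduced:

```python
import struct

def pack_events(events):
    """Pack (link_id, address) tuples into big-endian 32-bit words.
    Hypothetical layout: top 4 bits = source link id, low 28 bits = address."""
    out = bytearray()
    for link_id, addr in events:
        assert 0 <= link_id < 16 and 0 <= addr < (1 << 28)
        out += struct.pack(">I", (link_id << 28) | addr)
    return bytes(out)

def unpack_events(data):
    """Recover (link_id, address) tuples from the packed byte stream."""
    return [(word >> 28, word & ((1 << 28) - 1))
            for (word,) in struct.iter_unpack(">I", data)]
```

At the reported line rate of 3.0 Gbps, each 32-bit event occupies a fixed word slot, which is why throughput is quoted in events per second rather than bits.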
Vilar, Santiago; Harpaz, Rave; Chase, Herbert S; Costanzi, Stefano; Rabadan, Raul
2011-01-01
Background Adverse drug events (ADE) cause considerable harm to patients, and consequently their detection is critical for patient safety. The US Food and Drug Administration maintains an adverse event reporting system (AERS) to facilitate the detection of ADE in drugs. Various data mining approaches have been developed that use AERS to detect signals identifying associations between drugs and ADE. The signals must then be monitored further by domain experts, which is a time-consuming task. Objective To develop a new methodology that combines existing data mining algorithms with chemical information by analysis of molecular fingerprints to enhance initial ADE signals generated from AERS, and to provide a decision support mechanism to facilitate the identification of novel adverse events. Results The method achieved a significant improvement in precision in identifying known ADE, and a more than twofold signal enhancement when applied to the ADE rhabdomyolysis. The simplicity of the method assists in highlighting the etiology of the ADE by identifying structurally similar drugs. A set of drugs with strong evidence from both AERS and molecular fingerprint-based modeling is constructed for further analysis. Conclusion The results demonstrate that the proposed methodology could be used as a pharmacovigilance decision support tool to facilitate ADE detection. PMID:21946238
Vlaeminck, Siegfried E; Dierick, Katleen; Boon, Nico; Verstraete, Willy
2007-07-01
Ammonium can be removed as dinitrogen gas by cooperating aerobic and anaerobic ammonium-oxidizing bacteria (AerAOB and AnAOB). The goal of this study was to verify putative mutual benefits for aggregated AerAOB and AnAOB in a stagnant freshwater environment. In an ammonium-fed water column, the biological oxygen consumption rate was, on average, 76 kg O(2) ha(-1) day(-1). As the oxygen transfer rate of an abiotic control column was only 17 kg O(2) ha(-1) day(-1), biomass activity enhanced the oxygen transfer. Increasing the AnAOB gas production increased the oxygen consumption rate by more than 50% as a result of enhanced vertical movement of the biomass. The coupled decrease in dissolved oxygen concentration increased the diffusional oxygen transfer from the atmosphere into the water. Physically preventing the biomass from rising to the upper water layer instantaneously decreased oxygen and ammonium consumption and even led to the occurrence of some sulfate reduction. Floating of the biomass was further confirmed to be beneficial, as this allowed for the development of a higher AerAOB and AnAOB activity compared to settled biomass. Overall, the results support mutual benefits for aggregated AerAOB and AnAOB, derived from the biomass uplifting effect of AnAOB gas production.
NASA Technical Reports Server (NTRS)
Barnett, Henry C; Hibbard, Robert R
1953-01-01
Since the release of the first NACA publication on fuel characteristics pertinent to the design of aircraft fuel systems (NACA-RM-E53A21), additional information has become available on MIL-F7914(AER) grade JP-5 fuel and several of the current grades of fuel oils. In order to make this information available to fuel-system designers as quickly as possible, the present report has been prepared as a supplement to NACA-RM-E53A21. Although JP-5 fuel is of greater interest in current fuel-system problems than the fuel oils, the available data are not as extensive. It is believed, however, that the limited data on JP-5 are sufficient to indicate the variations in stocks that the designer must consider under a given fuel specification. The methods used in the preparation and extrapolation of data presented in the tables and figures of this supplement are the same as those used in NACA-RM-E53A21.
Coupling Processes between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, M. K. W.; Weisenstein, Debra; Shia, Run-Lie; Sze, N. D.
1998-01-01
This is the third semi-annual report for NAS5-97039, covering January through June 1998. The overall objective of this project is to improve the understanding of coupling processes between atmospheric chemistry and climate. Model predictions of the future distributions of trace gases in the atmosphere constitute an important component of the input necessary for quantitative assessments of global change. We will concentrate on the changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The key modeling tools for this work are the AER 2-dimensional chemistry-transport model, the AER 2-dimensional stratospheric sulfate model, and the AER three-wave interactive model with full chemistry. We will continue developing our three-wave model so that we can help NASA determine the strengths and weaknesses of the next generation assessment models.
1997-06-21
Renal disease in people with insulin-dependent diabetes (IDDM) continues to pose a major health threat. Inhibitors of angiotensin-converting enzyme (ACE) slow the decline of renal function in advanced renal disease, but their effects at earlier stages are unclear, and the degree of albuminuria at which treatment should start is not known. We carried out a randomised, double-blind, placebo-controlled trial of the ACE inhibitor lisinopril in 530 men and women with IDDM aged 20-59 years with normoalbuminuria or microalbuminuria. Patients were recruited from 18 European centres, and were not on medication for hypertension. Resting blood pressure at entry was at least 75 and no more than 90 mm Hg diastolic, and no more than 155 mm Hg systolic. Urinary albumin excretion rate (AER) was centrally assessed by means of two overnight urine collections at baseline, 6, 12, 18, and 24 months. There were no differences in baseline characteristics by treatment group: mean AER was 8.0 micrograms/min in both groups, and prevalence of microalbuminuria was 13% and 17% in the placebo and lisinopril groups, respectively. On intention-to-treat analysis at 2 years, AER was 2.2 micrograms/min lower in the lisinopril than in the placebo group, a percentage difference of 18.8% (95% CI 2.0-32.7, p = 0.03), adjusted for baseline AER and centre. In people with normoalbuminuria, the treatment difference was 1.0 microgram/min (12.7% [-2.9 to 26.0], p = 0.1). In those with microalbuminuria, however, the treatment difference was 34.2 micrograms/min (49.7% [-14.5 to 77.9], p = 0.1; for interaction, p = 0.04). For patients who completed 24 months on the trial, the final treatment difference in AER was 38.5 micrograms/min in those with microalbuminuria at baseline (p = 0.001), and 0.23 microgram/min in those with normoalbuminuria at baseline (p = 0.6).
There was no treatment difference in hypoglycaemic events or in metabolic control as assessed by glycated haemoglobin. Lisinopril slows the progression of renal disease in normotensive IDDM patients with little or no albuminuria, though greatest effect was in those with microalbuminuria (AER > or = 20 micrograms/min). Our results show that lisinopril does not increase the risk of hypoglycaemic events in IDDM.
Falconar, Andrew K. I.; Martinez, Fernando
2011-01-01
Antibody-enhanced replication (AER) of dengue type-2 virus (DENV-2) strains and production of antibody-enhanced disease (AED) was tested in out-bred mice. Polyclonal antibodies (PAbs) generated against the nonstructural-1 (NS1) glycoprotein candidate vaccine of the New Guinea-C (NG-C) or NSx strains reacted strongly and weakly with these antigens, respectively. These PAbs contained the IgG2a subclass, which cross-reacted with the virion-associated envelope (E) glycoprotein of the DENV-2 NSx strain, suggesting that they could generate its AER via all mouse Fcγ-receptor classes. Indeed, when these mice were challenged with a low dose (<0.5 LD50) of the DENV-2 NSx strain, but not the NG-C strain, they all generated dramatic and lethal DENV-2 AER/AED. These AER/AED mice developed life-threatening acute respiratory distress syndrome (ARDS), displayed by diffuse alveolar damage (DAD) resulting from i) dramatic interstitial alveolar septa-thickening with mononuclear cells, ii) some hyperplasia of alveolar type-II pneumocytes, iii) copious intra-alveolar protein secretion, iv) some hyaline membrane-covered alveolar walls, and v) DENV-2 antigen-positive alveolar macrophages. These mice also developed meningo-encephalitis, with greater than 90,000-fold DENV-2 AER titers in microglial cells located throughout their brain parenchyma, some of which formed nodules around dead neurons. Their spleens contained infiltrated megakaryocytes with DENV-2 antigen-positive red-pulp macrophages, while their livers displayed extensive necrosis, apoptosis and macro- and micro-steatosis, with DENV-2 antigen-positive Kupffer cells and hepatocytes. Their infections were confirmed by DENV-2 isolations from their lungs, spleens and livers. These findings accord with those reported in fatal human “severe dengue” cases. This DENV-2 AER/AED was blocked by high concentrations of only the NG-C NS1 glycoprotein.
These results imply a potential hazard of DENV NS1 glycoprotein-based vaccines, particularly against DENV strains that contain multiple mutations or genetic recombination within or between their DENV E and NS1 glycoprotein-encoding genes. The model provides potential for assessing DENV strain pathogenicity and anti-DENV therapies in normal mice. PMID:21731643
Air exchange rates and migration of VOCs in basements and residences
Du, Liuliu; Batterman, Stuart; Godwin, Christopher; Rowe, Zachary; Chin, Jo-Yu
2015-01-01
Basements can influence indoor air quality by affecting air exchange rates (AERs) and by the presence of emission sources of volatile organic compounds (VOCs) and other pollutants. We characterized VOC levels, AERs and interzonal flows between basements and occupied spaces in 74 residences in Detroit, Michigan. Flows were measured using a steady-state multi-tracer system, and 7-day VOC measurements were collected using passive samplers in both living areas and basements. A walkthrough survey/inspection was conducted in each residence. AERs in residences and basements averaged 0.51 and 1.52 h−1, respectively, and had strong and opposite seasonal trends, e.g., AERs were highest in residences during the summer, and highest in basements during the winter. Air flows from basements to occupied spaces also varied seasonally. VOC concentration distributions were right-skewed, e.g., 90th percentile benzene, toluene, naphthalene and limonene concentrations were 4.0, 19.1, 20.3 and 51.0 μg m−3, respectively; maximum concentrations were 54, 888, 1117 and 134 μg m−3. Identified VOC sources in basements included solvents, household cleaners, air fresheners, smoking, and gasoline-powered equipment. The number and type of potential VOC sources found in basements are significant and problematic, and may warrant advisories regarding the storage and use of potentially strong VOCs sources in basements. PMID:25601281
Batterman, Stuart; Jia, Chunrong; Hatzivasilis, Gina; Godwin, Chris
2006-02-01
Air exchange rates and interzonal flows are critical ventilation parameters that affect thermal comfort, air migration, and contaminant exposure in buildings and other environments. This paper presents the development of an updated approach to measure these parameters using perfluorocarbon tracer (PFT) gases, the constant injection rate method, and adsorbent-based sampling of PFT concentrations. The design of miniature PFT sources using hexafluorotoluene and octafluorobenzene tracers, and the development and validation of an analytical GC/MS method for these tracers are described. We show that simultaneous deployment of sources and passive samplers, which is logistically advantageous, will not cause significant errors over multiday measurement periods in buildings, or over shorter periods in rapidly ventilated spaces like vehicle cabins. Measurement of the tracers over periods of hours to a week may be accomplished using active or passive samplers, and low method detection limits (<0.025 microg m(-3)) and high precisions (<10%) are easily achieved. The method obtains the effective air exchange rate (AER), which is relevant to characterizing long-term exposures, especially when ventilation rates are time-varying. In addition to measuring the PFT tracers, concentrations of other volatile organic compounds (VOCs) are simultaneously determined. Pilot tests in three environments (residence, garage, and vehicle cabin) demonstrate the utility of the method. The 4 day effective AER in the house was 0.20 h(-1), the 4 day AER in the attached garage was 0.80 h(-1), and 16% of the ventilation in the house migrated from the garage. The 5 h AER in a vehicle traveling at 100 km h(-1) under a low-to-medium vent condition was 92 h(-1), and this represents the highest speed test found in the literature. The method is attractive in that it simultaneously determines AERs, interzonal flows, and VOC concentrations over long and representative test periods.
These measurements are practical, cost-effective, and helpful in indoor air quality and other investigations.
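The constant injection tracer method used above rests on a simple steady-state mass balance: the tracer emission rate S equals the removal rate AER × V × C, so AER = S / (V × C). A minimal single-zone sketch (the two-zone version in the paper additionally resolves interzonal flows; the function and units here are illustrative):

```python
def effective_aer(source_rate_ug_per_h: float,
                  concentration_ug_per_m3: float,
                  volume_m3: float) -> float:
    """Effective air exchange rate (1/h) from the steady-state constant
    injection balance: S = AER * V * C  =>  AER = S / (V * C)."""
    return source_rate_ug_per_h / (volume_m3 * concentration_ug_per_m3)
```

For instance, a source emitting 50 μg/h into a 250 m3 house that settles at 1.0 μg/m3 implies an effective AER of 0.20 h(-1), matching the magnitude of the house measurement reported above.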
Plath, Johannes E; Seiberl, Wolfgang; Beitzel, Knut; Minzlaff, Philipp; Schwirtz, Ansgar; Imhoff, Andreas B; Buchmann, Stefan
2014-08-01
The purpose of this study was to investigate coactivation (CoA) testing as a clinical tool to monitor motor learning after latissimus dorsi tendon transfer. We evaluated 20 patients clinically with the American Shoulder and Elbow Surgeons (ASES) and University of California-Los Angeles (UCLA) outcomes scores, visual analog scale, active external rotation (aER), and isometric strength testing in abduction and external rotation. Measurements of aER were performed while the latissimus dorsi was activated in its new function of external rotation with concomitant activation (coactivation) of its native functions (adduction and extension). Bilateral surface electromyographic (EMG) activity was recorded during aER measurements and the strength testing procedure (EMG activity ratio: with/without CoA). Patients were divided into two groups (excellent/good vs fair/poor) according to the results of the ASES and UCLA scores. The mean follow-up was 57.8 ± 25.2 months. Subdivided by clinical scores, the superior outcome group lost aER with CoA, whereas the inferior outcome group gained aER (UCLA score: -2.2° ± 7.4° vs +4.3° ± 4.1°; P = .031). Patients with inferior outcomes in the ASES score showed higher latissimus dorsi EMG activity ratios (P = .027), suggesting an inadequate motor learning process. Isometric strength testing revealed that the latissimus dorsi transfer had significantly greater activity compared with the contralateral side (external rotation, P = .008; abduction, P = .006) but did not have comparable strength (external rotation, P = .017; abduction, P = .009). Patients with inferior clinical results were more likely to be dependent on CoA to gain external rotation. Therefore, CoA testing may be used as a tool to evaluate the status of postoperative motor learning after latissimus dorsi transfer. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiferlein, Katherine E.
The Annual Energy Review (AER) presents the Energy Information Administration’s historical energy statistics. For many series, statistics are given for every year from 1949 through 2000. The statistics, expressed in either physical units or British thermal units, cover all major energy activities, including consumption, production, trade, stocks, and prices, for all major energy commodities, including fossil fuels, electricity, and renewable energy sources. Publication of this report is required under Public Law 95–91 (Department of Energy Organization Act), Section 205(c), and is in keeping with responsibilities given to the Energy Information Administration under Section 205(a)(2), which states: “The Administrator shall be responsible for carrying out a central, comprehensive, and unified energy data and information program which will collect, evaluate, assemble, analyze, and disseminate data and information....” The AER is intended for use by Members of Congress, Federal and State agencies, energy analysts, and the general public. EIA welcomes suggestions from readers regarding data series in the AER and in other EIA publications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiferlein, Katherine E.
1998-07-01
The Annual Energy Review (AER) presents the Energy Information Administration’s historical energy statistics. For many series, statistics are given for every year from 1949 through 1997. The statistics, expressed in either physical units or British thermal units, cover all major energy activities, including consumption, production, trade, stocks, and prices, for all major energy commodities, including fossil fuels, electricity, and renewable energy sources. Publication of this report is in keeping with responsibilities given to the Energy Information Administration (EIA) in Public Law 95–91 (Department of Energy Organization Act), which states, in part, in Section 205(a)(2) that: “The Administrator shall be responsible for carrying out a central, comprehensive, and unified energy data and information program which will collect, evaluate, assemble, analyze, and disseminate data and information....” The AER is intended for use by Members of Congress, Federal and State agencies, energy analysts, and the general public. EIA welcomes suggestions from readers regarding data series in the AER and in other EIA publications.
An Intercomparison of 2-D Models Within a Common Framework
NASA Technical Reports Server (NTRS)
Weisenstein, Debra K.; Ko, Malcolm K. W.; Scott, Courtney J.; Jackman, Charles H.; Fleming, Eric L.; Considine, David B.; Kinnison, Douglas E.; Connell, Peter S.; Rotman, Douglas A.; Bhartia, P. K. (Technical Monitor)
2002-01-01
A model intercomparison among the Atmospheric and Environmental Research (AER) 2-D model, the Goddard Space Flight Center (GSFC) 2-D model, and the Lawrence Livermore National Laboratory (LLNL) 2-D model allows us to separate differences due to model transport from those due to the model's chemical formulation. This is accomplished by constructing two hybrid models incorporating the transport parameters of the GSFC and LLNL models within the AER model framework. By comparing the results from the native models (AER and e.g. GSFC) with those from the hybrid model (e.g. AER chemistry with GSFC transport), differences due to chemistry and transport can be identified. For the analysis, we examined an inert tracer whose emission pattern is based on emission from a High Speed Civil Transport (HSCT) fleet; distributions of trace species in the 2015 atmosphere; and the response of stratospheric ozone to an HSCT fleet. Differences in NO(y) in the upper stratosphere are found between models with identical transport, implying different model representations of atmospheric chemical processes. The response of O3 concentration to HSCT aircraft emissions differs among the models, reflecting both transport-dominated differences in the HSCT-induced perturbations of H2O and NO(y) and differences in the model representations of O3 chemical processes. The model formulations of cold polar processes are found to be the most significant factor in creating large differences in the calculated ozone perturbations.
Reyes, M; Borrás, L; Seco, A; Ferrer, J
2015-01-01
Eight different phenotypes were studied in an activated sludge process (AeR) and anaerobic digester (AnD) in a full-scale wastewater treatment plant by means of fluorescent in situ hybridization (FISH) and automated FISH quantification software. The phenotypes were ammonia-oxidizing bacteria, nitrite-oxidizing bacteria, denitrifying bacteria, phosphate-accumulating organisms (PAO), glycogen-accumulating organisms (GAO), sulphate-reducing bacteria (SRB), methanotrophic bacteria and methanogenic archaea. Some findings were unexpected: (a) Presence of PAO, GAO and denitrifiers in the AeR possibly due to unexpected environmental conditions caused by oxygen deficiencies or its ability to survive aerobically; (b) presence of SRB in the AeR due to high sulphate content of wastewater intake and possibly also due to digested sludge being recycled back into the primary clarifier; (c) presence of methanogenic archaea in the AeR, which can be explained by the recirculation of digested sludge and its ability to survive periods of high oxygen levels; (d) presence of denitrifying bacteria in the AnD which cannot be fully explained because the nitrate level in the AnD was not measured. However, other authors reported the existence of denitrifiers in environments where nitrate or oxygen was not present suggesting that denitrifiers can survive in nitrate-free anaerobic environments by carrying out low-level fermentation; (e) the results of this paper are relevant because of the focus on the identification of nearly all the significant bacterial and archaeal groups of microorganisms with a known phenotype involved in the biological wastewater treatment.
Energetic Profile of the Basketball Exercise Simulation Test in Junior Elite Players.
Latzel, Richard; Hoos, Olaf; Stier, Sebastian; Kaufmann, Sebastian; Fresz, Volker; Reim, Dominik; Beneke, Ralph
2017-11-28
To analyze the energetic profile of the basketball exercise simulation test (BEST). Ten male elite junior basketball players (age: 15.5±0.6 yrs, height: 180±9 cm, body mass: 66.1±11.2 kg) performed a modified BEST (20 circuits consisting of jumping, sprinting, jogging, shuffling, and short breaks) simulating professional basketball game play. Circuit time, sprint time, sprint decrement, oxygen uptake (VO2), heart rate (HR), and blood lactate concentration (BLC) were obtained. Metabolic energy and metabolic power above rest (Wtot, Ptot) as well as energy share in terms of aerobic (Waer), glycolytic (Wblc), and high energy phosphates (WPCr) were calculated from VO2 during exercise, net lactate production, and the fast component of post-exercise VO2 kinetics, respectively. Waer, Wblc, and WPCr reflect 89±2%, 5±1%, and 6±1% of total energy needed, respectively. Assuming an aerobic replenishment of PCr energy stores during short breaks, the adjusted energy share yielded Waer: 66±4%, Wblc: 5±1%, and WPCr: 29±1%. Waer and WPCr were negatively correlated (-0.72, -0.59) with sprint time, which was not the case for Wblc. Consistent with general findings on energy system interaction during repeated high intensity exercise bouts, the intermittent profile of the BEST relies primarily on aerobic energy combined with repetitive supplementation by anaerobic utilization of high energy phosphates.
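The PCr-La-O2 partitioning described above can be sketched as follows. The caloric equivalent of oxygen (about 20.9 kJ per litre O2) and the lactate-O2 equivalent (3 ml O2 per kg body mass per mmol/L net lactate) are standard assumptions of this method; the input values are illustrative, not the study's data:

```python
# Sketch of the PCr-La-O2 method for partitioning energy supply.
# Constants are conventional assumptions; inputs are invented for illustration.

KJ_PER_L_O2 = 20.9     # caloric equivalent of oxygen, kJ per litre
LACTATE_O2_EQ = 3.0    # ml O2 per kg body mass per mmol/L net lactate

def energy_shares(vo2_exercise_l, net_lactate_mmol_l, body_mass_kg, fast_epoc_l):
    """Return percentage shares of aerobic, glycolytic, and PCr energy."""
    w_aer = vo2_exercise_l * KJ_PER_L_O2                       # aerobic energy
    w_blc = (net_lactate_mmol_l * LACTATE_O2_EQ * body_mass_kg
             / 1000 * KJ_PER_L_O2)                             # glycolytic energy
    w_pcr = fast_epoc_l * KJ_PER_L_O2                          # high-energy phosphates
    total = w_aer + w_blc + w_pcr
    return {k: round(100 * v / total, 1)
            for k, v in {"aer": w_aer, "blc": w_blc, "pcr": w_pcr}.items()}

print(energy_shares(vo2_exercise_l=45.0, net_lactate_mmol_l=4.0,
                    body_mass_kg=66.0, fast_epoc_l=3.0))
```

With these invented inputs the aerobic share dominates, in line with the pattern reported in the abstract.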
Simple Benchmark Specifications for Space Radiation Protection
NASA Technical Reports Server (NTRS)
Singleterry, Robert C. Jr.; Aghara, Sukesh K.
2013-01-01
This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.
Norsker, Filippa Nyboe; Rechnitzer, Catherine; Cederkvist, Luise; Tryggvadottir, Laufey; Madanat-Harjuoja, Laura-Maria; Øra, Ingrid; Thorarinsdottir, Halldora K; Vettenranta, Kim; Bautz, Andrea; Schrøder, Henrik; Hasle, Henrik; Winther, Jeanette Falck
2018-06-21
Because of the rarity of neuroblastoma and poor survival until the 1990s, information on late effects in neuroblastoma survivors is sparse. We comprehensively reviewed the long-term risk for somatic disease in neuroblastoma survivors. We identified 721 5-year survivors of neuroblastoma in Nordic population-based cancer registries and identified late effects in national hospital registries covering the period 1977-2012. Detailed treatment information was available for 46% of the survivors. The disease-specific rates of hospitalization of survivors and of 152,231 randomly selected population comparisons were used to calculate standardized hospitalization rate ratios (SHRRs) and absolute excess risks (AERs). During 5,500 person-years of follow-up, 501 5-year survivors had a first hospital contact yielding a SHRR of 2.3 (95% CI 2.1-2.6) and a corresponding AER of 52 (95% CI 44-60) per 1,000 person-years. The highest relative risks were for diseases of blood and blood-forming organs (SHRR 3.8; 95% CI 2.7-5.4), endocrine diseases (3.6 [3.1-4.2]), circulatory system diseases (3.1 [2.5-3.8]), and diseases of the nervous system (3.0 [2.6-3.3]). Approximately 60% of the excess new hospitalizations of survivors were for diseases of the nervous system, urinary system, endocrine system, and bone and soft tissue. The relative risks and AERs were highest for the survivors most intensively treated. Survivors of neuroblastoma have a highly increased long-term risk for somatic late effects in all the main disease groups as compared with background levels. Our results are useful for counseling survivors and should contribute to improving health care planning in post-therapy clinics. This article is protected by copyright. All rights reserved. © 2018 UICC.
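The two cohort measures reported above have simple definitions: the SHRR is the ratio of observed to expected first hospital contacts, and the AER is the observed-minus-expected count per 1,000 person-years. A sketch with an invented expected count (the abstract does not report it):

```python
# Sketch of the standardized hospitalization rate ratio (SHRR) and absolute
# excess risk (AER); the expected count below is invented for illustration.

def shrr_and_aer(observed, expected, person_years):
    shrr = observed / expected                         # ratio vs. comparison cohort
    aer = (observed - expected) / person_years * 1000  # excess per 1,000 person-years
    return shrr, aer

# 501 observed first hospital contacts over 5,500 person-years (from the
# abstract); an expected count of 218 is assumed here purely to illustrate.
shrr, aer = shrr_and_aer(observed=501, expected=218, person_years=5500)
print(f"SHRR = {shrr:.1f}, AER = {aer:.0f} per 1,000 person-years")
```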
Improving the toughness of ultrahigh strength steel
NASA Astrophysics Data System (ADS)
Sato, Koji
2002-01-01
The ideal structural steel combines high strength with high fracture toughness. This dissertation discusses the toughening mechanism of the Fe/Co/Ni/Cr/Mo/C steel, AerMet 100, which has the highest toughness/strength combination among all commercial ultrahigh strength steels. The possibility of improving the toughness of this steel was examined by considering several relevant factors. Chapter 1 reviews the mechanical properties of ultrahigh strength steels and the physical metallurgy of AerMet 100. It also describes the fracture mechanisms of steel, i.e. ductile microvoid coalescence, brittle transgranular cleavage, and intergranular separation. Chapter 2 examines the strength-toughness relationship for three heats of AerMet 100. A wide variation of toughness is obtained at the same strength level. The toughness varies despite the fact that all heats fracture in the ductile fracture mode. The difference originates from the inclusion content. A lower inclusion volume fraction and larger inclusion spacing give rise to a greater void growth factor and subsequently a higher fracture toughness. The fracture toughness value, JIc, is proportional to the particle spacing of the large non-metallic inclusions. Chapter 3 examines the ductile-brittle transition of AerMet 100 and the effect of a higher austenitization temperature, using the Charpy V-notch test. The standard heat treatment condition of AerMet 100 shows a gradual ductile-brittle transition due to its fine effective grain size. Austenitization at higher temperature increases the prior austenite grain size and packet size, leading to a steeper transition at a higher temperature. Both transgranular cleavage and intergranular separation are observed in the brittle fracture mode. Chapter 4 examines the effect of inclusion content, prior austenite grain size, and the amount of austenite on the strength-toughness relationship.
The highest toughness is achieved by low inclusion content, small prior austenite grain size, and a small content of stable austenite. The low inclusion content increases the strain at fracture. The reduction in prior austenite grain size prevents fast, unstable crack propagation by cleavage. The stable austenite decreases the strength of intergranular separation at the prior austenite grain boundaries, providing stress relief at the crack tip.
Harpaz, Rave; Vilar, Santiago; DuMouchel, William; Salmasian, Hojjat; Haerian, Krystl; Shah, Nigam H; Chase, Herbert S; Friedman, Carol
2013-01-01
Objective: Data-mining algorithms that can produce accurate signals of potentially novel adverse drug reactions (ADRs) are a central component of pharmacovigilance. We propose a signal-detection strategy that combines the adverse event reporting system (AERS) of the Food and Drug Administration and electronic health records (EHRs) by requiring signaling in both sources. We claim that this approach leads to improved accuracy of signal detection when the goal is to produce a highly selective ranked set of candidate ADRs. Materials and methods: Our investigation was based on over 4 million AERS reports and information extracted from 1.2 million EHR narratives. Well-established methodologies were used to generate signals from each source. The study focused on ADRs related to three high-profile serious adverse reactions. A reference standard of over 600 established and plausible ADRs was created and used to evaluate the proposed approach against a comparator. Results: The combined signaling system achieved a statistically significant large improvement over AERS (baseline) in the precision of top ranked signals. The average improvement ranged from 31% to almost threefold for different evaluation categories. Using this system, we identified a new association between the agent, rasburicase, and the adverse event, acute pancreatitis, which was supported by clinical review. Conclusions: The results provide promising initial evidence that combining AERS with EHRs via the framework of replicated signaling can improve the accuracy of signal detection for certain operating scenarios. The use of additional EHR data is required to further evaluate the capacity and limits of this system and to extend the generalizability of these results. PMID:23118093
NASA Astrophysics Data System (ADS)
Youssefi, Somayeh; Waring, Michael S.
2015-07-01
The ozonolysis of reactive organic gases (ROG), e.g. terpenes, generates secondary organic aerosol (SOA) indoors. The SOA formation strength of such reactions is parameterized by the aerosol mass fraction (AMF), a.k.a. SOA yield, which is the mass ratio of generated SOA to oxidized ROG. AMFs vary in magnitude both among and for individual ROGs. Here, we quantified dynamic SOA formation from the ozonolysis of α-pinene with 'transient AMFs,' which describe SOA formation due to pulse emission of a ROG in an indoor space with air exchange, as is common when consumer products are intermittently used in ventilated buildings. We performed 19 experiments at low, moderate, and high (0.30, 0.52, and 0.94 h-1, respectively) air exchange rates (AER) at varying concentrations of initial reactants. Transient AMFs as a function of peak SOA concentrations ranged from 0.071 to 0.25, and they tended to increase as the AER and product of the initial reactant concentrations increased. Compared to our similar research on limonene ozonolysis (Youssefi and Waring, 2014), for which formation strength was driven by secondary ozone reactions, the AER impact for α-pinene was opposite in direction and weaker, while the initial reactant product impact was in the same direction but stronger for α-pinene than for limonene. Linear fits of AMFs for α-pinene ozonolysis as a function of the AER and initial reactant concentrations are provided so that future indoor models can predict SOA formation strength.
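The AMF itself is the mass ratio named above: generated SOA divided by oxidized ROG. A minimal sketch with hypothetical concentrations:

```python
# Transient AMF sketch: mass ratio of peak generated SOA to oxidized ROG.
# The concentrations below are hypothetical, in micrograms per cubic metre.

def transient_amf(peak_soa_ug_m3, rog_oxidized_ug_m3):
    """Aerosol mass fraction (SOA yield) after a pulse ROG emission."""
    return peak_soa_ug_m3 / rog_oxidized_ug_m3

amf = transient_amf(peak_soa_ug_m3=30.0, rog_oxidized_ug_m3=200.0)
print(f"AMF = {amf:.3f}")  # 0.15 here falls within the reported 0.071-0.25 range
```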
NASA Astrophysics Data System (ADS)
Tokura, Norihito; Yamamoto, Takao; Kato, Hisato; Nakagawa, Akio
We have studied the dynamic avalanche phenomenon in an SOI lateral diode during reverse recovery by using a mixed-mode device simulation. In the study, it has been found that local impact ionization occurs near an anode-side field oxide edge, where a high-density hole current flows and a high electric field appears simultaneously. We propose that a p-type anode extension region (AER) along a trench side wall effectively sweeps out stored carriers beneath an anode p-diffusion layer during reverse recovery, resulting in reduction of the electric field and remarkable suppression of the dynamic avalanche. The AER reduces the total recovery charge and does not cause any increase in the total stored charge under a forward bias operation. This effect is verified experimentally by the fabricated device with AER. Thus, the developed SOI lateral diode is promising as a high-speed and highly rugged free-wheeling diode, which can be integrated into next-generation SOI microinverters.
Asarnow, R F; Cromwell, R L; Rennick, P M
1978-10-01
Twenty-four male schizophrenics, 12 (SFH) with schizophrenia in the immediate family and 12 (SNFH) with no evidence of schizophrenia in the family background, and 24 male control subjects, 12 highly educated (HEC), and 12 minimally educated (MEC), were assessed for premorbid social adjustment and were administered the Digit Symbol Substitution Test, a size estimation task, and the EEG average evoked response (AER) at different levels of stimulus intensity. As predicted from the stimulus redundancy formulation, the SFH patients were poorer in premorbid adjustment, were less often paranoid, functioned at a lower level of cognitive efficiency (poor digit symbol and greater absolute error on size estimation), were more chronic, and, in some respects, had size estimation indices of minimal scanning. Contrary to prediction, the SFH group had the strongest and most sustained augmenting response on AER, while the SNFH group shifted from an augmenting to a reducing pattern of response. The relationship between an absence of AER reducing and the presence of cognitive impairment in the SFH group was a major focus of discussion.
Jiang, Guoqian; Wang, Liwei; Liu, Hongfang; Solbrig, Harold R; Chute, Christopher G
2013-01-01
A semantically coded knowledge base of adverse drug events (ADEs) with severity information is critical for clinical decision support systems and translational research applications. However, it remains challenging to measure and identify the severity information of ADEs. The objective of the study is to develop and evaluate a semantic web based approach for building a knowledge base of severe ADEs based on the FDA Adverse Event Reporting System (AERS) reporting data. We utilized a normalized AERS reporting dataset and extracted putative drug-ADE pairs and their associated outcome codes in the domain of cardiac disorders. We validated the drug-ADE associations using ADE datasets from the Side Effect Resource (SIDER) and the UMLS. We leveraged the Common Terminology Criteria for Adverse Events (CTCAE) grading system and classified the ADEs into the CTCAE in the Web Ontology Language (OWL). We identified and validated 2,444 unique drug-ADE pairs in the domain of cardiac disorders, of which 760 pairs are in Grade 5, 775 pairs in Grade 4 and 2,196 pairs in Grade 3.
NASA Astrophysics Data System (ADS)
Yin, Xiang-Chu; Yu, Huai-Zhong; Kukshenko, Victor; Xu, Zhao-Yong; Wu, Zhishen; Li, Min; Peng, Keyin; Elizarov, Surgey; Li, Qi
2004-12-01
In order to verify some precursors such as LURR (Load/Unload Response Ratio) and AER (Accelerating Energy Release) before large earthquakes or macro-fracture in heterogeneous brittle media, four acoustic emission experiments involving large rock specimens under tri-axial stress were conducted. The specimens were loaded in two ways: monotonically or cyclically. The experimental results confirm that LURR and AER are precursors of macro-fracture in brittle media. A new measure called the state vector has been proposed to describe the damage evolution of loaded rock specimens.
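Under the common definition, LURR is the ratio of a response quantity (e.g. summed Benioff strain of events) accumulated during loading periods to that accumulated during unloading periods; this sketch assumes that definition and uses invented event data:

```python
# LURR sketch under the common definition Y = X(+)/X(-): response accumulated
# during loading periods divided by that during unloading periods.
# Event data below are invented for illustration.

def lurr(events):
    """events: list of (is_loading_period, response) pairs, e.g. Benioff strain."""
    loading = sum(resp for is_load, resp in events if is_load)
    unloading = sum(resp for is_load, resp in events if not is_load)
    return loading / unloading

events = [(True, 4.0), (False, 1.5), (True, 3.0), (False, 2.0)]
print(f"LURR = {lurr(events):.1f}")  # values well above 1 are taken as precursory
```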
Neuron array with plastic synapses and programmable dendrites.
Ramakrishnan, Shubha; Wunderlich, Richard; Hasler, Jennifer; George, Suma
2013-10-01
We describe a novel neuromorphic chip architecture that models neurons for efficient computation. Traditional architectures of neuron array chips consist of large scale systems that are interfaced with AER for implementing intra- or inter-chip connectivity. We present a chip that uses AER for inter-chip communication but uses fast, reconfigurable FPGA-style routing with local memory for intra-chip connectivity. We model neurons with biologically realistic channel models, synapses and dendrites. This chip is suitable for small-scale network simulations and can also be used for sequence detection, utilizing directional selectivity properties of dendrites, ultimately for use in word recognition.
REFSIM Handbook of Variable Names.
1982-11-04
[Garbled extract of the handbook's variable-name table. Recoverable structure: each entry lists a variable name, its type (e.g. INT2, INT4), the common block it resides in (/AERO/, /INTERP/, /CSAS/, /ASE/), and a description, e.g. "D2THET: missile pitch acceleration in degrees/second**2" and "DELA: peak magnitude difference at port and starboard (db/m**2)".]
Planning and Execution for an Autonomous Aerobot
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara A.; Schaffer, Steven R.; Chouinard, Caroline M.
2010-01-01
The Aerial Onboard Autonomous Science Investigation System (AerOASIS) system provides autonomous planning and execution capabilities for aerial vehicles (see figure). The system is capable of generating high-quality operations plans that integrate observation requests from ground planning teams, as well as opportunistic science events detected onboard the vehicle while respecting mission and resource constraints. AerOASIS allows an airborne planetary exploration vehicle to summarize and prioritize the most scientifically relevant data; identify and select high-value science sites for additional investigation; and dynamically plan, schedule, and monitor the various science activities being performed, even during extended communications blackout periods with Earth.
Event generators for address event representation transmitters
NASA Astrophysics Data System (ADS)
Serrano-Gotarredona, Rafael; Serrano-Gotarredona, Teresa; Linares Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate 'events' according to their activity levels. More active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. In a typical AER transmitter chip, there is an array of neurons that generate events. They send events to a peripheral circuitry (let's call it the "AER Generator") that transforms those events into neuron coordinates (addresses), which are put sequentially on a high-speed interchip digital bus. This bus includes a parallel multi-bit address word plus Rqst (request) and Ack (acknowledge) handshaking signals for asynchronous data exchange. There have been two main approaches published in the literature for implementing such "AER Generator" circuits. They differ in the way they handle event collisions coming from the array of neurons. One approach is based on detecting and discarding collisions, while the other incorporates arbitration for sequencing colliding events. The first approach is supposed to be simpler and faster, while the second is able to handle much higher event traffic. In this article we will concentrate on the second, arbiter-based approach. Boahen has published several techniques for implementing and improving the arbiter-based approach. Originally, he proposed an arbitration scheme by rows, followed by column arbitration.
In this scheme, while one neuron was selected by the arbiters to transmit its event off chip, the rest of the neurons in the array were frozen and could not transmit further events during this time window. This limited the maximum transmission speed. To improve this speed, Boahen proposed an improved 'burst mode' scheme, in which, after the row arbitration, a complete row of events is pipelined out of the array and arbitrated out of the chip at higher speed. During this single-row event arbitration, the array is free to generate new events and communicate with the row arbiter in a pipelined mode. This scheme significantly improves the maximum event transmission speed, especially for high-traffic situations where speed is most critical. We have analyzed and studied this approach and have detected some shortcomings in the circuits reported by Boahen, which may produce erroneous behavior under some statistical conditions. The present paper proposes some improvements to overcome such situations. The improved "AER Generator" has been implemented in an AER transmitter system
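The four-phase Rqst/Ack exchange on the inter-chip bus described above can be sketched as a toy simulation; the class and method names here are invented for illustration and do not correspond to any published circuit:

```python
# Toy model of the AER four-phase Rqst/Ack handshake on the inter-chip bus;
# the event queue and structure are invented for illustration.

from collections import deque

class AerBus:
    def __init__(self):
        self.rqst = False
        self.ack = False
        self.address = None

class Transmitter:
    """Puts one neuron address on the bus per four-phase handshake cycle."""
    def __init__(self, bus, events):
        self.bus, self.events = bus, deque(events)

    def step(self, receiver):
        addr = self.events.popleft()
        self.bus.address, self.bus.rqst = addr, True  # phase 1: assert Rqst
        receiver.on_rqst()                            # phase 2: receiver asserts Ack
        assert self.bus.ack
        self.bus.rqst = False                         # phase 3: release Rqst
        receiver.on_rqst_release()                    # phase 4: receiver releases Ack

class Receiver:
    def __init__(self, bus):
        self.bus, self.received = bus, []
    def on_rqst(self):
        self.received.append(self.bus.address)
        self.bus.ack = True
    def on_rqst_release(self):
        self.bus.ack = False

bus = AerBus()
rx = Receiver(bus)
tx = Transmitter(bus, events=[(2, 5), (2, 6), (7, 1)])  # (row, col) addresses
while tx.events:
    tx.step(rx)
print(rx.received)  # addresses arrive in transmission order
```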
Incorporating High-Throughput Exposure Predictions with ...
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast™ HTS data by translating in vitro bioactivity concentrations to oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compared against regulatory exposure estimates, providing an activity-to-exposure ratio (AER) useful for a risk-based ranking strategy. As ToxCast™ efforts expand (i.e., Phase II) beyond food-use pesticides towards a wider chemical domain that lacks exposure and toxicity information, prediction tools become increasingly important. In this study, in vitro hepatic clearance and plasma protein binding were measured to estimate OEDs for a subset of Phase II chemicals. OEDs were compared against high-throughput (HT) exposure predictions generated by the U.S. EPA ExpoCast™ program using probabilistic modeling and Bayesian approaches. This approach incorporated chemical-specific use and national production volume data with biomonitoring data to inform the exposure predictions. This HT exposure modeling approach provided predictions for all Phase II chemicals assessed in this study, whereas estimates from regulatory sources were available for only 7% of chemicals. Of the 163 chemicals assessed in this study, three or 13 chemicals possessed AERs <1 or <100, respectively. Diverse bioactivities across a range of assays and concentrations were also noted across the wider chemical space ...
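The AER-based ranking reduces to dividing each chemical's OED by its predicted exposure and flagging small ratios for priority; a sketch with hypothetical chemicals and doses:

```python
# Risk-based ranking sketch: AER = oral equivalent dose (OED) at bioactivity
# divided by predicted exposure. Chemical names and doses are hypothetical.

chems = {
    # name: (OED, predicted exposure), both in mg/kg/day
    "chem_A": (0.5, 1.0),     # AER < 1: bioactive dose below predicted exposure
    "chem_B": (2.0, 0.05),    # AER = 40: within the <100 screening band
    "chem_C": (10.0, 0.001),  # AER = 10000: large margin
}

aers = {name: oed / expo for name, (oed, expo) in chems.items()}
for name, aer in sorted(aers.items(), key=lambda kv: kv[1]):
    flag = "priority" if aer < 100 else "lower concern"
    print(f"{name}: AER = {aer:g} ({flag})")
```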
The General Concept of Benchmarking and Its Application in Higher Education in Europe
ERIC Educational Resources Information Center
Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna
2009-01-01
The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…
Bacchi, Elisabetta; Negri, Carlo; Targher, Giovanni; Faccioli, Niccolò; Lanza, Massimo; Zoppini, Giacomo; Zanolin, Elisabetta; Schena, Federico; Bonora, Enzo; Moghetti, Paolo
2013-10-01
Although lifestyle interventions are considered the first-line therapy for nonalcoholic fatty liver disease (NAFLD), which is extremely common in people with type 2 diabetes, no intervention studies have compared the effects of aerobic (AER) or resistance (RES) training on hepatic fat content in type 2 diabetic subjects with NAFLD. In this randomized controlled trial, we compared the 4-month effects of either AER or RES training on insulin sensitivity (by hyperinsulinemic euglycemic clamp), body composition (by dual-energy X-ray absorptiometry), as well as hepatic fat content and visceral (VAT), superficial (SSAT), and deep (DSAT) subcutaneous abdominal adipose tissue (all quantified by an in-opposed-phase magnetic resonance imaging technique) in 31 sedentary adults with type 2 diabetes and NAFLD. After training, hepatic fat content was markedly reduced (P < 0.001), to a similar extent, in both the AER and the RES training groups (mean relative reduction from baseline [95% confidence interval] -32.8% [-58.20 to -7.52] versus -25.9% [-50.92 to -0.94], respectively). Additionally, hepatic steatosis (defined as hepatic fat content >5.56%) disappeared in about one-quarter of the patients in each intervention group (23.1% in the AER group and 23.5% in the RES group). Insulin sensitivity during euglycemic clamp was increased, whereas total body fat mass, VAT, SSAT, and hemoglobin A1c were reduced comparably in both intervention groups. This is the first randomized controlled study to demonstrate that resistance training and aerobic training are equally effective in reducing hepatic fat content among type 2 diabetic patients with NAFLD. Copyright © 2013 by the American Association for the Study of Liver Diseases.
Sakaeda, Toshiyuki; Kadoyama, Kaori; Okuno, Yasushi
2011-01-01
Adverse event reports (AERs) submitted to the US Food and Drug Administration (FDA) were reviewed to assess the muscular and renal adverse events induced by the administration of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors (statins) and to attempt to determine the rank-order of the association. After a revision of arbitrary drug names and the deletion of duplicated submissions, AERs involving pravastatin, simvastatin, atorvastatin, or rosuvastatin were analyzed. Authorized pharmacovigilance tools were used for quantitative detection of signals, i.e., drug-associated adverse events, including the proportional reporting ratio, the reporting odds ratio, the information component given by a Bayesian confidence propagation neural network, and the empirical Bayes geometric mean. Myalgia, rhabdomyolysis and an increase in creatine phosphokinase level were focused on as the muscular adverse events, and acute renal failure, non-acute renal failure, and an increase in blood creatinine level as the renal adverse events. Based on 1,644,220 AERs from 2004 to 2009, signals were detected for 4 statins with respect to myalgia, rhabdomyolysis, and an increase in creatine phosphokinase level, but these signals were stronger for rosuvastatin than pravastatin and atorvastatin. Signals were also detected for acute renal failure, though in the case of atorvastatin, the association was marginal, and furthermore, a signal was not detected for non-acute renal failure or for an increase in blood creatinine level. Data mining of the FDA's adverse event reporting system, AERS, is useful for examining statin-associated muscular and renal adverse events. The data strongly suggest the necessity of well-organized clinical studies with respect to statin-associated adverse events.
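The proportional reporting ratio and reporting odds ratio mentioned above are computed from a 2x2 contingency table of report counts; a sketch with invented counts, not FDA data:

```python
# Disproportionality sketch on a 2x2 contingency table of AERS report counts.
#                 event of interest | all other events
# drug of interest        a        |        b
# all other drugs         c        |        d
# The counts below are invented for illustration.

def prr(a, b, c, d):
    """Proportional reporting ratio."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting odds ratio."""
    return (a * d) / (b * c)

a, b, c, d = 120, 9880, 400, 989600
print(f"PRR = {prr(a, b, c, d):.1f}, ROR = {ror(a, b, c, d):.1f}")
```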
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, ... Also, there has been a proposal for realizing programmable kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital implementation reference against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable kernel image convolution processing.
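The threshold-and-fire behavior of such a convolution pixel can be sketched as follows; the threshold and event weights are illustrative, not taken from the reported design:

```python
# Sketch of an event-driven convolution pixel: each incoming address event
# contributes a (signed) kernel weight; when the accumulator magnitude
# reaches a fixed threshold the pixel fires an output event and resets.
# Threshold and weights are invented for illustration.

class ConvolutionPixel:
    def __init__(self, threshold=8):
        self.threshold = threshold
        self.acc = 0

    def add_event(self, weight):
        """Accumulate one weighted input event; return True if the pixel fires."""
        self.acc += weight
        if abs(self.acc) >= self.threshold:
            self.acc = 0  # reset after firing
            return True
        return False

pixel = ConvolutionPixel(threshold=8)
fired = [pixel.add_event(w) for w in [3, 3, -1, 4, 2, 3, 3]]
print(fired)
```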
Effect of temperature on the standard metabolic rates of juvenile and adult Exopalaemon carinicauda
NASA Astrophysics Data System (ADS)
Zhang, Chengsong; Li, Fuhua; Xiang, Jianhai
2015-03-01
Ridgetail white prawn (Exopalaemon carinicauda) are of significant economic importance in China where they are widely cultured. However, there is little information on the basic biology of this species. We evaluated the effect of temperature (16, 19, 22, 25, 28, 31, and 34°C) on the standard metabolic rates (SMRs) of juvenile and adult E. carinicauda in the laboratory under static conditions. The oxygen consumption rate (OCR), ammonia-N excretion rate (AER), and atomic ratio of oxygen consumed to nitrogen consumed (O:N ratio) of juvenile and adult E. carinicauda were significantly influenced by temperature (P < 0.05). Both the OCR and AER of juveniles increased significantly with increasing temperature from 16 to 34°C, but the maximum OCR for adults was at 31°C. Juvenile shrimp exhibited a higher OCR than the adults from 19 to 34°C. There was no significant difference between the AERs of the two life-stages from 16 to 31°C (P > 0.05). The O:N ratio in juveniles was significantly higher than that in the adults over the entire temperature range (P < 0.05). The temperature coefficient (Q10) of OCR and AER ranged from 5.03 to 0.86 and from 6.30 to 0.85 for the adults, respectively, and from 6.09 to 1.03 and from 3.66 to 1.80 for the juveniles, respectively. The optimal temperature range for growth of the juvenile and adult shrimp was from 28 to 31°C, based on Q10 and SMR values. Results from the present study may be used to guide pond culture production of E. carinicauda.
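The temperature coefficient Q10 used above follows the standard van't Hoff form, Q10 = (R2/R1)^(10/(T2-T1)); a sketch with hypothetical rates:

```python
# Q10 temperature coefficient sketch: the factor by which a metabolic rate
# changes per 10 degC step, from two measured rates. Rates are hypothetical.

def q10(rate1, rate2, t1, t2):
    """Q10 = (R2/R1) ** (10 / (T2 - T1))."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

# e.g. an oxygen consumption rate that doubles between 22 and 28 degC
print(f"Q10 = {q10(rate1=1.0, rate2=2.0, t1=22.0, t2=28.0):.2f}")
```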
NASA Technical Reports Server (NTRS)
Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)
1993-01-01
A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
Pramanik, Biplob Kumar; Pramanik, Sagor Kumar; Sarker, Dipok Chandra; Suja, Fatihah
2017-06-01
The effects of ozonation, anion exchange resin (AER) and UV/H2O2 were investigated as pre-treatments to control organic fouling (OF) of an ultrafiltration membrane in the treatment of drinking water. It was found that high molecular weight (MW) organics such as protein and polysaccharide substances were mainly responsible for reversible fouling, which contributed 90% of total fouling. The flux decline rate increased with successive filtration cycles due to deposition of protein content over time. All pre-treatments reduced the foulants of the ultrafiltration membrane, which contributed to the improvement in flux, and there was a greater improvement of flux by UV/H2O2 (61%) than by ozonation (43%), which in turn was greater than by AER (23%) treatment. This was likely due to the effective removal/breakdown of high MW organic content. AER gave greater removal of biofouling potential components (such as biodegradable dissolved organic carbon and assimilable organic carbon contents) compared to UV/H2O2 and ozonation treatment. Overall, this study demonstrated the potential of pre-treatments for reducing OF of ultrafiltration for the treatment of drinking water.
Embryology meets molecular biology: Deciphering the apical ectodermal ridge.
Verheyden, Jamie M; Sun, Xin
2017-09-15
More than sixty years ago, while studying feather tracks on the shoulder of the chick embryo, Dr. John Saunders used Nile Blue dye to stain the tissue. There, he noticed a darkly stained line of cells that neatly rims the tip of the growing limb bud. Rather than ignoring this observation, he followed it up by removing this tissue and found that it led to a striking truncation of the limb skeletons. This landmark experiment marks the serendipitous discovery of the apical ectodermal ridge (AER), the quintessential embryonic structure that drives the outgrowth of the limb. Dr. Saunders continued to lead the limb field for the next fifty years, not just through his own work, but also by inspiring the next generation of researchers through his infectious love of science. Together, he and those who followed ushered in the discovery of fibroblast growth factor (FGF) as the AER molecule. The seamless marriage of embryology and molecular biology that led to the decoding of the AER serves as a shining example of how discoveries are made for the rest of the developmental biology field. Copyright © 2017 Elsevier Inc. All rights reserved.
The effect of obesity and type 1 diabetes on renal function in children and adolescents.
Franchini, Simone; Savino, Alessandra; Marcovecchio, M Loredana; Tumini, Stefano; Chiarelli, Francesco; Mohn, Angelika
2015-09-01
Early signs of renal complications can be common in youths with type 1 diabetes (T1D). Recently, there has been increasing interest in potential renal complications associated with obesity, paralleling the epidemic of this condition, although data in children are limited. The aim of this study was to assess whether obese children and adolescents present signs of early alterations in renal function similar to those of non-obese peers with T1D. Eighty-three obese (age: 11.6 ± 3.0 yr), 164 non-obese T1D (age: 12.4 ± 3.2 yr), and 71 non-obese control (age: 12.3 ± 3.2 yr) children and adolescents were enrolled in the study. Anthropometric parameters and blood pressure were measured. Renal function was assessed by albumin excretion rate (AER), serum cystatin C, creatinine and estimated glomerular filtration rate (e-GFR), calculated using Bouvet's formula. Obese and non-obese T1D youths had similar AER [8.9(5.9-10.8) vs. 8.7(5.9-13.1) µg/min] and e-GFR levels (114.8 ± 19.6 vs. 113.4 ± 19.1 mL/min), which were higher than in controls [AER: 8.1(5.9-8.7) µg/min, e-GFR: 104.7 ± 18.9 mL/min]. Prevalence of microalbuminuria and hyperfiltration was similar between obese and T1D youths and higher than in their control peers (6.0 vs. 8.0 vs. 0%, p = 0.02; 15.9 vs. 15.9 vs. 4.3%, p = 0.03, respectively). Body mass index (BMI) z-score was independently related to e-GFR (r = 0.328; p < 0.001) and AER (r = 0.138; p = 0.017). Hemoglobin A1c (HbA1c) correlated with AER (r = 0.148; p = 0.007) but not with e-GFR (r = 0.041; p = 0.310). Obese children and adolescents show early alterations in renal function compared to normal-weight peers, and renal profiles similar to those of age-matched peers with T1D. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Roux, Perrine; Rojas Castro, Daniela; Ndiaye, Khadim; Debrus, Marie; Protopopescu, Camélia; Le Gall, Jean-Marie; Haas, Aurélie; Mora, Marion; Spire, Bruno; Suzan-Monti, Marie; Carrieri, Patrizia
2016-01-01
Aims: The community-based AERLI intervention provided training and education to people who inject drugs (PWID) about HIV and HCV transmission risk reduction, with a focus on drug injecting practices, other injection-related complications, and access to HIV and HCV testing and care. We hypothesized that in such a population, where HCV prevalence is very high and few know their HCV serostatus, AERLI would lead to increased HCV testing. Methods: The national multisite intervention study ANRS-AERLI consisted of assessing the impact of an injection-centered face-to-face educational session offered in volunteer harm reduction (HR) centers (“with intervention”) compared with standard HR centers (“without intervention”). The study included 271 PWID interviewed on three occasions: enrolment, 6 and 12 months. Participants in the intervention group received at least one face-to-face educational session during the first 6 months. Measurements: The primary outcome of this analysis was reporting having been tested for HCV during the previous 6 months. Statistical analyses used a two-step Heckman approach to account for bias arising from the non-randomized clustering design. This approach identified factors associated with HCV testing during the previous 6 months. Findings: Of the 271 participants, 127 and 144 were enrolled in the control and intervention groups, respectively. Of the latter, 113 received at least one educational session. For the present analysis, we selected 114 and 88 participants eligible for HCV testing in the control and intervention groups, respectively. In the intervention group, 44% of participants reported having been tested for HCV during the previous 6 months at enrolment and 85% at 6 or 12 months. In the control group, these percentages were 51% at enrolment and 78% at 12 months.
Multivariable analyses showed that participants who received at least one educational session during follow-up were more likely to report HCV testing compared with those who did not receive the intervention (4.13 [95% CI: 1.03-16.60]). Conclusion: The educational intervention AERLI had previously been shown to be effective in reducing HCV at-risk practices and associated cutaneous complications, and also appears to have a positive impact on increasing HCV testing in PWID. PMID:27294271
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandor, Debra; Chung, Donald; Keyser, David
This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.
1991-01-01
A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
ERIC Educational Resources Information Center
McGregor, Ellen N.; Attinasi, Louis C., Jr.
This paper describes the processes involved in selecting peer institutions for appropriate benchmarking using national databases (NCES-IPEDS). Benchmarking involves the identification of peer institutions and/or best practices in specific operational areas for the purpose of developing standards. The benchmarking process was born in the early…
How to Advance TPC Benchmarks with Dependability Aspects
NASA Astrophysics Data System (ADS)
Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco
Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact on both the revenues and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring recovery performance. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.
Evaluation of control strategies using an oxidation ditch benchmark.
Abusam, A; Keesman, K J; Spanjers, H; van, Straten G; Meinema, K
2002-01-01
This paper presents validation and implementation results of a benchmark developed for a specific full-scale oxidation ditch wastewater treatment plant. A benchmark is a standard simulation procedure that can be used as a tool for evaluating various control strategies proposed for wastewater treatment plants. It is based on the development of a plant model and performance criteria. Testing of this benchmark, by comparing benchmark predictions to real measurements of electrical energy consumption and amounts of disposed sludge for a specific oxidation ditch WWTP, has shown that it can reasonably be used for evaluating the performance of this WWTP. Subsequently, the validated benchmark was used to evaluate some basic and advanced control strategies. Some of the interesting results obtained are the following: (i) the influent flow splitting ratio, between the first and the fourth aerated compartments of the ditch, has no significant effect on the TN concentrations in the effluent; and (ii) for evaluation of long-term control strategies, future benchmarks need to be able to assess settlers' performance.
Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0
NASA Technical Reports Server (NTRS)
Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine
2004-01-01
We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.
HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Mark; Brown, Jed; Shalf, John
2014-05-05
This document provides an overview of the benchmark HPGMG for ranking large-scale general-purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background on the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and give an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.
Zheng, Shaokui; Li, Xiaofeng; Zhang, Xueyu; Wang, Wei; Yuan, Shengliu
2017-09-01
This study investigated the potential effect of four frequently used inorganic regenerant properties (i.e., ionic strength, cation type, anion type, and regeneration solution volume) on the desorption and adsorption performance of 14 pharmaceuticals, belonging to 12 therapeutic classes with different predominant chemical forms and hydrophobicities, using polymeric anion exchange resin (AER)-packed fixed-bed column tests. After preconditioning with NaCl, NaOH, or saline-alkaline (SA) solutions, all resulting mobile counterion types of AERs effectively adsorbed all 14 pharmaceuticals, with adsorption preference OH⁻-type = (Cl⁻ + OH⁻)-type > Cl⁻-type. During regeneration, ionic strength (1 M versus 3 M NaCl) had no significant influence on desorption performance for any of the 14 pharmaceuticals, and no regenerant cation type (HCl versus NaCl) or anion type (NaCl versus NaOH and SA) achieved higher desorption efficiencies for all pharmaceuticals. A volumetric increase in 1 M or 3 M NaCl solutions significantly improved the desorption efficiencies of most pharmaceuticals, irrespective of ionic strength. The results indicate that regeneration protocols, including regenerant cation type, anion type and volume, should be optimized to improve pharmaceutical removal by AERs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Coupling Processes between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra K.; Shia, Run-Lie; Scott, Courtney J.; Sze, Nien Dak
1998-01-01
This is the fourth semi-annual report for NAS5-97039, covering the time period July through December 1998. The overall objective of this project is to improve the understanding of coupling processes between atmospheric chemistry and climate. Model predictions of the future distributions of trace gases in the atmosphere constitute an important component of the input necessary for quantitative assessments of global change. We will concentrate on the changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The key modeling tools for this work are the Atmospheric and Environmental Research (AER) two-dimensional chemistry-transport model, the AER two-dimensional stratospheric sulfate model, and the AER three-wave interactive model with full chemistry. For this six-month period, we report on a modeling study of new rate constants which modify the NOx/NOy ratio in the lower stratosphere; sensitivity to changes in stratospheric water vapor in the future atmosphere; a study of N2O and CH4 observations which has allowed us to adjust diffusion in the 2-D CTM in order to obtain appropriate polar vortex isolation; a study of SF6 and age of air with comparisons of models and measurements; and a report on the Models and Measurements II effort.
Edwards, Beatrice J.; Usmani, Sarah; Raisch, Dennis W.; McKoy, June M.; Samaras, Athena T.; Belknap, Steven M.; Trifilio, Steven M.; Hahr, Allison; Bunta, Andrew D.; Abu-Alfa, Ali; Langman, Craig B.; Rosen, Steve T.; West, Dennis P.
2013-01-01
Purpose: To determine whether acute kidney injury (AKI) is identified within the US Food and Drug Administration's Adverse Event Reporting System (FDA AERS) as an adverse event resulting from bisphosphonate (BP) use in cancer therapy. Methods: A search of the FDA AERS records from January 1998 through June 2009 was performed; search terms were “renal problems” and all drug names for BPs. The search resulted in 2,091 reports. We analyzed for signals of disproportional association by calculating the proportional reporting ratio for zoledronic acid (ZOL) and pamidronate. A literature review of BP-associated renal injury within the cancer setting was conducted. Results: Four hundred eighty cases of BP-associated acute kidney injury (AKI) were identified in patients with cancer. Two hundred ninety-eight patients (56%) were female; mean age was 66 ± 10 years. Multiple myeloma (n = 220, 46%), breast cancer (n = 98, 20%), and prostate cancer (n = 24, 5%) were identified. Agents included ZOL (n = 411, 87.5%), pamidronate (n = 8, 17%), and alendronate (n = 36, 2%). Outcomes included hospitalization (n = 304, 63.3%) and death (n = 68, 14%). The proportional reporting ratio for ZOL was 1.22 (95% CI, 1.13 to 1.32) and for pamidronate was 1.55 (95% CI, 1.25 to 1.65), reflecting a nonsignificant safety signal for both drugs. Conclusion: AKI was identified in BP cancer clinical trials, although a safety signal for BPs and AKI within the FDA AERS was not detected. Our findings may be attributed, in part, to clinicians who believe that AKI occurs infrequently; ascribe the AKI to underlying premorbid disease, therapy, or cancer progression; or consider that AKI is a known adverse drug reaction of BPs and thus under-report AKI to the AERS. PMID:23814519
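The proportional reporting ratio used in this record can be computed from a 2x2 contingency table of reports (drug of interest vs. all other drugs, event of interest vs. all other events). A minimal Python sketch with the standard log-scale confidence interval; the counts below are hypothetical, for illustration only, and are not taken from the study:

```python
import math

def prr_with_ci(a, b, c, d, z=1.96):
    """Proportional reporting ratio (PRR) from a 2x2 pharmacovigilance table.

    a: reports with the drug AND the event of interest
    b: reports with the drug and other events
    c: reports with other drugs AND the event
    d: reports with other drugs and other events
    Returns (PRR, lower 95% CI bound, upper 95% CI bound).
    """
    prr = (a / (a + b)) / (c / (c + d))
    # Standard error of ln(PRR), the usual approximation for this ratio
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(prr) - z * se)
    hi = math.exp(math.log(prr) + z * se)
    return prr, lo, hi

# Hypothetical counts, for illustration only
prr, lo, hi = prr_with_ci(a=480, b=1611, c=900, d=4000)
```

A PRR point estimate above 1 with a lower confidence bound above 1 suggests the event is reported disproportionately often with the drug, though signal criteria in practice often also require minimum case counts.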
Differences in Antipsychotic-Related Adverse Events in Adult, Pediatric, and Geriatric Populations.
Sagreiya, Hersh; Chen, Yi-Ren; Kumarasamy, Narmadan A; Ponnusamy, Karthik; Chen, Doris; Das, Amar K
2017-02-26
In recent years, antipsychotic medications have increasingly been used in pediatric and geriatric populations, despite the fact that many of these drugs were approved based on clinical trials in adult patients only. Preliminary studies have shown that the "off-label" use of these drugs in pediatric and geriatric populations may result in adverse events not found in adults. In this study, we utilized the large-scale U.S. Food and Drug Administration (FDA) Adverse Event Reporting System (AERS) database to look at differences in adverse events from antipsychotics among adult, pediatric, and geriatric populations. We performed a systematic analysis of the FDA AERS database using MySQL, standardizing the database with structured terminologies and ontologies. We compared adverse event profiles of atypical versus typical antipsychotic medications among adult (18-65), pediatric (age < 18), and geriatric (> 65) populations. We found statistically significant differences between the number of adverse events in the pediatric versus adult populations with aripiprazole, clozapine, fluphenazine, haloperidol, olanzapine, quetiapine, risperidone, and thiothixene, and between the geriatric versus adult populations with aripiprazole, chlorpromazine, clozapine, fluphenazine, haloperidol, paliperidone, promazine, risperidone, thiothixene, and ziprasidone (p < 0.05, with adjustment for multiple comparisons). Furthermore, the particular types of adverse events reported also varied significantly between each population for aripiprazole, clozapine, haloperidol, olanzapine, quetiapine, risperidone, and ziprasidone (chi-square, p < 10⁻⁶). Diabetes was the most commonly reported side effect in the adult population, compared to behavioral problems in the pediatric population and neurologic symptoms in the geriatric population. We also found discrepancies between the frequencies of reports in AERS and in the literature.
Our analysis of the FDA AERS database shows that there are significant differences in both the numbers and types of adverse events among these age groups and between atypical and typical antipsychotics. It is important for clinicians to be mindful of these differences when prescribing antipsychotics, especially when prescribing medications off-label.
Bellavere, F; Cacciatori, V; Bacchi, E; Gemma, M L; Raimondo, D; Negri, C; Thomaseth, K; Muggeo, M; Bonora, E; Moghetti, P
2018-03-01
Both aerobic (AER) and resistance (RES) training improve metabolic control in patients with type 2 diabetes (T2DM). However, information on the effects of these training modalities on cardiovascular autonomic control is limited. Our aim was to compare the effects of AER and RES training on cardiovascular autonomic function in these subjects. Cardiovascular autonomic control was assessed by Power Spectral Analysis (PSA) of Heart Rate Variability (HRV) and baroreceptors function indexes in 30 subjects with T2DM, randomly assigned to aerobic or resistance training for 4 months. In particular, PSA of HRV measured the Low Frequency (LF) and High Frequency (HF) bands of RR variations, expression of prevalent sympathetic and parasympathetic drive, respectively. Furthermore, we measured the correlation occurring between systolic blood pressure and heart rate during a standardized Valsalva maneuver using two indexes, b2 and b4, considered an expression of baroreceptor sensitivity and peripheral vasoactive adaptations during predominant sympathetic and parasympathetic drive, respectively. After training, the LF/HF ratio, which summarizes the sympatho-vagal balance in HRV control, was similarly decreased in the AER and RES groups. After AER, b2 and b4 significantly improved. After RES, changes of b2 were of borderline significance, whereas changes of b4 did not reach statistical significance. However, comparison of changes in baroreceptor sensitivity indexes between groups did not show statistically significant differences. Both aerobic and resistance training improve several indices of the autonomic control of the cardiovascular system in patients with T2DM. Although these improvements seem to occur to a similar extent in both training modalities, some differences cannot be ruled out. NCT01182948, clinicaltrials.gov. 
Copyright © 2017 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.
Gandhi, Pranav K; Gentry, William M; Bottorff, Michael B
2012-10-01
To investigate reports of thrombotic events associated with the use of C1 esterase inhibitor products in patients with hereditary angioedema in the United States. Retrospective data mining analysis. The United States Food and Drug Administration (FDA) adverse event reporting system (AERS) database. Case reports of C1 esterase inhibitor products, thrombotic events, and C1 esterase inhibitor product-associated thrombotic events (i.e., combination cases) were extracted from the AERS database, using the time frames of each respective product's FDA approval date through the second quarter of 2011. Bayesian statistical methodology within the neural network architecture was implemented to identify potential signals of a drug-associated adverse event. A potential signal is generated when the lower limit of the 95% 2-sided confidence interval of the information component, denoted by IC₀₂₅, is greater than zero. This suggests that the particular drug-associated adverse event was reported to the database more often than statistically expected from reports available in the database. Ten combination cases of thrombotic events associated with the use of one C1 esterase inhibitor product (Cinryze) were identified in patients with hereditary angioedema. A potential signal demonstrated by an IC₀₂₅ value greater than zero (IC₀₂₅ = 2.91) was generated for these combination cases. The extracted cases from the AERS indicate continuing reports of thrombotic events associated with the use of one C1 esterase inhibitor product among patients with hereditary angioedema. The AERS is incapable of establishing a causal link and detecting the true frequency of an adverse event associated with a drug; however, potential signals of C1 esterase inhibitor product-associated thrombotic events among patients with hereditary angioedema were identified in the extracted combination cases. © 2012 Pharmacotherapy Publications, Inc.
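The information component (IC) and its IC₀₂₅ lower bound described above come from the Bayesian confidence propagation neural network (BCPNN) approach to disproportionality analysis. A simplified sketch of the shrinkage point estimate together with an approximate 2.5% credibility bound (the closed-form approximation attributed to Norén and colleagues); the counts are hypothetical, and a real AERS analysis involves substantial deduplication and preprocessing not shown here:

```python
import math

def ic_with_lower_bound(n_xy, n_x, n_y, n_total):
    """Shrinkage information component and an approximate IC025 bound.

    n_xy: reports with the drug AND the event
    n_x:  reports with the drug (any event)
    n_y:  reports with the event (any drug)
    n_total: all reports in the database
    Returns (IC point estimate, approximate 2.5% credibility bound).
    """
    expected = n_x * n_y / n_total  # expected co-occurrence count
    ic = math.log2((n_xy + 0.5) / (expected + 0.5))
    # Closed-form approximation of the lower credibility bound
    lower = ic - 3.3 * (n_xy + 0.5) ** -0.5 - 2.0 * (n_xy + 0.5) ** -1.5
    return ic, lower

# Hypothetical counts, for illustration only
ic, lower = ic_with_lower_bound(n_xy=10, n_x=40, n_y=2000, n_total=4_000_000)
```

As in the abstract, a lower bound greater than zero is read as a potential signal: the drug-event pair is reported more often than expected under independence.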
A Continuum of Renin-Independent Aldosteronism in Normotension
Baudrand, Rene; Guarda, Francisco J.; Fardella, Carlos; Hundemer, Gregory; Brown, Jenifer; Williams, Gordon; Vaidya, Anand
2017-01-01
Primary aldosteronism (PA) is a severe form of autonomous aldosteronism. Milder forms of autonomous and renin-independent aldosteronism may be common, even in normotension. We characterized aldosterone secretion in 210 normotensives who had suppressed plasma renin activity (PRA<1.0 ng/mL/h), completed an oral sodium suppression test, received an infusion of angiotensin II (AngII), and had measurements of blood pressure (BP) and renal plasma flow (RPF). Continuous associations between urinary aldosterone excretion rate (AER), renin, and potassium handling were investigated. Severe autonomous aldosterone secretion that was consistent with confirmed PA was defined based on accepted criteria of an AER >12 mcg/24h with urinary sodium excretion >200 mmol/24h. Across the population, there were strong and significant associations between higher AER and higher urinary potassium excretion, higher AngII-stimulated aldosterone, and lower PRA, suggesting a continuum of renin-independent aldosteronism and mineralocorticoid receptor activity. Autonomous aldosterone secretion that fulfilled confirmatory criteria for PA was detected in 29 participants (14%). Normotensives with evidence suggestive of confirmed PA had higher 24h urinary AER (20.2±12.2 vs. 6.2±2.9 mcg/24h, P<0.001) as expected, but also higher AngII-stimulated aldosterone (12.4±8.6 vs. 6.6±4.3 ng/dL, P<0.001) and lower 24h urinary sodium-to-potassium excretion (2.69±0.65 vs. 3.69±1.50 mmol/mmol, P=0.001); however, there were no differences in age, aldosterone-to-renin ratio, BP, or RPF between the two groups. These findings indicate a continuum of renin-independent aldosteronism and mineralocorticoid receptor activity in normotension that ranges from subtle to overtly dysregulated and autonomous. Longitudinal studies are needed to determine whether this spectrum of autonomous aldosterone secretion contributes to hypertension and cardiovascular disease. PMID:28289182
Are Medications Involved in Vision and Intracranial Pressure Changes Seen in Spaceflight?
NASA Technical Reports Server (NTRS)
Faust, K. M.; Wotring, V. E.
2014-01-01
The Food and Drug Administration Adverse Event Reports (FDA AER) from 2009-2011 were used to create a database from millions of known and suspected medication-related adverse events among the general public. Vision changes, sometimes associated with intracranial pressure changes (VIIP), have been noted in some long duration crewmembers. Changes in vision and blood pressure (which can subsequently affect intracranial pressure) are fairly common side effects of medications. The purpose of this study was to explore the possibility of medication involvement in crew VIIP symptoms. Reports of suspected medication-related adverse events may be filed with the Food and Drug Administration (FDA) by medical professionals or consumers. Quarterly compilations of these reports are available for public download. Adverse Event Reporting System (AERS) reports from 1/1/2009 through 6/30/2012 were downloaded and compiled into a searchable database for this study. Reports involving individuals under the age of 18 and older than 65 were excluded from this analysis. Case reports involving chronic diseases such as cancer, diabetes, multiple sclerosis and other serious conditions were also excluded. A scan of the medical literature for medication-related VIIP-like adverse events was used to create a list of suspect medications. These medications, as well as certain medications used frequently by ISS crew, were used to query the database. Queries for use of suspected medications were run, and the nature of the symptoms reported in those cases were tabulated. Symptoms searched in the FDA AERS were chosen to include the typical symptoms noted in crewmembers with VIIP. Vision symptoms searched were: visual acuity reduced, visual impairment, and vitreous floaters. Pressure changes included: abnormal sensation in eye, intracranial pressure increased, intraocular pressure increased, optic neuritis, optic neuropathy, and papilloedema.
Limited demographic information is included with the FDA AERS; relevant data were also sorted by age and sex from each report. Results: Steroid-containing oral contraceptives had the highest number of reports associated with vision (n=166) and pressure symptoms (n=54). Corticosteroid-containing medications were also high; prednisone, for example, had 137 reports of vision issues and 79 of pressure issues. Pain relievers were also a medication class with vision and pressure-related adverse events reported. Common over-the-counter medications such as acetaminophen, aspirin and ibuprofen each had multiple reports for both vision and pressure symptoms. The antimicrobial medications ciprofloxacin and Diflucan were also associated with a number of vision and pressure-related AERS. Unexpectedly, pseudoephedrine and promethazine were mentioned in fewer than 20 reports each over the 3.5 years of data examined. The FDA AERS represents a wealth of data, but there are several limitations to its use. The data are entered by the public or medical professionals, but are not checked for accuracy or completeness and may even be entered multiple times. The causal relationship between a particular adverse event and a particular medication is not tested. The cases represent a broad spectrum of demographics, occupations, and health histories, and thus do not model the astronaut population well. There is no information on the frequency of use of a medication for comparison purposes; it is not possible to assign a rate for any particular adverse event. Nonetheless, there are compelling trends. Use of corticosteroid-containing medications, pain relievers (even over-the-counter), and oral contraceptives was associated with higher numbers of vision- or intracranial pressure-related adverse events. In general, there were more vision problems than pressure problems reported.
Certain medications that were once suspected of playing a role in the crew VIIP syndrome, namely pseudoephedrine and promethazine, were found to have extremely low numbers of VIIP-like AERS in the FDA data. However, crew use of corticosteroid-containing medications and pain relievers may warrant additional investigation.
2015-09-15
Full-Chain Benchmarking for Open Architecture Airborne ISR Systems: A Case Study for GMTI Radar Applications
Beebe, Matthias; Alexander, Matthew; …
…middleware implementations via a common object-oriented software hierarchy, with library-specific implementations of the five GMTI benchmark… …time performance, effective benchmarks are necessary to ensure that an ARP system can meet the mission constraints and performance requirements of…
SMART - Small Motor AerRospace Technology
NASA Astrophysics Data System (ADS)
Balucani, M.; Crescenzi, R.; Ferrari, A.; Guarrea, G.; Pontetti, G.; Orsini, F.; Quattrino, L.; Viola, F.
2004-11-01
This paper presents the "SMART" (Small Motor AerRospace Technology) propulsion system, consisting of a microthruster array realised with semiconductor technology on silicon wafers. The SMART system is obtained by gluing together three main modules: combustion chambers, igniters and nozzles. The module was then filled with propellant and closed by gluing a piece of silicon wafer to the back side of the combustion chambers. The complete assembled module, composed of 25 micro-thrusters with a 3 x 5 nozzle, is presented. The measurements showed a thrust of 129 mN and an impulse of 56.8 mNs, burning about 70 mg of propellant, for the micro-thruster with a nozzle, and a thrust of 21 mN and an impulse of 8.4 mNs for the micro-thruster without a nozzle.
Gallbladder Boundary Segmentation from Ultrasound Images Using Active Contour Model
NASA Astrophysics Data System (ADS)
Ciecholewski, Marcin
Extracting the shape of the gallbladder from an ultrasonography (US) image allows superfluous information that is immaterial to the diagnostic process to be eliminated. In this project an active contour model was used to extract the shape of the gallbladder, both for cases free of lesions and for those showing specific disease entities, namely lithiasis, polyps and changes in the shape of the organ, such as folds or turns of the gallbladder. The approximate shape of the gallbladder was found by applying the motion equation model. The tests conducted showed that for the 220 US images of the gallbladder, the area error rate (AER) amounted to 18.15%.
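The area error rate reported above compares the active-contour segmentation against a manually outlined ground truth. A minimal sketch under one common definition (over- plus under-segmented area relative to the manual area, as a percentage); the exact definition used in the paper may differ, and the masks below are toy data:

```python
def area_error_rate(segmented, manual):
    """Area error rate between a model-produced mask and a manual mask.

    Both arguments are sets of (row, col) pixel coordinates. The rate is
    the over-segmented plus under-segmented area, relative to the manual
    (ground-truth) area, expressed as a percentage. Illustrative only;
    definitions vary across papers.
    """
    over = len(segmented - manual)    # pixels marked but not in ground truth
    under = len(manual - segmented)   # ground-truth pixels that were missed
    return 100.0 * (over + under) / len(manual)

# Toy example masks (hypothetical): a 10x10 square vs. the same square
# shifted right by one column, so 10 pixels are extra and 10 are missed.
manual = {(r, c) for r in range(10) for c in range(10)}        # 100 px
segmented = {(r, c) for r in range(10) for c in range(1, 11)}  # shifted
aer = area_error_rate(segmented, manual)  # (10 + 10) / 100 -> 20.0
```

A perfect segmentation gives 0%; values grow without bound as the contour drifts, which is why AER is usually reported alongside visual inspection.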
Social Studies: Grades 4, 8, & 11. Content Specifications for Statewide Assessment by Standard.
ERIC Educational Resources Information Center
Missouri State Dept. of Elementary and Secondary Education, Jefferson City.
This state of Missouri guide to content specifications for social studies assessment is designed to give teachers direction for assessment at the benchmark levels of grades 4, 8, and 11 for each standard that is appropriate for a statewide assessment. The guide includes specifications of what students are expected to know at the benchmark levels…
ERIC Educational Resources Information Center
Canadian Health Libraries Association.
Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…
The NAS kernel benchmark program
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barton, J. T.
1985-01-01
A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.
Transaction Processing Performance Council (TPC): State of the Council 2010
NASA Astrophysics Data System (ADS)
Nambiar, Raghunath; Wakou, Nicholas; Carman, Forrest; Majdalany, Michael
The Transaction Processing Performance Council (TPC) is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable performance data to the industry. Established in August 1988, the TPC has been integral in shaping the landscape of modern transaction processing and database benchmarks over the past twenty-two years. This paper provides an overview of the TPC's existing benchmark standards and specifications, introduces two new TPC benchmarks under development, and examines the TPC's active involvement in the early creation of additional future benchmarks.
Donini, Lorenzo Maria
2015-01-01
In obese diabetic subjects, a correct lifestyle, including diet and physical activity, is part of a correct intervention protocol. Thus, the aim of this study was to evaluate the effects of an aerobic training intervention, based on heart rate at the aerobic gas exchange threshold (AerTge), on clinical and physiological parameters in obese elderly subjects with type 2 diabetes (OT2DM). Thirty OT2DM subjects were randomly assigned to an intervention (IG) or control group (CG). The IG performed supervised aerobic exercise training based on heart rate at AerTge, whereas the CG maintained their usual lifestyle. Anthropometric measures, blood analysis, peak oxygen consumption (V˙O2peak), metabolic equivalent (METpeak), work rate (WRpeak), and WRAerTge were assessed at baseline and after intervention. After training, patients enrolled in the IG had significantly higher (P < 0.001) V˙O2peak, METpeak, WRpeak, and WRAerTge and significantly lower (P < 0.005) weight, BMI, %FM, and waist circumference than before intervention. Both IG and CG subjects had lower glycated haemoglobin levels after the intervention period. No significant differences were found for any of the other parameters between pre- and post-training or between groups. Aerobic exercise prescription based upon heart rate at AerTge could be a valuable physical intervention tool to improve the fitness level and metabolic equilibrium in OT2DM patients. PMID:26089890
Large-eddy simulation of a turbulent flow over the DrivAer fastback vehicle model
NASA Astrophysics Data System (ADS)
Ruettgers, Mario; Park, Junshin; You, Donghyun
2017-11-01
In 2012 the Technical University of Munich (TUM) made realistic generic car models, called DrivAer, available to the public. These detailed models allow a precise calculation of the flow around a lifelike car, which in the past was limited to simplified geometries. In the present study, the turbulent flow around one of the models, the DrivAer Fastback model, is simulated using large-eddy simulation (LES). The goal of the study is to give a deeper physical understanding of highly turbulent regions around the car, such as at the side mirror or the rear end. For each region the contribution to the total drag is worked out. The results show that almost 35% of the drag is generated by the car wheels, whereas the side mirror contributes only 4% of the total drag. Detailed frequency analyses of velocity signals in each wake region were also conducted and revealed three dominant frequencies which correspond to the dominant frequency of the total drag. Furthermore, vortical structures are visualized and highly energetic points are identified. This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning) (No. 2014R1A2A1A11049599, No. 2015R1A2A1A15056086, No. 2016R1E1A2A01939553).
Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan
2013-01-01
In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike-processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuroinspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer that supplies the spikes to the robot (using PFM). All the layers perform their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). It also demonstrates the suitability of AER as a communication protocol between processing and actuation. PMID:24264330
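AER (Address-Event Representation) is only named in the abstract; as a rough illustration of the underlying idea (assumed toy model, not the RTC Lab interface), each spike is transmitted as the address of the unit that fired, with all units multiplexed onto one time-ordered event stream:

```python
from dataclasses import dataclass

@dataclass
class Event:
    address: int      # which neuron/pixel fired
    timestamp: float  # in real AER buses, timing is implicit in arrival time

def encode(spike_trains: dict) -> list:
    """Flatten per-neuron spike times into a single time-ordered stream of
    (address, time) events -- the essential operation of an AER link."""
    events = [Event(addr, t) for addr, times in spike_trains.items() for t in times]
    return sorted(events, key=lambda e: e.timestamp)

# Neuron 3 fires at t=10 and t=42; neuron 7 fires at t=15.
stream = encode({3: [10.0, 42.0], 7: [15.0]})
print([(e.address, e.timestamp) for e in stream])  # -> [(3, 10.0), (7, 15.0), (3, 42.0)]
```

Because only addresses of active units travel on the bus, bandwidth scales with spike activity rather than with the number of channels, which is why AER suits sparse sensor outputs like those of a silicon retina.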
Campbell, Asharie J.; Watts, Kylie J.; Johnson, Mark S.; Taylor, Barry L.
2010-01-01
The Aer receptor monitors internal energy (redox) levels in Escherichia coli with an FAD-containing PAS domain. Here, we randomly mutagenized the region encoding residues 14 to 119 of the PAS domain and found 72 aerotaxis-defective mutants, 24 of which were gain-of-function, signal-on mutants. The mutations were mapped onto an Aer homology model based on the structure of the PAS-FAD domain in NifL from Azotobacter vinelandii. Signal-on lesions clustered in the FAD binding pocket, the β-scaffolding and in the N-cap loop. We suggest that the signal-on lesions mimic the "signal-on" state of the PAS domain, and therefore may be markers for the signal-in and signal-out regions of this domain. We propose that the reduction of FAD rearranges the FAD binding pocket in a way that repositions the β-scaffolding and the N-cap loop. The resulting conformational changes are likely to be conveyed directly to the HAMP domain, and on to the kinase control module. In support of this hypothesis, we demonstrated disulfide bond formation between cysteines substituted at residues N98C or I114C in the PAS β-scaffold and residue Q248C in the HAMP AS-2 helix. PMID:20545849
NASA Astrophysics Data System (ADS)
Hummel, John R.; Bergenthal, Jeff J.; Seng, William F.; Moulton, Joseph R., Jr.; Prager, S. D.
2004-08-01
The Joint Synthetic Battlespace for the Air Force (JSB-AF) is being developed to provide realistic representations of friendly and threat capabilities and the natural environmental conditions to support a variety of Department of Defense missions including training, mission rehearsal, decision support, acquisition, deployment, employment, operations, and the development of Courses of Action. This paper addresses three critical JSB issues associated with providing environmental representations to Modeling and Simulation (M&S) applications. First, how should the requirements for environmental functionality in a JSB-AF application be collected, analyzed, and used to define an Authoritative Environmental Representation (AER)? Second, how can JSB-AF AERs be generated? Third, once an AER has been generated, how should it be "served up" to the JSB-AF components? Our analyses of these issues will be presented from a general M&S perspective, with examples given from a JSB-AF centered view. In the context of this effort, the term "representations" is meant to incorporate both basic environmental "data" (e.g., temperature, pressure, slope, elevation, etc.) and "effects", properties that can be derived from these data using physics-based models or empirical relationships from the fundamental data (e.g., extinction coefficients, radiance, soil moisture strength, etc.). We present a state-of-the-art review of the existing processes and technologies that address these questions.
Tao, Ye; Zhang, Yuan Ming
2012-05-01
Leaf hair points (LHPs) are important morphological structures in many desiccation-tolerant mosses, but study of their functions has been limited. A desert moss, Syntrichia caninervis, was chosen for examination of the ecological effects of LHPs on water retention and dew formation at individual and population (patch) levels. Although LHPs were only 4.77% of shoot weight, they were able to increase absolute water content (AWC) by 24.87%. The AWC of samples with LHPs was always greater than for those without LHPs during dehydration. The accumulative evaporation ratio (AER) showed an opposite trend. AWC, evaporation ratio and AER of shoots with LHPs took 20 min longer to reach a completely dehydrated state than shoots without LHPs. At the population level, dew formation on moss crusts with LHPs was faster than on crusts without LHPs, and the former had higher daily and total dew amounts. LHPs were able to improve dew amounts on crusts by 10.26%. Following three simulated rainfall events (1, 3 and 6 mm), AERs from crusts with LHPs were always lower than from crusts without LHPs. LHPs can therefore significantly delay and reduce evaporation. We confirm that LHPs are important desiccation-tolerant features of S. caninervis at both individual and population levels. LHPs greatly aid moss crusts in adapting to arid conditions.
MARC calculations for the second WIPP structural benchmark problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.
1981-05-01
This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.
Ferrari, D; Lichtler, A C; Pan, Z Z; Dealy, C N; Upholt, W B; Kosher, R A
1998-05-01
During early stages of chick limb development, the homeobox-containing gene Msx-2 is expressed in the mesoderm at the anterior margin of the limb bud and in a discrete group of mesodermal cells at the midproximal posterior margin. These domains of Msx-2 expression roughly demarcate the anterior and posterior boundaries of the progress zone, the highly proliferating posterior mesodermal cells underneath the apical ectodermal ridge (AER) that give rise to the skeletal elements of the limb and associated structures. Later in development as the AER loses its activity, Msx-2 expression expands into the distal mesoderm and subsequently into the interdigital mesenchyme which demarcates the developing digits. The domains of Msx-2 expression exhibit considerably less proliferation than the cells of the progress zone and also encompass several regions of programmed cell death including the anterior and posterior necrotic zones and interdigital mesenchyme. We have thus suggested that Msx-2 may be in a regulatory network that delimits the progress zone by suppressing the morphogenesis of the regions of the limb mesoderm in which it is highly expressed. In the present study we show that ectopic expression of Msx-2 via a retroviral expression vector in the posterior mesoderm of the progress zone from the time of initial formation of the limb bud severely impairs limb morphogenesis. Msx-2-infected limbs are typically very narrow along the anteroposterior axis, are occasionally truncated, and exhibit alterations in the pattern of formation of skeletal elements, indicating that as a consequence of ectopic Msx-2 expression the morphogenesis of large portions of the posterior mesoderm has been suppressed. We further show that Msx-2 impairs limb morphogenesis by reducing cell proliferation and promoting apoptosis in the regions of the posterior mesoderm in which it is ectopically expressed. 
The domains of ectopic Msx-2 expression in the posterior mesoderm also exhibit ectopic expression of BMP-4, a secreted signaling molecule that is coexpressed with Msx-2 during normal limb development in the anterior limb mesoderm, the posterior necrotic zone, and interdigital mesenchyme. This indicates that Msx-2 regulates BMP-4 expression and that the suppressive effects of Msx-2 on limb morphogenesis might be mediated in part by BMP-4. These studies indicate that during normal limb development Msx-2 is a key component of a regulatory network that delimits the boundaries of the progress zone by suppressing the morphogenesis of the regions of the limb mesoderm in which it is highly expressed, thus restricting the outgrowth and formation of skeletal elements and associated structures to the progress zone. We also report that rather large numbers of apoptotic cells as well as proliferating cells are present throughout the AER during all stages of normal limb development we have examined, indicating that many of the cells of the AER are continuously undergoing programmed cell death at the same time that new AER cells are being generated by cell proliferation. Thus, a balance between cell proliferation and programmed cell death may play a very important role in maintaining the activity of the AER. Copyright 1998 Academic Press.
A Comparative Study of Simulated and Measured Gear-Flap Flow Interaction
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Mineck, Raymond E.; Yao, Chungsheng; Jenkins, Luther N.; Fares, Ehab
2015-01-01
The ability of two CFD solvers to accurately characterize the transient, complex, interacting flowfield associated with a realistic gear-flap configuration is assessed via comparison of simulated flow with experimental measurements. The simulated results, obtained with NASA's FUN3D and Exa's PowerFLOW® for a high-fidelity, 18% scale semi-span model of a Gulfstream aircraft in landing configuration (39 deg flap deflection, main landing gear on and off) are compared to two-dimensional and stereo particle image velocimetry measurements taken within the gear-flap flow interaction region during wind tunnel tests of the model. As part of the benchmarking process, direct comparisons of the mean and fluctuating velocity fields are presented in the form of planar contour plots and extracted line profiles at measurement planes in various orientations stationed in the main gear wake. The measurement planes in the vicinity of the flap side edge and downstream of the flap trailing edge are used to highlight the effects of gear presence on tip vortex development and the ability of the computational tools to accurately capture such effects. The present study indicates that both computed datasets contain enough detail to construct a relatively accurate depiction of gear-flap flow interaction. Such a finding increases confidence in using the simulated volumetric flow solutions to examine the behavior of pertinent aerodynamic mechanisms within the gear-flap interaction zone.
Fung, Chunkit; Fossa, Sophie D.; Milano, Michael T.; Sahasrabudhe, Deepak M.; Peterson, Derick R.; Travis, Lois B.
2015-01-01
Purpose Increased risks of incident cardiovascular disease (CVD) in patients with testicular cancer (TC) given chemotherapy in European studies were largely restricted to long-term survivors and included patients from the 1960s. Few population-based investigations have quantified CVD mortality during, shortly after, and for two decades after TC diagnosis in the era of cisplatin-based chemotherapy. Patients and Methods Standardized mortality ratios (SMRs) for CVD and absolute excess risks (AERs; number of excess deaths per 10,000 person-years) were calculated for 15,006 patients with testicular nonseminoma reported to the population-based Surveillance, Epidemiology, and End Results program (1980 to 2010) who initially received chemotherapy (n = 6,909) or surgery (n = 8,097) without radiotherapy and accrued 60,065 and 81,227 person-years of follow-up, respectively. Multivariable modeling evaluated effects of age, treatment, extent of disease, and other factors on CVD mortality. Results Significantly increased CVD mortality occurred after chemotherapy (SMR, 1.36; 95% CI, 1.03 to 1.78; n = 54) but not surgery (SMR, 0.81; 95% CI, 0.60 to 1.07; n = 50). Significant excess deaths after chemotherapy were restricted to the first year after TC diagnosis (SMR, 5.31; AER, 13.90; n = 11) and included cerebrovascular disease (SMR, 21.72; AER, 7.43; n = 5) and heart disease (SMR, 3.45; AER, 6.64; n = 6). In multivariable analyses, increased CVD mortality after chemotherapy was confined to the first year after TC diagnosis (hazard ratio, 4.86; 95% CI, 1.25 to 32.08); distant disease (P < .05) and older age at diagnosis (P < .01) were independent risk factors. Conclusion This is the first population-based study, to our knowledge, to quantify short- and long-term CVD mortality after TC diagnosis. 
The increased short-term risk of CVD deaths should be further explored in analytic studies that enumerate incident events and can serve to develop comprehensive evidence-based approaches for risk stratification and application of preventive and interventional efforts. PMID:26240226
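The two risk measures reported above have standard epidemiological definitions: the SMR is the ratio of observed to expected deaths, and the AER, as the abstract states, is the number of excess deaths per 10,000 person-years. A minimal sketch with illustrative numbers (not the study's actual expected-death counts, which the abstract does not give):

```python
def smr(observed: int, expected: float) -> float:
    """Standardized mortality ratio: observed deaths / expected deaths."""
    return observed / expected

def aer(observed: int, expected: float, person_years: float) -> float:
    """Absolute excess risk: excess deaths per 10,000 person-years,
    the convention stated in the abstract."""
    return (observed - expected) / person_years * 10_000

# Hypothetical cohort: 54 observed vs 40 expected deaths over 60,000 PY.
print(smr(54, 40.0))                       # -> 1.35
print(round(aer(54, 40.0, 60_000.0), 2))   # -> 2.33
```

An SMR above 1 with its 95% CI excluding 1 (as for the chemotherapy group, SMR 1.36; 95% CI, 1.03 to 1.78) indicates mortality in excess of the reference population.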
McLawhorn, Melinda W; Goulding, Margie R; Gill, Rajdeep K; Michele, Theresa M
2013-01-01
To augment the December 2010 United States Food and Drug Administration (FDA) Drug Safety Communication on accidental ingestion of benzonatate in children less than 10 years old by summarizing data on emergency department visits, benzonatate exposure, and reports of benzonatate overdoses from several data sources. Retrospective review of adverse-event reports and drug utilization data of benzonatate. The FDA Adverse Event Reporting System (AERS) database (1969-2010), the National Electronic Injury Surveillance System-Cooperative Adverse Drug Event Surveillance Project (NEISS-CADES, 2004-2009), and the IMS commercial data vendor (2004-2009). Any patient who reported an adverse event with benzonatate captured in the AERS or NEISS-CADES database or received a prescription for benzonatate according to the IMS commercial data vendor. Postmarketing adverse events with benzonatate were collected from the AERS database, emergency department visits due to adverse events with benzonatate were collected from the NEISS-CADES database, and outpatient drug utilization data were collected from the IMS commercial data vendor. Of 31 overdose cases involving benzonatate reported in the AERS database, 20 had a fatal outcome, and five of these fatalities occurred from accidental ingestions in children 2 years of age and younger. The NEISS-CADES database captured emergency department visits involving 12 cases of overdose from accidental benzonatate ingestions in children aged 1-3 years. Signs and symptoms of overdose included seizures, cardiac arrest, coma, brain edema or anoxic encephalopathy, apnea, tachycardia, and respiratory arrest and occurred in some patients within 15 minutes of ingestion. Dispensed benzonatate prescriptions increased by approximately 52% from 2004 to 2009. 
Although benzonatate has a long history of safe use, accumulating cases of fatal overdose, especially in children, prompted the FDA to notify health care professionals about the risks of benzonatate overdose. Pharmacists may have a role in preventing benzonatate overdoses by counseling patients on signs and symptoms of benzonatate overdose, the need for immediate medical care, and safe storage and disposal of benzonatate. © 2013 Pharmacotherapy Publications, Inc.
Evaluation of Fatigue Crack Growth and Fracture Properties of Cryogenic Model Materials
NASA Technical Reports Server (NTRS)
Newman, John A.; Forth, Scott C.; Everett, Richard A., Jr.; Newman, James C., Jr.; Kimmel, William M.
2002-01-01
The criteria used to prevent failure of wind-tunnel models and support hardware were revised as part of a project to enhance the capabilities of cryogenic wind tunnel testing at NASA Langley Research Center. Specifically, damage-tolerance fatigue life prediction methods are now required for critical components, and material selection criteria are more general and based on laboratory test data. The suitability of two candidate model alloys (AerMet 100 and C-250 steel) was investigated by obtaining the fatigue crack growth and fracture data required for a damage-tolerance fatigue life analysis. Finally, an example is presented to illustrate the newly implemented damage tolerance analyses required of wind-tunnel model system components.
Algorithm and Architecture Independent Benchmarking with SEAK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.
2016-05-23
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
A Field-Based Aquatic Life Benchmark for Conductivity in ...
EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
ERIC Educational Resources Information Center
Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.
2014-01-01
Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/ Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…
ICSBEP Benchmarks For Nuclear Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briggs, J. Blair
2005-05-24
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) -- Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled ''International Handbook of Evaluated Criticality Safety Benchmark Experiments.'' The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.
2013-01-01
Background Vitamin D receptor activators reduce albuminuria, and may improve survival in chronic kidney disease (CKD). Animal studies suggest that these pleiotropic effects of vitamin D may be mediated by suppression of renin. However, randomized trials in humans have yet to establish this relationship. Methods In a randomized, placebo-controlled, double-blinded crossover study, the effect of oral paricalcitol (2 μg/day) was investigated in 26 patients with non-diabetic, albuminuric stage III-IV CKD. After treatment, plasma concentrations of renin (PRC), angiotensin II (AngII) and aldosterone (Aldo) were measured. GFR was determined by 51Cr-EDTA clearance. Assessment of renal NO dependency was performed by infusion of NG-monomethyl-L-arginine (L-NMMA). Albumin excretion rate (AER) was analyzed in 24-h urine and during 51Cr-EDTA clearance. Results Paricalcitol did not alter plasma levels of renin, AngII, Aldo, or urinary excretion of sodium and potassium. A modest reduction of borderline significance was observed in AER, and paricalcitol abrogated the albuminuric response to L-NMMA. Conclusions In this randomized, placebo-controlled trial paricalcitol only marginally decreased AER and did not alter circulating levels of renin, AngII or Aldo. The abrogation of the rise in albumin excretion by paricalcitol during NOS blockade may indicate that favourable modulation of renal NO dependency could be involved in mediating reno-protection and survival benefits in CKD. Trial registration ClinicalTrials.gov identifier: NCT01136564 PMID:23889806
Rahman, Sajjad; Salameh, Khalil; Al-Rifai, Hilal; Masoud, Ahmed; Lutfi, Samawal; Salama, Husam; Abdoh, Ghassan; Omar, Fahmi; Bener, Abdulbari
2011-09-01
To analyze and compare the current gestational age specific neonatal survival rates between Qatar and international benchmarks. An analytical comparative study. Women's Hospital, Hamad Medical Corporation, Doha, Qatar, from 2003-2008. Six years' (2003-2008) gestational age specific neonatal mortality data were stratified for each completed week of gestation at birth from 24 weeks till term. The data from World Health Statistics by WHO (2010), Vermont Oxford Network (VON, 2007) and National Statistics United Kingdom (2006) were used as international benchmarks for comparative analysis. A total of 82,002 babies were born during the study period. Qatar's neonatal mortality rate (NMR) dropped from 6/1000 in 2003 to 4.3/1000 in 2008 (p < 0.05). The overall and gestational age specific neonatal mortality rates of Qatar were comparable with international benchmarks. The survival of < 27 weeks and term babies was better in Qatar (p=0.01 and p < 0.001 respectively) as compared to VON. The survival of > 32 weeks babies was better in the UK (p=0.01) as compared to Qatar. The relative risk (RR) of death decreased with increasing gestational age (p < 0.0001). Preterm birth (45%) followed by lethal chromosomal and congenital anomalies (26.5%) were the two leading causes of neonatal death in Qatar. The current total and gestational age specific neonatal survival rates in the State of Qatar are comparable with international benchmarks. In Qatar, persistently high rates of low birth weight and lethal chromosomal and congenital anomalies significantly contribute towards neonatal mortality.
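The rates compared above follow the standard definition of the neonatal mortality rate: deaths in the neonatal period per 1,000 live births. A minimal sketch with illustrative figures (the abstract does not give the raw death counts):

```python
def neonatal_mortality_rate(neonatal_deaths: int, live_births: int) -> float:
    """NMR: neonatal deaths per 1,000 live births."""
    return 1000.0 * neonatal_deaths / live_births

# Illustrative only: 43 deaths among 10,000 live births -> 4.3 per 1,000,
# the same magnitude reported for Qatar in 2008.
print(neonatal_mortality_rate(43, 10_000))  # -> 4.3
```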
Benchmarking specialty hospitals, a scoping review on theory and practice.
Wind, A; van Harten, W H
2017-04-04
Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.
Benchmarking Using Basic DBMS Operations
NASA Astrophysics Data System (ADS)
Crolotte, Alain; Ghazal, Ahmad
The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
Xia, Yuan; Deshpande, Sameer; Bonates, Tiberius
2016-11-01
Social marketing managers promote desired behaviors to an audience by making them tangible in the form of environmental opportunities to enhance benefits and reduce barriers. This study proposed "benchmarks," modified from those found in the past literature, that would match important concepts of the social marketing framework and the inclusion of which would ensure behavior change effectiveness. In addition, we analyzed behavior change interventions on a "social marketing continuum" to assess whether the number of benchmarks and the role of specific benchmarks influence the effectiveness of physical activity promotion efforts. A systematic review of social marketing interventions available in academic studies published between 1997 and 2013 revealed 173 conditions in 92 interventions. Findings based on χ², Mallows' Cp, and Logical Analysis of Data tests revealed that the presence of more benchmarks in interventions increased the likelihood of success in promoting physical activity. The presence of more than 3 benchmarks improved the success of the interventions; specifically, all interventions were successful when more than 7.5 benchmarks were present. Further, primary formative research, core product, actual product, augmented product, promotion, and behavioral competition all had a significant influence on the effectiveness of interventions. Social marketing is an effective approach in promoting physical activity among adults when a substantial number of benchmarks are used and when managers understand the audience, make the desired behavior tangible, and promote the desired behavior persuasively.
Length of stay benchmarks for inpatient rehabilitation after stroke.
Meyer, Matthew; Britt, Eileen; McHale, Heather A; Teasell, Robert
2012-01-01
In Canada, no standardized benchmarks for length of stay (LOS) have been established for post-stroke inpatient rehabilitation. This paper describes the development of a severity-specific median length of stay benchmarking strategy, assessment of its impact after one year of implementation in a Canadian rehabilitation hospital, and establishment of updated benchmarks that may be useful for comparison with other facilities across Canada. Patient data were retrospectively assessed for all patients admitted to a single post-acute stroke rehabilitation unit in Ontario, Canada between April 2005 and March 2008. Rehabilitation Patient Groups (RPGs) were used to establish stratified median length of stay benchmarks for each group that were incorporated into team rounds beginning in October 2009. Benchmark impact was assessed using mean LOS, FIM® gain, and discharge destination for each RPG group, collected prospectively for one year, compared against similar information from the previous calendar year. Benchmarks were then adjusted accordingly for future use. Between October 2009 and September 2010, a significant reduction in average LOS was noted compared to the previous year (35.3 vs. 41.2 days; p < 0.05). Reductions in LOS were noted in each RPG group, including statistically significant reductions in 4 of the 7 groups. As intended, reductions in LOS were achieved with no significant reduction in mean FIM® gain or proportion of patients discharged home compared to the previous year. Adjusted benchmarks for LOS ranged from 13 to 48 days depending on the RPG group. After a single year of implementation, severity-specific benchmarks helped the rehabilitation team reduce LOS while maintaining the same levels of functional gain and achieving the same rate of discharge to the community. © 2012 Informa UK, Ltd.
Börnhorst, C; Bel-Serrat, S; Pigeot, I; Huybrechts, I; Ottavaere, C; Sioen, I; De Henauw, S; Mouratidou, T; Mesana, M I; Westerterp, K; Bammann, K; Lissner, L; Eiben, G; Pala, V; Rayson, M; Krogh, V; Moreno, L A
2014-02-01
Little is known about the validity of repeated 24-h dietary recalls (24-HDR) as a measure of total energy intake (EI) in young children. This study aimed to evaluate the validity of proxy-reported EI by comparison with total energy expenditure (TEE) measured by the doubly labeled water (DLW) technique. The agreement between EI and TEE was investigated in 36 (47.2% boys) children aged 4-10 years from Belgium and Spain using subgroup analyses and Bland-Altman plots. Low-energy reporters (LER), adequate-energy reporters (AER) and high-energy reporters (HER) were defined from the ratio of EI over TEE by application of age- and sex-specific cut-off values. There was good agreement between means of EI (1500 kcal/day) and TEE (1523 kcal/day) at the group level, though large differences were observed in individual children. Almost perfect agreement between EI and TEE was observed in thin/normal weight children (EI: 1511 kcal/day; TEE: 1513 kcal/day). Even in overweight/obese children the mean difference between EI and TEE was only -86 kcal/day. Among the participants, 28 (78%) were classified as AER, five (14%) as HER and three (8%) as LER. Two proxy-reported 24-HDRs were found to be a valid instrument to assess EI at the group level but not at the individual level. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
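The LER/AER/HER classification above reduces to a cut on the EI/TEE ratio. A minimal sketch follows; the cut-off values are placeholders, since the study's age- and sex-specific cut-offs are not given in the abstract.

```python
# Hypothetical bounds on the EI/TEE ratio; the study used age- and
# sex-specific cut-offs not reported in the abstract.
LOW_CUT, HIGH_CUT = 0.76, 1.24

def classify_reporter(ei_kcal, tee_kcal, low=LOW_CUT, high=HIGH_CUT):
    """Return 'LER', 'AER' or 'HER' from reported intake vs. expenditure."""
    ratio = ei_kcal / tee_kcal
    if ratio < low:
        return "LER"   # low-energy reporter (under-reporting)
    if ratio > high:
        return "HER"   # high-energy reporter (over-reporting)
    return "AER"       # adequate-energy reporter

# The group means from the study fall comfortably in the adequate band.
print(classify_reporter(1500, 1523))
```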
Neocolonialism and Health Care Access among Marshall Islanders in the United States.
Duke, Michael R
2017-09-01
In the Marshall Islands, a history of extensive nuclear weapons testing and covert biomedical research, coupled with the U.S.'s ongoing military presence in the country, has severely compromised the health of the local population. Despite the U.S.'s culpability in producing ill health along with high rates of emigration from the islands to the mainland United States, the large portion of Marshallese who reside in the United States face substantial barriers to accessing health care. Drawing from ongoing field research with a Marshallese community in Arkansas, this article explores the multifaceted impediments that U.S.-based Marshall Islanders face in receiving medical treatment. Calling on an expansive and inclusive notion of neocolonialism, I argue that Marshallese structural vulnerability with regard to health and health care treatment derives from their status as neocolonial subjects and from their limited claims to health-related deservingness associated with this status. [Marshall Islanders, health care access, neocolonialism, radiation exposure, immigrant health] L̗ōmn̗ak ko rōttin̗o: Ilo M̗ajel̗, juon bwebwenato kōn kōmmālmel im nuclear baam̗ ko im ekkatak ko rōttin̗o̗ kōn wāwein an baijin ko jelōt armej, barāinwōt an to an ri tarinae ro an Amedka pād ilo aelōn̄ kein, em̗ōj an jelōt ājmour an armej ro ilo aelōn̄ kein. Men̄e alikkar bwe Amedka in ear jino nan̄inmej kein im ej un eo armej rein rej em̗m̗akūt jān āne kein āne er n̄an ioon Amedka, elōn̄ iaan ri M̗ajel̗ rein rej jelm̗ae elōn̄ apan̄ ko n̄an aer del̗o̗n̄e jikin ājmour ko. Jān ekkatak eo ej bōk jikin kiō, jerbal in ej etali kabōjrak rak kein rōlōn̄ im armej in M̗ajel̗ ro ioon Amedka in rej jelm̗ae ilo aer jibadōk lo̗k jikin taktō. 
Ilo an kar Amedka jibadōk juon jea eo eutiej imejān lal̗ in, ij kwal̗ok juon aō akweelel bwe apan̄ ko an armej in M̗ajel̗ ikijjeen ājmour im jikin taktō ej itok jān aer kar ri kōm̗akoko ilo an kar Amedka lelōn̄ l̗o̗k etan ilo mejān lal̗ im jān aer jab pukot jipan kein ej aer bwe kōn jōkjōk in. © 2017 by the American Anthropological Association.
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
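The meta-feature comparison PMLB performs across datasets can be sketched in a few lines. The datasets below are synthetic stand-ins; the real suite is fetched through the pmlb Python package, which is not used here so the example stays self-contained and offline.

```python
from collections import Counter

def meta_features(X, y):
    """Simple dataset meta-features: size, dimensionality, class imbalance."""
    counts = Counter(y)
    majority = max(counts.values()) / len(y)
    return {
        "n_samples": len(X),
        "n_features": len(X[0]),
        "n_classes": len(counts),
        "majority_class_fraction": round(majority, 3),
    }

# Two toy stand-ins for benchmark datasets: one balanced, one skewed.
datasets = {
    "toy_balanced": ([[0.1 * i, i % 7] for i in range(100)],
                     [i % 2 for i in range(100)]),
    "toy_skewed":   ([[float(i)] * 3 for i in range(60)],
                     [0] * 50 + [1] * 10),
}
for name, (X, y) in datasets.items():
    print(name, meta_features(X, y))
```

Profiling a suite this way is what reveals the diversity gaps the abstract describes, e.g. a shortage of strongly imbalanced or high-dimensional problems.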
Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems.
Park, Jongkil; Yu, Theodore; Joshi, Siddharth; Maier, Christoph; Cauwenberghs, Gert
2017-10-01
We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself toward biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with number of routing nodes in the network, at 3.6×10⁷ synaptic events per second per 16k-neuron node in the hierarchy.
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra K.; Sze, Nein Dak; Rodriguez, Jose M.; Heisey, Curtis
1991-01-01
The AER two-dimensional chemistry-transport model is used to study the effect on stratospheric ozone (O3) from operations of supersonic and subsonic aircraft. The study is based on six emission scenarios provided to AER. The study showed that: (1) the O3 response is dominated by the portion of the emitted nitrogen compounds that is entrained in the stratosphere; (2) the entrainment is a sensitive function of the altitude at which the material is injected; (3) the O3 removal efficiency of the emitted material depends on the concentrations of trace gases in the background atmosphere; and (4) evaluation of the impact of fleet operations in the future atmosphere must take into account the expected changes in trace gas concentrations from other activities. Areas for model improvements in future studies are also discussed.
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra; Rodriguez, Jose; Danilin, Michael; Scott, Courtney; Shia, Run-Lie; Eluszkiewicz, Junusz; Sze, Nien-Dak
1999-01-01
This is the final report. The overall objective of this project is to improve the understanding of coupling processes among atmospheric chemistry, aerosol and climate, all important for quantitative assessments of global change. Among our priority are changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The work emphasizes two important aspects: (1) AER's continued participation in preparation of, and providing scientific input for, various scientific reports connected with assessment of stratospheric ozone and climate. These include participation in various model intercomparison exercises as well as preparation of national and international reports. and (2) Continued development of the AER three-wave interactive model to address how the transport circulation will change as ozone and the thermal properties of the atmosphere change, and assess how these new findings will affect our confidence in the ozone assessment results.
An experimental investigation of masking in the US FDA adverse event reporting system database.
Wang, Hsin-wei; Hochberg, Alan M; Pearson, Ronald K; Hauben, Manfred
2010-12-01
A phenomenon of 'masking' or 'cloaking' in pharmacovigilance data mining has been described, which can potentially cause signals of disproportionate reporting (SDRs) to be missed, particularly in pharmaceutical company databases. Masking has been predicted theoretically, observed anecdotally or studied to a limited extent in both pharmaceutical company and health authority databases, but no previous publication systematically assesses its occurrence in a large health authority database. To explore the nature, extent and possible consequences of masking in the US FDA Adverse Event Reporting System (AERS) database by applying various experimental unmasking protocols to a set of drugs and events representing realistic pharmacovigilance analysis conditions. This study employed AERS data from 2001 through 2005. For a set of 63 Medical Dictionary for Regulatory Activities (MedDRA®) Preferred Terms (PTs), disproportionality analysis was carried out with respect to all drugs included in the AERS database, using a previously described urn-model-based algorithm. We specifically sought masking in which drug removal induced an increase in the statistical representation of a drug-event combination (DEC) that resulted in the emergence of a new SDR. We performed a series of unmasking experiments selecting drugs for removal using rational statistical decision rules based on the requirement of a reporting ratio (RR) >1, top-ranked statistical unexpectedness (SU) and relatedness as reflected in the WHO Anatomical Therapeutic Chemical level 4 (ATC4) grouping. In order to assess the possible extent of residual masking we performed two supplemental purely empirical analyses on a limited subset of data. This entailed testing every drug and drug group to determine which was most influential in uncovering masked SDRs. 
We assessed the strength of external evidence for a causal association for a small number of masked SDRs involving a subset of 29 drugs for which level of evidence adjudication was available from a previous study. The original disproportionality analysis identified 8719 SDRs for the 63 PTs. The SU-based unmasking protocols generated variable numbers of masked SDRs ranging from 38 to 156, representing a 0.43-1.8% increase over the number of baseline SDRs. A significant number of baseline SDRs were also lost in the course of our experiments. The trend in the number of gained SDRs per report removed was inversely related to the number of lost SDRs per protocol. Both the number and nature of the reports removed influenced the number of gained SDRs observed. The purely empirical protocols unmasked up to ten times as many SDRs. None of the masked SDRs had strong external evidence supporting a causal association. Most involved associations for which there was no external supporting evidence or were in the original product label. For two masked SDRs, there was external evidence of a possible causal association. We documented masking in the FDA AERS database. Attempts at unmasking SDRs using practically implementable protocols produced only small changes in the output of SDRs in our analysis. This is undoubtedly related to the large size and diversity of the database, but the complex interdependencies between drugs and events in authentic spontaneous reporting system (SRS) databases, and the impact of measures of statistical variability that are typically used in real-world disproportionality analysis, may be additional factors that constrain the discovery of masked SDRs and which may also operate in pharmaceutical company databases. Empirical determination of the most influential drugs may uncover significantly more SDRs than protocols based on predetermined statistical selection rules but are impractical except possibly for evaluating specific events. 
Routine global exercises to elicit masking, especially in large health authority databases, are not justified by the results available to date. Exercises to elicit unmasking should be driven by prior knowledge or obvious data imbalances.
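The masking mechanism studied above can be illustrated numerically: a drug-event combination's reporting ratio (RR = observed / expected count under independence) can sit below 1 because one heavily reported drug inflates the event's database-wide frequency, and removing that drug "unmasks" the signal. All counts below are invented for illustration; this is not the paper's urn-model algorithm.

```python
def reporting_ratio(n_de, n_drug, n_event, n_total):
    """RR = observed drug-event count / count expected under independence."""
    expected = n_drug * n_event / n_total
    return n_de / expected

# Invented database: drug A dominates reports of event E.
N, E_total = 100_000, 6_000          # all reports / reports mentioning E
B_total, B_with_E = 500, 10          # drug B reports / drug B reports with E
A_total, A_with_E = 6_000, 5_000     # drug A reports / drug A reports with E

rr_masked = reporting_ratio(B_with_E, B_total, E_total, N)

# Remove drug A's reports and recompute the RR for drug B.
rr_unmasked = reporting_ratio(B_with_E, B_total,
                              E_total - A_with_E, N - A_total)

print(f"RR for drug B with drug A present: {rr_masked:.2f}")  # below 1
print(f"RR for drug B with drug A removed: {rr_unmasked:.2f}")  # above 1
```

With drug A present the RR is about 0.33 (no signal of disproportionate reporting); with drug A's reports removed it rises above 1, the emergence criterion the study looked for.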
SP2Bench: A SPARQL Performance Benchmark
NASA Astrophysics Data System (ADS)
Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg
A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.
Benchmarking Discount Rate in Natural Resource Damage Assessment with Risk Aversion.
Wu, Desheng; Chen, Shuzhen
2017-08-01
Benchmarking a credible discount rate is of crucial importance in natural resource damage assessment (NRDA) and restoration evaluation. This article integrates a holistic framework of NRDA with prevailing low discount rate theory, and proposes a discount rate benchmarking decision support system based on service-specific risk aversion. The proposed approach has the flexibility of choosing appropriate discount rates for gauging long-term services, as opposed to decisions based simply on duration. It improves injury identification in NRDA since potential damages and side-effects to ecosystem services are revealed within the service-specific framework. A real embankment case study demonstrates valid implementation of the method. © 2017 Society for Risk Analysis.
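The sensitivity that makes the benchmark rate choice so consequential for long-term services is easy to demonstrate. The sketch below uses invented service values and rates, not figures from the embankment case study.

```python
def present_value(annual_value, rate, years):
    """Discounted value of a constant annual service flow over `years`."""
    return sum(annual_value / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical annual ecosystem-service value ($) over a century,
# discounted at three candidate benchmark rates.
service_value = 1_000_000
for rate in (0.01, 0.03, 0.07):
    pv = present_value(service_value, rate, years=100)
    print(f"rate {rate:.0%}: PV = ${pv:,.0f}")
```

The spread across rates (roughly a factor of four between 1% and 7% here) is why the article argues for service-specific, risk-aversion-informed rates rather than a single duration-based choice.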
Monitoring of endoscope reprocessing with an adenosine triphosphate (ATP) bioluminescence method.
Parohl, Nina; Stiefenhöfer, Doris; Heiligtag, Sabine; Reuter, Henning; Dopadlik, Dana; Mosel, Frank; Gerken, Guido; Dechêne, Alexander; Heintschel von Heinegg, Evelyn; Jochum, Christoph; Buer, Jan; Popp, Walter
2017-01-01
Background: Growing concerns over endoscope reprocessing quality motivate a search for ways to measure and control the reprocessing process. Aim: The goal of this study was to evaluate the feasibility of monitoring endoscope reprocessing with an adenosine triphosphate (ATP)-based bioluminescence system. Methods: 60 samples from eight gastroscopes in routine clinical use at a major university hospital in Germany were assessed with an ATP system and microbial cultures at different time points during reprocessing. Findings: After the bedside flush the mean ATP level was 19,437 relative light units (RLU), after manual cleaning 667 RLU, and after the automated endoscope reprocessor (AER) 227 RLU. After manual cleaning the mean total viable count (TVC) per endoscope was 15.3 CFU/10 ml, and after the AER 5.7 CFU/10 ml. Our results show that some reprocessing cycles fail to clean a patient-used endoscope. Conclusion: Our data suggest that ATP monitoring of flexible endoscopes can identify a number of influence factors, such as the endoscope's condition and the endoscopic procedure, and especially the quality of the bedside flush and manual cleaning before the AER. More process control, ideally by several methods, is one option for identifying and improving these factors and thereby increasing overall reprocessing quality. ATP measurement appears to be a valid technique that allows an immediate repeat of the manual cleaning if the ATP result after manual cleaning exceeds the established cutoff of 200 RLU.
[Virulence markers of Escherichia coli O1 strains].
Makarova, M A; Kaftyreva, L A; Grigor'eva, N S; Kicha, E V; Lipatova, L A
2011-01-01
To detect virulence genes in clinical isolates of Escherichia coli O1 using polymerase chain reaction (PCR). One hundred and twenty E. coli O1 strains isolated from faeces of patients with acute diarrhea (n = 45) and healthy persons (n = 75) were studied. PCR with primers for rfb and fliC genes, which control synthesis of O- and H-antigens respectively, was used. Fourteen virulence genes (pap, aaf, sfa, afa, eaeA, bfpA, ial, hly, cnf, stx1, stx2, lt, st, and aer) were detected by PCR primers. K1 antigen was determined by the Pastorex Meningo B/E. coli O1 kit (Bio-Rad). The rfb gene controlling O-antigen synthesis in serogroup O1 as well as the fliC gene controlling synthesis of H7 and K1 antigens were detected in all strains. Thus all E. coli strains had the antigenic structure O1:K1:H-:F7. Virulence genes aaf, sfa, afa, eaeA, bfpA, ial, hly, cnf, stx1, stx2, lt, and st were not detected. All strains carried pap and aer genes regardless of the presence of acute diarrhea symptoms. It was shown that E. coli O1:K1:H-:F7 strains do not have the virulence genes characteristic of diarrhea-causing Escherichia. Based on the presence of pap and aer genes, they could be attributed to uropathogenic (UPEC) or avian-pathogenic (APEC) Escherichia. It is necessary to detect virulence factors in order to establish E. coli as a cause of intestinal infection.
NAS Grid Benchmarks: A Tool for Grid Space Exploration
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
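The NGB structure described above, a data-flow graph whose nodes each run a benchmark task and forward results to their successors, can be sketched as follows. The node names echo NPB codes, but the graph shape and the trivial "task" are invented placeholders.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical data-flow graph: node -> set of predecessor nodes.
# Edges carry the initialization data mentioned in the abstract.
graph = {"BT": set(), "SP": set(), "LU": {"BT", "SP"}, "FT": {"LU"}}

def run_task(name, inputs):
    """Placeholder for launching a benchmark code on some grid node."""
    return sum(inputs) + len(name)  # toy stand-in computation

# Execute nodes in dependency order, feeding each its predecessors' outputs.
results = {}
for node in TopologicalSorter(graph).static_order():
    results[node] = run_task(node, [results[p] for p in graph[node]])
    print(node, "->", results[node])
```

In a real implementation each `run_task` would dispatch an NPB instance to a (possibly remote) grid machine; the verification values mentioned in the report would then be checked against each node's output.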
Cloud Forming Potential of Aerosol from Light-duty Gasoline Direct Injection Vehicles
DOT National Transportation Integrated Search
2017-12-01
In this study, we evaluate the hygroscopicity and droplet kinetics of fresh and aged emissions from new generation gasoline direct injector engines retrofitted with a gasoline particulate filter (GPF). Furthermore, ageing and subsequent secondary aer...
U.S. EPA'S ACUTE REFERENCE EXPOSURE METHODOLOGY FOR ACUTE INHALATION EXPOSURES
The US EPA National Center for Environmental Assessment has developed a methodology to derive acute inhalation toxicity benchmarks, called acute reference exposures (AREs), for noncancer effects. The methodology provides guidance for the derivation of chemical-specific benchmark...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karl Anderson, Steve Plimpton
2015-01-27
The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
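The generator/analytic split described above can be sketched in miniature. The datum format and the anomaly rule below (a key that ever carries a sentinel value) are invented for illustration; the real FireHose benchmarks each define their own.

```python
import random

def generator(n, anomaly_rate=0.01, seed=7):
    """Emit n (key, value) datums; most values are 0/1, anomalies carry 9."""
    rng = random.Random(seed)
    for i in range(n):
        value = 9 if rng.random() < anomaly_rate else rng.randint(0, 1)
        yield (f"key{i % 1000}", value)

def analytic(stream):
    """Collect keys that ever carry the anomalous value."""
    flagged = set()
    for key, value in stream:
        if value == 9:
            flagged.add(key)
    return flagged

anomalies = analytic(generator(100_000))
print(f"{len(anomalies)} keys flagged as anomalous")
```

A real harness would additionally measure the sustained datum rate the analytic can keep up with, which is the quantitative axis of the suite.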
Reducing accounts receivable through benchmarking and best practices identification.
Berkey, T
1998-01-01
As HIM professionals look for ways to become more competitive and achieve the best results, the importance of discovering best practices becomes more apparent. Here's how one team used a benchmarking project to provide specific best practices that reduced accounts receivable days.
APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES
The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...
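For readers unfamiliar with the BMD concept, the calculation is simplest for a quantal one-hit model, where P(d) = p0 + (1 - p0)(1 - exp(-b·d)) gives extra risk ER(d) = 1 - exp(-b·d) and hence a closed-form benchmark dose. This is an illustrative model choice with an invented slope, not one of the five modeling approaches compared in the study.

```python
import math

def benchmark_dose(b, bmr=0.10):
    """Dose at which extra risk equals bmr under the one-hit model:
    solve 1 - exp(-b*d) = bmr  =>  d = -ln(1 - bmr) / b."""
    return -math.log(1.0 - bmr) / b

b = 0.021  # hypothetical fitted slope (per mg/kg-day)
for bmr in (0.01, 0.05, 0.10):
    print(f"BMR {bmr:.0%}: BMD = {benchmark_dose(b, bmr):.2f} mg/kg-day")
```

The developmental-toxicity-specific models in the study are more elaborate (accounting for litter effects, for instance), but each ultimately inverts a fitted dose-response curve at the benchmark response in the same way.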
Estimating and Valuing Morbidity in a Policy Context: Proceedings of June 1989 AERE Workshop (1989)
Contains the proceedings of the 1989 Association of Environmental and Resource Economists (AERE) Workshop on valuing reductions in human health morbidity risks. A series of papers and discussions was collected and reported in the document.
New NAS Parallel Benchmarks Results
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)
1997-01-01
NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.
Benchmark matrix and guide: Part II.
1991-01-01
In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.
The Suite for Embedded Applications and Kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-05-10
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed SEAK, a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions, and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software and hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
Virulence Genes and Antibiotic Susceptibilities of Uropathogenic E. coli Strains.
Uzun, Cengiz; Oncül, Oral; Gümüş, Defne; Alan, Servet; Dayioğlu, Nurten; Küçüker, Mine Anğ
2015-01-01
The aim of this study is to detect the presence of and possible relation between virulence genes and antibiotic resistance in E. coli strains isolated from patients with acute, uncomplicated urinary tract infections (UTI). 62 E. coli strains isolated from patients with acute, uncomplicated urinary tract infections (50 strains isolated from acute uncomplicated cystitis cases (AUC); 12 strains from acute uncomplicated pyelonephritis cases (AUP)) were screened for virulence genes [pap (pyelonephritis-associated pili), sfa/foc (S and F1C fimbriae), afa (afimbrial adhesins), hly (hemolysin), cnf1 (cytotoxic necrotizing factor), aer (aerobactin), PAI (pathogenicity island marker), iroN (catecholate siderophore receptor), ompT (outer membrane protein T), usp (uropathogenic specific protein)] by PCR and for antimicrobial resistance by disk diffusion method according to CLSI criteria. It was found that 56 strains (90.3%) carried at least one virulence gene. The most common virulence genes were ompT (79%), aer (51.6%), PAI (51.6%) and usp (56.5%). 60% of the strains were resistant to at least one antibiotic. The highest resistance rates were against ampicillin (79%) and co-trimoxazole (41.9%). Fifty percent of the E. coli strains (31 strains) were found to be multiple resistant. Eight (12.9%) out of 62 strains were found to be ESBL positive. Statistically significant relationships were found between the absence of usp and AMP - SXT resistance, iroN and OFX - CIP resistance, PAI and SXT resistance, cnf1 and AMP resistance, and a significant relationship was also found between the presence of the afa and OFX resistance. No difference between E. coli strains isolated from two different clinical presentations was found in terms of virulence genes and antibiotic susceptibility.
Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects
Lovaglio, Pietro Giorgio
2012-01-01
Over the last few years, increasing attention has been directed toward the problems inherent in measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140
Benchmark Problems for Spacecraft Formation Flying Missions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.
2003-01-01
To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.
A Field-Based Aquatic Life Benchmark for Conductivity in ...
This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
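The field-based derivation described above can be illustrated with a short sketch. This is a simplification, not the EPA procedure itself: it assumes an extirpation concentration (XC95) is already available for each genus and takes the 5th percentile of those values as the benchmark (the level expected to cause the loss of 5% of genera). The function name and all values below are hypothetical.

```python
import numpy as np

def hc05_benchmark(xc95_values):
    """5th percentile of genus extirpation concentrations (uS/cm):
    the conductivity expected to extirpate 5% of genera."""
    return float(np.percentile(np.asarray(xc95_values, dtype=float), 5))

# Hypothetical XC95 values for 20 stream genera (uS/cm)
xc95 = [120, 180, 210, 250, 300, 340, 400, 450, 520, 600,
        700, 800, 950, 1100, 1300, 1500, 1800, 2100, 2500, 3000]
benchmark = hc05_benchmark(xc95)
```

With NumPy's default linear interpolation between order statistics, this example yields a benchmark of 177 uS/cm; the report's actual method additionally involves weighted percentiles and regional data handling.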
Review of the GMD Benchmark Event in TPL-007-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backhaus, Scott N.; Rivera, Michael Kelly
2015-07-21
Los Alamos National Laboratory (LANL) examined the approaches suggested in NERC Standard TPL-007-1 for defining the geo-electric field for the Benchmark Geomagnetic Disturbance (GMD) Event, specifically: 1. estimating the 100-year exceedance geo-electric field magnitude; 2. the scaling of the GMD Benchmark Event to geomagnetic latitudes below 60 degrees north; and 3. the effect of uncertainties in earth conductivity data on the conversion from geomagnetic field to geo-electric field. This document summarizes the review and presents recommendations for consideration.
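The latitude scaling reviewed above can be sketched as follows. This is an illustration only: it assumes the commonly cited TPL-007 form for the latitude factor, alpha = 0.001 * e^(0.115 * latitude) capped at 1, applied to an 8 V/km reference field, and the beta factor for local earth conductivity is left as a plain user input rather than derived from conductivity models.

```python
import math

def latitude_scaling(geomag_lat_deg):
    """Latitude scaling factor for the benchmark GMD event, using the
    commonly cited TPL-007 form alpha = 0.001 * e^(0.115 * lat),
    capped at 1.0 (an assumption of this sketch)."""
    return min(1.0, 0.001 * math.exp(0.115 * geomag_lat_deg))

def scaled_peak_field(geomag_lat_deg, beta=1.0, reference_v_per_km=8.0):
    """Peak geoelectric field (V/km) scaled from the 8 V/km reference;
    beta stands in for the earth-conductivity scaling factor."""
    return reference_v_per_km * latitude_scaling(geomag_lat_deg) * beta
```

At 60 degrees geomagnetic latitude the factor is close to 1 (the reference condition), and it falls off rapidly toward lower latitudes.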
NASA Astrophysics Data System (ADS)
Waring, Michael S.
2016-11-01
Terpene ozonolysis reactions can be a strong source of secondary organic aerosol (SOA) indoors. SOA formation can be parameterized and predicted using the aerosol mass fraction (AMF), also known as the SOA yield, which quantifies the mass ratio of generated SOA to oxidized terpene. Limonene is a monoterpene present indoors at concentrations high enough for it to react meaningfully with ozone. It has two unsaturated bonds, and the magnitude of the limonene ozonolysis AMF varies by a factor of ∼4 depending on whether one or both of its unsaturated bonds are ozonated, which depends on whether ozone is in excess compared to limonene as well as the available time for reactions indoors. Hence, this study developed a framework to predict the limonene AMF as a function of the ozone [O3] and limonene [lim] concentrations and the air exchange rate (AER, h-1), which is the inverse of the residence time. Empirical AMF data were used to calculate a mixing coefficient, β, that would yield a 'resultant AMF' as the combination of the AMFs due to ozonolysis of one or both of limonene's unsaturated bonds, within the volatility basis set (VBS) organic aerosol framework. Then, β was regressed against predictors of log10([O3]/[lim]) and AER (R2 = 0.74). The β increased as log10([O3]/[lim]) increased and as AER decreased, driving the resultant AMF toward the upper AMF condition in which both unsaturated bonds of limonene are ozonated. Modeling demonstrates that using the correct resultant AMF to simulate SOA formation owing to limonene ozonolysis is crucial for accurate indoor prediction.
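The mixing described in this abstract can be illustrated with a minimal sketch. The linear combination and the regression coefficients below are assumptions for illustration only; the study fits β empirically within the VBS framework, and the AMF values shown are hypothetical.

```python
import math

def resultant_amf(amf_one_bond, amf_both_bonds, beta):
    """Resultant AMF as a beta-weighted combination of the AMFs for
    ozonolysis of one vs. both of limonene's unsaturated bonds."""
    return (1.0 - beta) * amf_one_bond + beta * amf_both_bonds

def beta_from_conditions(o3, limonene, aer, b0=0.5, b1=0.3, b2=-0.1):
    """Hypothetical linear predictor mirroring the reported regression of
    beta on log10([O3]/[lim]) and AER; the coefficients are placeholders.
    The signs match the abstract: beta rises with log10([O3]/[lim]) and
    falls with AER."""
    raw = b0 + b1 * math.log10(o3 / limonene) + b2 * aer
    return min(1.0, max(0.0, raw))  # clamp to the physical range [0, 1]
```

As ozone becomes more abundant relative to limonene, or the residence time grows (lower AER), β approaches 1 and the resultant AMF approaches the upper (both-bonds-ozonated) condition.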
High-quality endoscope reprocessing decreases endoscope contamination.
Decristoforo, P; Kaltseis, J; Fritz, A; Edlinger, M; Posch, W; Wilflingseder, D; Lass-Flörl, C; Orth-Höller, D
2018-02-24
Several outbreaks of severe infections due to contamination of gastrointestinal (GI) endoscopes, mainly duodenoscopes, have been described. The rate of microbial endoscope contamination varies dramatically in literature. The aim of this multicentre prospective study was to evaluate the hygiene quality of endoscopes and automated endoscope reprocessors (AERs) in Tyrol/Austria. In 2015 and 2016, a total of 463 GI endoscopes and 105 AERs from 29 endoscopy centres were analysed by a routine (R) and a combined routine and advanced (CRA) sampling procedure and investigated for microbial contamination by culture-based and molecular-based analyses. The contamination rate of GI endoscopes was 1.3%-4.6% according to the national guideline, suggesting that 1.3-4.6 patients out of 100 could have had contacts with hygiene-relevant microorganisms through an endoscopic intervention. Comparison of R and CRA sampling showed 1.8% of R versus 4.6% of CRA failing the acceptance criteria in phase I and 1.3% of R versus 3.0% of CRA samples failing in phase II. The most commonly identified indicator organism was Pseudomonas spp., mainly Pseudomonas oleovorans. None of the tested viruses were detected in 40 samples. While AERs in phase I failed (n = 9, 17.6%) mainly due to technical faults, phase II revealed lapses (n = 6, 11.5%) only on account of microbial contamination of the last rinsing water, mainly with Pseudomonas spp. In the present study the contamination rate of endoscopes was low compared with results from other European countries, possibly due to the high quality of endoscope reprocessing, drying and storage. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Timbo, Babgaleh B; Chirtel, Stuart J; Ihrie, John; Oladipo, Taiye; Velez-Suarez, Loy; Brewer, Vickery; Mozersky, Robert
2018-05-01
The Food and Drug Administration (FDA)'s Center for Food Safety and Applied Nutrition (CFSAN) oversees the safety of the nation's foods, dietary supplements, and cosmetic products. Our objectives were to present a descriptive analysis of the 2004-2013 dietary supplement adverse event report (AER) data from CAERS and to evaluate the 2006 Dietary Supplements and Nonprescription Drug Consumer Protection Act as it pertains to dietary supplement adverse event reporting. We queried CAERS for data from the 2004-2013 AERs specifying at least 1 suspected dietary supplement product. We extracted the product name(s), the symptom(s) reported, age, sex, and serious adverse event outcomes. We examined time trends for mandatory and voluntary reporting and performed analyses using SAS v9.4 and R v3.3.0 software. Of the total AERs (n = 15 430) received from January 1, 2004, through December 31, 2013, indicating at least 1 suspected dietary supplement product, 66.9% were mandatory, 32.2% were voluntary, and 0.9% were both mandatory and voluntary. Reported serious outcomes included death, life-threatening conditions, hospitalizations, congenital anomalies/birth defects, and events requiring interventions to prevent permanent impairments (5.1%). The dietary supplement adverse event reporting rate in the United States was estimated at ~2% based on CAERS data. This study characterizes CAERS dietary supplement adverse event data for the 2004-2013 period and estimates a reporting rate of 2% for dietary supplement adverse events based on CAERS data. The findings show that the 2006 Dietary Supplements and Nonprescription Drug Consumer Protection Act had a substantial impact on the reporting of adverse events.
Estimating the extent of reporting to FDA: a case study of statin-associated rhabdomyolysis.
McAdams, Mara; Staffa, Judy; Dal Pan, Gerald
2008-03-01
To estimate the extent of reporting to FDA using statin-associated rhabdomyolysis data. Data included incidence rates (IRs) of hospitalized rhabdomyolysis among statin users from a population-based study, and comparable reported AERS cases and national estimates of statin use from an AERS analysis. Using IRs, national estimates of statin use, and average days supply per prescription, we estimated the number of US statin-associated cases of hospitalized rhabdomyolysis. We compared this estimate to the observed number of cases reported to FDA to evaluate the extent of reporting. We repeated this method for atorvastatin, cerivastatin, pravastatin, and simvastatin and statin combinations. We performed sensitivity analyses to check for biases such as misclassification of statin use and cohort selection bias. We evaluated potential time-dependent changes in cerivastatin reporting following a "Dear Health Care Provider (DHCP)" letter. The estimated extent of reporting to FDA varied by statin (atorvastatin, 5.0%; cerivastatin, 31.2%; simvastatin, 14.2%; all four combined, 17.7%; and non-cerivastatin statins combined, 9.9%). No pravastatin-associated cohort cases occurred. Across a reasonable value range, sensitivity analyses did not significantly alter the results; overall the cohort was similar to national statin-users. There was a large increase in AERS reports after the cerivastatin DHCP letter, and the estimated extent of reporting increased from 14.8 to 35.0%. The extent of reporting of adverse events to FDA varied by statin and may be influenced by publicity. For statin-associated rhabdomyolysis, the estimated extent of reporting appears to range from 5 to 30% but in the absence of stimulated reporting appears to be 5-15%. Copyright 2008 John Wiley & Sons, Ltd.
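The estimation logic in this abstract (expected national cases derived from incidence rates and national drug use, compared against cases actually reported) can be sketched as below. The function names and all numbers are hypothetical illustrations; the study's actual person-time accounting is more involved.

```python
def person_years(n_prescriptions, avg_days_supply):
    """Approximate person-years of statin exposure from national
    prescription counts and average days supplied per prescription."""
    return n_prescriptions * avg_days_supply / 365.25

def extent_of_reporting(reported_cases, incidence_per_person_year, exposure_py):
    """Reported cases as a fraction of the expected national case count,
    where expected = incidence rate x person-years of exposure."""
    expected = incidence_per_person_year * exposure_py
    return reported_cases / expected

# Hypothetical numbers: 10M prescriptions with a 30-day supply,
# IR = 1 per 10,000 person-years, 41 cases reported to AERS.
py = person_years(10_000_000, 30)
reporting_fraction = extent_of_reporting(41, 1e-4, py)
```

Under these made-up inputs, roughly half of the expected cases would have reached AERS; the abstract's point is that this fraction varies by drug and is sensitive to publicity-driven reporting.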
The stress-buffering effect of acute exercise: Evidence for HPA axis negative feedback.
Zschucke, Elisabeth; Renneberg, Babette; Dimeo, Fernando; Wüstenberg, Torsten; Ströhle, Andreas
2015-01-01
According to the cross-stressor adaptation hypothesis, physically trained individuals show lower physiological and psychological responses to stressors other than exercise, e.g. psychosocial stress. Reduced stress reactivity may constitute a mechanism of action for the beneficial effects of exercise in maintaining mental health. With regard to neural and psychoneuroendocrine stress responses, the acute stress-buffering effects of exercise have not been investigated yet. A sample of highly trained (HT) and sedentary (SED) young men was randomized to either exercise on a treadmill at moderate intensity (60-70% VO2max; AER) for 30 min, or to perform 30 min of "placebo" exercise (PLAC). 90 min later, an fMRI experiment was conducted using an adapted version of the Montreal Imaging Stress Task (MIST). The subjective and psychoneuroendocrine (cortisol and α-amylase) changes induced by the exercise intervention and the MIST were assessed, as well as neural activations during the MIST. Finally, associations between the different stress responses were analysed. Participants of the AER group showed a significantly reduced cortisol response to the MIST, which was inversely related to the previous exercise-induced α-amylase and cortisol fluctuations. With regard to the sustained BOLD signal, we found higher bilateral hippocampus (Hipp) activity and lower prefrontal cortex (PFC) activity in the AER group. Participants with a higher aerobic fitness showed lower cortisol responses to the MIST. As the Hipp and PFC are brain structures prominently involved in the regulation of the hypothalamus-pituitary-adrenal (HPA) axis, these findings indicate that the acute stress-buffering effect of exercise relies on negative feedback mechanisms. Positive affective changes after exercise appear as important moderators largely accounting for the effects related to physical fitness. Copyright © 2014 Elsevier Ltd. All rights reserved.
Breen, Michael; Xu, Yadong; Schneider, Alexandra; Williams, Ronald; Devlin, Robert
2018-06-01
Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates, which can induce exposure error. The goal of this study was to improve ambient PM2.5 exposure assessments for a repeated measurements study with 22 diabetic individuals in central North Carolina called the Diabetes and Environment Panel Study (DEPS) by applying the Exposure Model for Individuals (EMI), which predicts five tiers of individual-level exposure metrics for ambient PM2.5 using outdoor concentrations, questionnaires, weather, and time-location information. Using EMI, we linked a mechanistic air exchange rate (AER) model to a mass-balance PM2.5 infiltration model to predict residential AER (Tier 1), infiltration factors (Finf_home, Tier 2), indoor concentrations (Cin, Tier 3), personal exposure factors (Fpex, Tier 4), and personal exposures (E, Tier 5) for ambient PM2.5. We applied EMI to predict daily PM2.5 exposure metrics (Tiers 1-5) for 174 participant-days across the 13 months of DEPS. Individual model predictions were compared to a subset of daily measurements of Fpex and E (Tiers 4-5) from the DEPS participants. Model-predicted Fpex and E corresponded well to daily measurements with a median difference of 14% and 23%, respectively. Daily model predictions for all 174 days showed considerable temporal and house-to-house variability of AER, Finf_home, and Cin (Tiers 1-3), and person-to-person variability of Fpex and E (Tiers 4-5). Our study demonstrates the capability of predicting individual-level ambient PM2.5 exposure metrics for an epidemiological study, in support of improving risk estimation. Copyright © 2018. Published by Elsevier B.V.
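The tiered structure described above can be illustrated with a toy single-zone sketch. This is not the EMI implementation: the steady-state form F_inf = P*AER/(AER + k), the penetration and deposition values, and the two-microenvironment time weighting are generic assumptions for illustration only.

```python
def infiltration_factor(aer, penetration=0.8, deposition=0.2):
    """Tier 2: steady-state infiltration factor from a single-zone mass
    balance, F_inf = P*AER / (AER + k). P and k are illustrative values."""
    return penetration * aer / (aer + deposition)

def indoor_concentration(c_out, aer, **kwargs):
    """Tier 3: indoor concentration of ambient-origin PM2.5 (ug/m3)."""
    return infiltration_factor(aer, **kwargs) * c_out

def personal_exposure_factor(f_inf_home, frac_time_home):
    """Tier 4 (simplified): time-weighted factor assuming time is split
    between home indoors and outdoors (outdoor factor = 1)."""
    return frac_time_home * f_inf_home + (1.0 - frac_time_home)

def personal_exposure(c_out, f_pex):
    """Tier 5: personal exposure to ambient PM2.5 (ug/m3)."""
    return f_pex * c_out
```

For example, at AER = 0.5 per hour and an outdoor concentration of 10 ug/m3, this toy chain yields an infiltration factor of about 0.57 and, for someone spending 70% of their time at home, a personal exposure of about 7 ug/m3.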
XWeB: The XML Warehouse Benchmark
NASA Astrophysics Data System (ADS)
Mahboubi, Hadj; Darmont, Jérôme
With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
This newsletter reports on the Huber Technology Groups (HTG) high temperature advanced hazardous waste treatment technology capable of very high destruction and removal efficiencies of various hazardous wastes. This newsletter addresses the destruction of PCBs in an EPA certifi...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marck, Steven C. van der, E-mail: vandermarck@nrg.eu
Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory.
Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such instances can often be related to nuclear data for specific non-fissile elements, such as C, Fe, or Gd. Indications are that the intermediate and mixed spectrum cases are less well described. The results for the shielding benchmarks are generally good, with very similar results for the three libraries in the majority of cases. Nevertheless there are, in certain cases, strong deviations between calculated and benchmark values, such as for Co and Mg. Also, the results show discrepancies at certain energies or angles for e.g. C, N, O, Mo, and W. The functionality of MCNP6 to calculate the effective delayed neutron fraction yields very good results for all three libraries.
Amiodarone-Associated Optic Neuropathy: A Critical Review
Passman, Rod S.; Bennett, Charles L.; Purpura, Joseph M.; Kapur, Rashmi; Johnson, Lenworth N.; Raisch, Dennis W.; West, Dennis P.; Edwards, Beatrice J.; Belknap, Steven M.; Liebling, Dustin B.; Fisher, Mathew J.; Samaras, Athena T.; Jones, Lisa-Gaye A.; Tulas, Katrina-Marie E.; McKoy, June M.
2011-01-01
Although amiodarone is the most commonly prescribed antiarrhythmic drug, its use is limited by serious toxicities, including optic neuropathy. Current reports of amiodarone-associated optic neuropathy identified from the Food and Drug Administration's Adverse Event Reporting System (FDA-AERS) and published case reports were reviewed. A total of 296 reports were identified: 214 from AERS, 59 from published case reports, and 23 from adverse event reports for patients enrolled in clinical trials. Mean duration of amiodarone therapy before vision loss was 9 months (range 1-84 months). Insidious onset of amiodarone-associated optic neuropathy (44%) was the most common presentation, and nearly one-third of patients were asymptomatic. Optic disc edema was present in 85% of cases. Following drug cessation, 58% of patients had improved visual acuity, 21% were unchanged, and 21% had further decreased visual acuity. Legal blindness (< 20/200) was noted in at least one eye in 20% of cases. Close ophthalmologic surveillance of patients during amiodarone therapy is warranted. PMID:22385784
Determination of 99Tc in fresh water using TRU resin by ICP-MS.
Guérin, Nicolas; Riopel, Remi; Kramer-Tremblay, Sheila; de Silva, Nimal; Cornett, Jack; Dai, Xiongxin
2017-10-02
Technetium-99 (99Tc) determination at trace level by inductively coupled plasma mass spectrometry (ICP-MS) is challenging because there is no readily available appropriate Tc isotopic tracer. A new method using Re as a recovery tracer to determine 99Tc in fresh water samples, which does not require any evaporation step, was developed. Tc(VII) and Re(VII) were pre-concentrated on a small anion exchange resin (AER) cartridge from one litre of water sample. They were then efficiently eluted from the AER using a potassium permanganate (KMnO4) solution. After the reduction of KMnO4 in 2 M sulfuric acid solution, the sample was passed through a small TRU resin cartridge. Tc(VII) and Re(VII) retained on the TRU resin were eluted using near-boiling water, which can be directly used for the ICP-MS measurement. The results for method optimisation, validation and application were reported. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm; Weisenstein, Debra; Rodriquez, Jose; Danilin, Michael; Scott, Courtney; Shia, Run-Lie; Eluszkiewicz, Janusz; Sze, Nien-Dak; Stewart, Richard W. (Technical Monitor)
1999-01-01
This is the final report for NAS5-97039 for work performed between December 1996 and November 1999. The overall objective of this project is to improve the understanding of coupling processes among atmospheric chemistry, aerosol and climate, all important for quantitative assessments of global change. Among our priorities are changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The work emphasizes two important aspects: (1) AER's continued participation in preparation of, and providing scientific input for, various scientific reports connected with assessment of stratospheric ozone and climate. These include participation in various model intercomparison exercises as well as preparation of national and international reports. (2) Continued development of the AER three-wave interactive model to address how the transport circulation will change as ozone and the thermal properties of the atmosphere change, and to assess how these new findings will affect our confidence in the ozone assessment results.
[Real-time PCR in rapid diagnosis of Aeromonas hydrophila necrotizing soft tissue infections].
Kohayagawa, Yoshitaka; Izumi, Yoko; Ushita, Misuzu; Niinou, Norio; Koshizaki, Masayuki; Yamamori, Yuji; Kaneko, Sakae; Fukushima, Hiroshi
2009-11-01
We report a case of rapidly progressive necrotizing soft tissue infection and sepsis followed by the patient's death. We suspected Vibrio vulnificus infection because the patient's underlying disease was cirrhosis and the course was extremely rapid. No microbe had been detected at the time of death. We extracted DNA from a blood culture bottle. SYBR green I real-time PCR was conducted but could not detect V. vulnificus vvh in the DNA sample. Aeromonas hydrophila was cultured and identified in blood and necrotized tissue samples. Real-time PCR was conducted to detect A. hydrophila ahh1, AHCYTOEN and aerA in the DNA sample extracted from the blood culture bottle and an isolated necrotized tissue strain, but only ahh1 was positive. The high mortality of necrotizing soft tissue infections makes it crucial to quickly detect V. vulnificus and A. hydrophila. We found real-time PCR for vvh, ahh1, AHCYTOEN, and aerA useful in detecting V. vulnificus and A. hydrophila in necrotizing soft tissue infections.
Microphysical and Optical Properties of Saharan Dust Measured during the ICE-D Aircraft Campaign
NASA Astrophysics Data System (ADS)
Ryder, Claire; Marenco, Franco; Brooke, Jennifer; Cotton, Richard; Taylor, Jonathan
2017-04-01
During August 2015, the UK FAAM BAe146 research aircraft was stationed in Cape Verde off the coast of West Africa. Measurements of Saharan dust, and ice and liquid water clouds, were taken for the ICE-D (Ice in Clouds Experiment - Dust) project - a multidisciplinary project aimed at further understanding aerosol-cloud interactions. Six flights formed part of a sub-project, AER-D, solely focussing on measurements of Saharan dust within the African dust plume. Dust loadings observed during these flights varied (aerosol optical depths of 0.2 to 1.3), as did the vertical structure of the dust, the size distributions and the optical properties. The BAe146 was fully equipped to measure size distributions covering aerosol accumulation, coarse and giant modes. Initial results of size distribution and optical properties of dust from the AER-D flights will be presented, showing that a substantial coarse mode was present, in agreement with previous airborne measurements. Optical properties of dust relating to the measured size distributions will also be presented.
Continued development and validation of the AER two-dimensional interactive model
NASA Technical Reports Server (NTRS)
Ko, M. K. W.; Sze, N. D.; Shia, R. L.; Mackay, M.; Weisenstein, D. K.; Zhou, S. T.
1996-01-01
Results from two-dimensional chemistry-transport models have been used to predict the future behavior of ozone in the stratosphere. Since the transport circulation, temperature, and aerosol surface area are fixed in these models, they cannot account for the effects of changes in these quantities, which could be modified because of ozone redistribution and/or other changes in the troposphere associated with climate change. Interactive two-dimensional models, which calculate the transport circulation and temperature along with concentrations of the chemical species, could provide answers to complement the results from three-dimensional model calculations. In this project, we performed the following tasks: (1) we continued to refine the 2-D chemistry-transport model; (2) we developed a microphysics model to calculate the aerosol loading and its size distribution; and (3) the treatment of physics in the AER 2-D interactive model was refined in the following areas: the heating rate in the troposphere, and wave-forcing from the propagation of planetary waves.
Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)
2002-01-01
The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.
Ellis, Judith
2006-07-01
The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles, to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, and the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, is not always recognized or provided by National Health Service managers, who are absorbed in the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was based on an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving through benchmarking activity in health services, and including consideration of web-based benchmarking data as well as published examples of benchmarking approaches and models. This supported identification of how benchmarking approaches have developed and been used while remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and, in turn, applied to the health service (Bullivant 1998).
The literature is also largely descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted the popularity of quantitative approaches, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.
IMAGESEER - IMAGEs for Education and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara
2012-01-01
IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.
A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.
Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin
2015-12-01
Face recognition with still face images has been widely studied, while research on video-based face recognition remains relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), taking video or still images as query or target, respectively. To the best of our knowledge, few datasets and evaluation protocols have been established to benchmark all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, to benchmark the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can serve as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that COX Face DB is a good benchmark database for evaluation.
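Set-based matching of the kind the paper reviews can be illustrated with a minimal sketch: score a still-image query against a set of video-frame features by their best cosine similarity. This is only an illustrative baseline, not the PSCL method proposed in the paper, and all feature data below are synthetic.

```python
import numpy as np

def still_to_video_score(query_feat, frame_feats):
    """Max cosine similarity between one still-image feature vector and a
    set of video-frame feature vectors (a simple set-based baseline)."""
    q = query_feat / np.linalg.norm(query_feat)
    F = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    return float(np.max(F @ q))

# Toy example with synthetic 64-dimensional features.
rng = np.random.default_rng(0)
identity = rng.normal(size=64)
same_frames = identity + 0.1 * rng.normal(size=(20, 64))   # frames of the same person
other_frames = rng.normal(size=(20, 64))                   # frames of someone else

# The matching identity's frame set scores higher than an impostor's.
assert still_to_video_score(identity, same_frames) > still_to_video_score(identity, other_frames)
```

V2S and S2V differ only in which side of this comparison is the query; a V2V score would compare two frame sets, e.g. by the best pairwise similarity.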
Development of an Automated Emergency Response System (AERS) for Rail Transit Systems
DOT National Transportation Integrated Search
1984-10-01
As a result of a fire in 1979 at the Bay Area Rapid Transit District (BART), a microprocessor-based information retrieval system was developed to aid in the emergency decision-making process. This system was proposed, designed and programmed by a sup...
Building Innovation: Learning with Technologies. Australian Education Review Number 56
ERIC Educational Resources Information Center
Moyle, Kathryn
2010-01-01
Australian Education Review (AER) 56 explores national and international policy priorities for building students' innovation capabilities through information and communication technologies (ICT) in Australian schools. Section 1 sets out the Australian policy context for digital education and highlights some of the emerging challenges. It provides…
CASE HISTORY OF FINE PORE DIFFUSER RETROFIT AT RIDGEWOOD, NEW JERSEY
In April 1983, the Ridgewood, New Jersey Wastewater Treatment Plant underwent a retrofit from a coarse bubble to a fine pore aeration system. Also, process modification from contact stabilization to tapered aeration occurred. This report presents a case history of plant and aer...
Poluzzi, Elisabetta; Raschi, Emanuel; Motola, Domenico; Moretti, Ugo; De Ponti, Fabrizio
2010-04-01
Drug-induced torsades de pointes (TdP) is a complex regulatory and clinical problem due to the rarity of this sometimes fatal adverse event. In this context, the US FDA Adverse Event Reporting System (AERS) is an important source of information, which can be applied to the analysis of the TdP liability of marketed drugs. Our objective was to critically evaluate the risk of antimicrobial-induced TdP by detecting alert signals in the AERS, on the basis of both quantitative and qualitative analyses. Reports of TdP from January 2004 through December 2008 were retrieved from the public version of the AERS. The absolute number of cases and the reporting odds ratio, as a measure of disproportionality, were evaluated for each antimicrobial drug (quantitative approach). A list of drugs with suspected TdP liability (provided by the Arizona Centre of Education and Research on Therapeutics [CERT]) was used as a reference to define signals. In a further analysis, to refine signal detection, we identified TdP cases without co-medications listed by the Arizona CERT (qualitative approach). Over the 5-year period, 374 reports of TdP were retrieved: 28 antibacterials, 8 antifungals, 1 antileprosy drug and 26 antivirals were involved. The antimicrobials most frequently reported were levofloxacin (55) and moxifloxacin (37) among the antibacterials, fluconazole (47) and voriconazole (17) among the antifungals, and lamivudine (8) and nelfinavir (6) among the antivirals. Significant disproportionality was observed for 17 compounds, including several macrolides, fluoroquinolones, linezolid, triazole antifungals, caspofungin, indinavir and nelfinavir. With the qualitative approach, we identified the following additional drugs or fixed-dose combinations, characterized by at least two TdP cases without co-medications listed by the Arizona CERT: ceftriaxone, piperacillin/tazobactam, cotrimoxazole, metronidazole, ribavirin, lamivudine and lopinavir/ritonavir.
Disproportionality for macrolides, fluoroquinolones and most of the azole antifungals should be viewed as 'expected' according to the Arizona CERT list. By contrast, signals were generated for linezolid, caspofungin, posaconazole, indinavir and nelfinavir. Drugs detected only by the qualitative approach should be further investigated by increasing the sensitivity of the method, e.g. by also searching for the TdP surrogate marker, prolongation of the QT interval. The freely available version of the FDA AERS database represents an important source for detecting signals of TdP. In particular, our analysis generated five signals among antimicrobials for which further investigation and active surveillance are warranted. These signals should be considered in evaluating the benefit-risk profile of these drugs.
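The reporting odds ratio used as the disproportionality measure above is computed from a 2x2 contingency table of spontaneous reports. A minimal sketch, with entirely hypothetical counts (the paper's actual report counts are not reproduced here):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR for a drug/event pair in a spontaneous-report database.
      a: reports of the event (e.g. TdP) with the drug of interest
      b: reports of other events with the drug of interest
      c: reports of the event with all other drugs
      d: reports of other events with all other drugs
    Returns (ROR, lower, upper) with a 95% confidence interval."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(ROR)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

# Hypothetical counts: 10 TdP reports out of 2,000 for one drug,
# 364 TdP reports out of 498,000 for all other drugs.
ror, lower, upper = reporting_odds_ratio(10, 1990, 364, 497636)
# A disproportionality signal is commonly flagged when the lower
# bound of the 95% CI exceeds 1.
print(f"ROR={ror:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```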
Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions
ERIC Educational Resources Information Center
Sae-Khow, Jirasak
2014-01-01
This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…
Mathematics Content Standards Benchmarks and Performance Standards
ERIC Educational Resources Information Center
New Mexico Public Education Department, 2008
2008-01-01
New Mexico Mathematics Content Standards, Benchmarks, and Performance Standards identify what students should know and be able to do across all grade levels, forming a spiraling framework in the sense that many skills, once introduced, develop over time. While the Performance Standards are set forth at grade-specific levels, they do not exist as…
ERIC Educational Resources Information Center
Kroll, Juidith A.
2012-01-01
The inaugural Advancement Investment Metrics Study, or AIMS, benchmarked investments and staffing in each of the advancement disciplines (advancement services, alumni relations, communications and marketing, fundraising and advancement management) as well as the return on the investment in fundraising specifically. This white paper reports on the…
NASA Astrophysics Data System (ADS)
Rohrer, Brandon
2010-12-01
Measuring progress in the field of Artificial General Intelligence (AGI) can be difficult without commonly accepted methods of evaluation. An AGI benchmark would allow evaluation and comparison of the many computational intelligence algorithms that have been developed. In this paper I propose that a benchmark for natural world interaction would possess seven key characteristics: fitness, breadth, specificity, low cost, simplicity, range, and task focus. I also outline two benchmark examples that meet most of these criteria. In the first, the direction task, a human coach directs a machine to perform a novel task in an unfamiliar environment. The direction task is extremely broad, but may be idealistic. In the second, the AGI battery, AGI candidates are evaluated based on their performance on a collection of more specific tasks. The AGI battery is designed to be appropriate to the capabilities of currently existing systems. Both the direction task and the AGI battery would require further definition before implementation. The paper concludes with a description of a task that might be included in the AGI battery: the search and retrieve task.
Bonnet, F; Solignac, S; Marty, J
2008-03-01
The purpose of benchmarking is to establish improvement processes by comparing activities against quality standards. The proposed methodology is illustrated by benchmark business cases performed inside medical facilities on items such as nosocomial diseases or the organization of surgery facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-scorecard figures and mappings, so that comparison between different anesthesia-reanimation services willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate where detailed tariffs of activities are implemented.
The Roots of Individuality: Brain Waves and Perception.
ERIC Educational Resources Information Center
Rosenfeld, Anne H.; Rosenfeld, Sam A.
Described is research using computer techniques to study the brain's perceptual systems in both normal and pathological groups, including hyperactive children (6-12 years old). Reviewed are the early studies of A. Petrie, M. Buchsbaum, and J. Silverman using the electroencephalograph to obtain AER (average evoked response) records of…
Sensitivity of Geoelectrical Measurements to the Presence of Bacteria in Porous Media
We investigated the sensitivity of low frequency electrical measurements (0.1-1000 Hz) to (i) microbial cell density, (ii) live and dead cells, and (iii) microbial attachment onto mineral surfaces of clean quartz sands and iron oxide coated sands. Three strains of Pseudomonas aer...
75 FR 56086 - Combined Notice of Filings # 1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-15
.... Applicants: Iberdrola Renewables; Shiloh I Wind Project LLC; Dillon Wind LLC; Dry Lake Wind Power, LLC... Operator submits tariff filing per 35.13(a)(2)(iii): Site/Interconnection Agreements between O&R and AER NY... Commission encourages electronic submission of protests and interventions in lieu of paper, using the FERC...
A Study of Semantic Features: Electrophysiological Correlates.
ERIC Educational Resources Information Center
Wetzel, Frederick; And Others
This study investigates whether words differing in a single contrastive semantic feature (positive/negative) can be discriminated by auditory evoked responses (AERs). Ten right-handed college students were provided with auditory stimuli consisting of 20 relational words (more/less; high/low, etc.) spoken with a middle American accent and computer…
Previous exposure assessment panel studies have observed considerable seasonal, between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure ...
Epidemiological studies frequently use central site concentrations as surrogates of exposure to air pollutants. Variability in air pollutant infiltration due to differential air exchange rates (AERs) is potentially a major factor affecting the relationship between central site c...
IT-benchmarking of clinical workflows: concept, implementation, and evaluation.
Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula
2014-01-01
Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means of providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking; their hospitals were assigned to reference groups of similar size and ownership drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs to get a general overview, advance IT, prepare negotiations with board members, and argue for new IT projects.
Generation of openEHR Test Datasets for Benchmarking.
El Helou, Samar; Karvonen, Tuukka; Yamamoto, Goshiro; Kume, Naoto; Kobayashi, Shinji; Kondo, Eiji; Hiragi, Shusuke; Okamoto, Kazuya; Tamura, Hiroshi; Kuroda, Tomohiro
2017-01-01
openEHR is a widely used EHR specification. Given its technology-independent nature, different approaches for implementing openEHR data repositories exist. Public openEHR datasets are needed to conduct benchmark analyses over different implementations. To address their current unavailability, we propose a method for generating openEHR test datasets that can be publicly shared and used.
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, C G; Mathews, S
2006-09-07
Current regulatory schemes use generic or industrial-sector-specific benchmarks to evaluate the quality of industrial stormwater discharges. While benchmarks can be a useful tool for facility stormwater managers in evaluating the quality of stormwater runoff, they typically do not take into account site-specific conditions, such as soil chemistry, atmospheric deposition, seasonal changes in water source, and upstream land use. Failing to account for these factors may lead to unnecessary costs to trace a source of natural variation, or to missing a significant local water quality problem. Site-specific water quality thresholds, established through statistical evaluation of historic data, take these factors into account; they are a better tool for the direct evaluation of runoff quality, and a more cost-effective trigger for investigating anomalous results. Lawrence Livermore National Laboratory (LLNL), a federal facility, established stormwater monitoring programs to comply with the requirements of the industrial stormwater permit and Department of Energy orders, which require evaluation of the impact of effluent discharges on the environment. LLNL recognized the need for a tool to evaluate and manage stormwater quality that would allow analysts to identify trends in stormwater quality and recognize anomalous results so that trace-back and corrective actions could be initiated. LLNL created the site-specific water quality threshold tool to better understand the nature of the stormwater influent and effluent, to establish a technical basis for determining when facility operations might be impacting the quality of stormwater discharges, and to provide 'action levels' that initiate follow-up to analytical results. The threshold criteria were based on a statistical analysis of the historic stormwater monitoring data and a review of relevant water quality objectives.
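A site-specific threshold of the kind described can be sketched as a simple statistical screen over historic monitoring data. The percentile rule, constituent, and numbers below are illustrative assumptions, not LLNL's actual criteria:

```python
import statistics

def site_specific_threshold(historic_values, z=1.96):
    """Action level for one monitored constituent, set from the site's own
    historic stormwater results rather than a generic benchmark
    (illustrative rule: mean + z standard deviations of past results)."""
    mean = statistics.mean(historic_values)
    sd = statistics.pstdev(historic_values)
    return mean + z * sd

# Hypothetical historic copper results (ug/L) at one outfall:
history = [4.1, 3.8, 5.0, 4.4, 6.2, 3.9, 4.7, 5.5]
threshold = site_specific_threshold(history)

new_result = 9.8
if new_result > threshold:
    print("anomalous result: initiate trace-back and corrective action")
```

The same comparison against a generic sector-wide benchmark could flag results that are entirely normal for the site, or miss ones that are locally anomalous, which is the cost/coverage trade-off the abstract describes.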
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. 
These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work addressed benchmark performance prediction: a new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance, studying the performance impact of optimization within our abstract-machine-based methodology for CPU performance characterization. Benchmark programs are analyzed in another paper: a machine-independent model of program execution was developed to characterize both machine performance and program execution, and by merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as those smaller in magnitude supported by this grant, are summarized more specifically in this report.
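The merged machine/program characterization described above can be illustrated as a linear model: reduce the program to counts of abstract operations, the machine to a time per operation, and estimate run time as their inner product. The operation names and numbers below are hypothetical, not taken from the grant reports:

```python
# Hypothetical abstract-machine characterization: seconds per operation,
# as a machine characterizer might measure for one system.
machine = {"add": 5e-9, "mul": 8e-9, "load": 12e-9, "branch": 6e-9}

# Hypothetical machine-independent program characterization:
# dynamic operation counts for one benchmark program.
program = {"add": 4_000_000, "mul": 1_500_000, "load": 3_000_000, "branch": 800_000}

# Merging the two characterizations predicts execution time for this
# machine/program pair without running the program on the machine.
estimated = sum(program[op] * machine[op] for op in program)
print(f"estimated run time: {estimated:.4f} s")
```

Swapping in a different `machine` table predicts the same program's time on another system, which is the point of keeping the two characterizations separate.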
This study was designed to provide understanding of the toxicity of naturally occurring asbestos (NOA) including Libby amphibole (LA), Sumas Mountain chrysotile (SM), El Dorado Hills tremolite (ED) and Ontario ferroactinolite cleavage fragments (ON). Rat-respirable fractions (aer...
ERIC Educational Resources Information Center
Tytler, Russell
2007-01-01
Australian Education Review (AER) 51 elaborates on issues raised by the Australian Council for Educational Research (ACER) Research Conference 2006: "Boosting Science Learning--What Will It Take?" It challenges current orthodoxies in science education and proposes a re-imagining that charts new directions for science teaching and…
between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...
ERIC Educational Resources Information Center
Keogh, Jayne; Garvis, Susanne; Pendergast, Donna; Diamond, Pat
2012-01-01
The intensification process associated with the first year of teaching has a significant impact on beginning teachers' personal and professional lives. This paper uses a narrative approach to investigate the electronic conversations of 16 beginning teachers on a self-initiated group email site. The participants' electronic exchanges demonstrated…
A neuro-inspired spike-based PID motor controller for multi-motor robots with low cost FPGAs.
Jimenez-Fernandez, Angel; Jimenez-Moreno, Gabriel; Linares-Barranco, Alejandro; Dominguez-Morales, Manuel J; Paz-Vicente, Rafael; Civit-Balcells, Anton
2012-01-01
In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented on FPGAs. The controller focuses on controlling DC motor speed, using only spikes for information representation, processing and DC motor driving; it could be applied to other motors with proper driver adaptation. This controller architecture represents one of the latest layers in a Spiking Neural Network (SNN), implementing a bridge between robotic actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real time through specialized spike-based circuits in a massively parallel information processing system. This spike-based closed-loop controller has been implemented on an AER platform designed in our labs that allows direct control of DC motors: the AER-Robot. Experimental results demonstrate the viability of spike-based controllers, and hardware synthesis shows low hardware requirements, allowing this controller to be replicated in a large number of parallel controllers working together for real-time robot control.
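For reference, the conventional (non-spiking) form of the PID law the controller implements can be sketched in a few lines; the gains and first-order plant model below are illustrative assumptions, not values from the paper, and the spike-based VHDL version encodes the same quantities as spike streams rather than floats.

```python
class PID:
    """Discrete PID speed controller (conventional frame of reference for
    the spike-based version; gains and plant are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order DC-motor model: speed relaxes toward the drive command.
pid = PID(kp=0.8, ki=2.0, kd=0.01, dt=0.01)
speed, dt = 0.0, 0.01
for _ in range(1000):
    drive = pid.step(setpoint=100.0, measured=speed)
    speed += dt * (drive - speed)  # simple plant dynamics

assert abs(speed - 100.0) < 1.0  # settles near the target speed
```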
Cheel, José; Minceva, Mirjana; Urajová, Petra; Aslam, Rabya; Hrouzek, Pavel; Kopecký, Jiří
2015-10-01
Aeruginosin-865 (Aer-865) was isolated from cultivated soil cyanobacteria using a combination of centrifugal partition chromatography (CPC) and gel permeation chromatography. The solubility of Aer-865 in different solvents was evaluated using the conductor-like screening model for real solvents (COSMO-RS). The CPC separation was performed in descending mode with a biphasic solvent system composed of water-n-BuOH-acetic acid (5:4:1, v/v/v). The upper phase was used as the stationary phase, whereas the lower phase was employed as the mobile phase at a flow rate of 10 mL/min. The revolution speed and temperature of the separation column were 1700 rpm and 25 °C, respectively. Preparative CPC separation followed by gel permeation chromatography was performed on 50 mg of crude extract, yielding Aer-865 (3.5 mg) with a purity of over 95% as determined by HPLC. The chemical identity of the isolated compound was confirmed by comparing its spectroscopic data (UV, HRESI-MS, HRESI-MS/MS) with those of an authentic standard and with data available in the literature.
Robust and Opportunistic Autonomous Science for a Potential Titan Aerobot
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara; Schaffer, Steve; Castano, Rebecca; Elfes, Alberto
2010-01-01
We are developing onboard planning and execution technologies to provide robust and opportunistic mission operations for a potential Titan aerobot. Aerobots have the potential to collect a vast amount of high-priority science data. However, to be effective, an aerobot must address several challenges, including communication constraints, extended periods without contact with Earth, uncertain and changing environmental conditions, maneuverability constraints, and potentially short-lived science opportunities. We are developing the AerOASIS system to build and test technology supporting autonomous science operations for a potential Titan aerobot. The planning and execution component of AerOASIS can generate mission operations plans that achieve science and engineering objectives while respecting mission and resource constraints, and it can adapt the plan to respond to new science opportunities. Our technology leverages prior work on the OASIS system for autonomous rover exploration. In this paper we describe how the OASIS planning component was adapted to address the unique challenges of a Titan aerobot, and we describe a field demonstration of the system with the JPL prototype aerobot.
NASA Astrophysics Data System (ADS)
Liu, Jian; Li, Jia; Cheng, Xu; Wang, Huaming
2018-02-01
In this paper, the process of coating AerMet100 steel onto forged 300M steel by laser cladding was investigated, with a thorough analysis of the chemical composition, microstructure, and hardness of the substrate, the cladding layer, and the transition zone. Results show that the composition and microhardness of the cladding layer are macroscopically homogeneous, with uniformly distributed bainite and a small amount of retained austenite in a martensite matrix. The transition zone, which spans approximately 100 μm, exhibits a gradual change of composition from the cladding layer to the 300M steel matrix. The heat-affected zone (HAZ) can be divided into three zones: the sufficiently quenched zone (SQZ), the insufficiently quenched zone (IQZ), and the high-tempered zone (HTZ). The SQZ consists of a martensitic matrix and bainite, while the microstructures of the IQZ and the HTZ are martensite + tempered martensite and tempered martensite + ferrite, respectively. These complicated microstructures in the HAZ are caused by the different peak heating temperatures and the heterogeneous microstructure of the as-received 300M steel.
Enhancement Approach of Object Constraint Language Generation
NASA Astrophysics Data System (ADS)
Salemi, Samin; Selamat, Ali
2018-01-01
OCL is the most prevalent language for documenting system constraints annotated in UML. Writing OCL specifications is not an easy task due to the complexity of the OCL syntax, so an approach that helps developers write OCL specifications is needed. Two such approaches exist: first, creating OCL specifications with a tool called COPACABANA; second, an MDA-based approach, implemented in a tool called NL2OCLviaSBVR, that generates OCL specifications automatically. This study presents another MDA-based approach, called En2OCL, whose objective is twofold: (1) to improve the precision of the existing works, and (2) to present a benchmark of these approaches. The benchmark shows that the accuracies of COPACABANA, NL2OCLviaSBVR, and En2OCL are 69.23, 84.64, and 88.40, respectively.
Restelli, Michela; Lopardo, Teresa; Lo Iacono, Nadia; Garaffo, Giulia; Conte, Daniele; Rustighi, Alessandra; Napoli, Marco; Del Sal, Giannino; Perez-Morga, David; Costanzo, Antonio; Merlo, Giorgio Roberto; Guerrini, Luisa
2014-01-01
Ectrodactyly, or Split-Hand/Foot Malformation (SHFM), is a congenital condition characterized by the loss of central rays of hands and feet. The p63 and the DLX5;DLX6 transcription factors, expressed in the embryonic limb buds and ectoderm, are disease genes for these conditions. Mutations of p63 also cause the ectodermal dysplasia–ectrodactyly–cleft lip/palate (EEC) syndrome, comprising SHFM. Ectrodactyly is linked to defects of the apical ectodermal ridge (AER) of the developing limb buds. FGF8 is the key signaling molecule in this process, able to direct proximo-distal growth and patterning of the skeletal primordia of the limbs. In the limb buds of both p63 and Dlx5;Dlx6 murine models of SHFM, the AER is poorly stratified and FGF8 expression is severely reduced. We show here that the FGF8 locus is a downstream target of DLX5 and that FGF8 counteracts Pin1–ΔNp63α interaction. In vivo, lack of Pin1 leads to accumulation of the p63 protein in the embryonic limbs and ectoderm. We also show that ΔNp63α protein stability is negatively regulated by the interaction with the prolyl-isomerase Pin1, via proteasome-mediated degradation; p63 mutant proteins associated with SHFM or EEC syndromes are resistant to Pin1 action. Thus, DLX5, p63, Pin1 and FGF8 participate in the same time- and location-restricted regulatory loop essential for AER stratification, hence for normal patterning and skeletal morphogenesis of the limb buds. These results shed new light on the molecular mechanisms at the basis of the SHFM and EEC limb malformations. PMID:24569166
Effects of High-Intensity Interval Exercise Training on Skeletal Myopathy of Chronic Heart Failure.
Tzanis, Georgios; Philippou, Anastassios; Karatzanos, Eleftherios; Dimopoulos, Stavros; Kaldara, Elisavet; Nana, Emmeleia; Pitsolis, Theodoros; Rontogianni, Dimitra; Koutsilieris, Michael; Nanas, Serafim
2017-01-01
It remains controversial which type of exercise elicits optimum adaptations on skeletal myopathy of heart failure (HF). Our aim was to evaluate the effect of high-intensity interval training (HIIT), with or without the addition of strength training, on skeletal muscle of HF patients. Thirteen male HF patients (age 51 ± 13 years, body mass index 27 ± 4 kg/m²) participated in either an HIIT (AER) or an HIIT combined with strength training (COM) 3-month program. Biopsy samples were obtained from the vastus lateralis. Analyses were performed on muscle fiber type, cross-section area (CSA), capillary density, and mRNA expression of insulin-like growth factor (IGF) 1 isoforms (ie, IGF-1Ea, IGF-1Eb, IGF-1Ec), type-1 receptor (IGF-1R), and binding protein 3 (IGFBP-3). Increased expression of IGF-1Ea, IGF-1Eb, IGF-1Ec, and IGFBP-3 transcripts was found (1.7 ± 0.8, 1.5 ± 0.8, 2.0 ± 1.3, and 2.4 ± 1.4 fold changes, respectively; P < .05). Type I fibers increased by 21% (42 ± 10% to 51 ± 7%; P < .001) and the capillary/fiber ratio increased by 24% (1.27 ± 0.22 to 1.57 ± 0.41; P = .005) in both groups as a whole. The fibers' mean CSA increased by 10% in total, but the increase in type I fibers' CSA was greater after AER than COM (15% vs 6%; P < .05). The increased CSA correlated with the increased expression of IGF-1Ea and IGF-1Eb. HIIT reverses skeletal myopathy of HF patients, with the adaptive responses of the IGF-1 bioregulation system possibly contributing to these effects. The AER program appeared superior to COM in inducing muscle hypertrophy. Copyright © 2016 Elsevier Inc. All rights reserved.
Drugs Associated with More Suicidal Ideations Are also Associated with More Suicide Attempts
Robertson, Henry T.; Allison, David B.
2009-01-01
Context In randomized controlled trials (RCTs), some drugs, including CB1 antagonists for obesity treatment, have been shown to cause increased suicidal ideation. A key question is whether drugs that increase or are associated with increased suicidal ideations are also associated with suicidal behavior, or whether drug-induced suicidal ideations are unlinked epiphenomena that do not presage the more troubling and potentially irrevocable outcome of suicidal behavior. This is difficult to determine in RCTs because of the rarity of suicide attempts and completions. Objective To determine whether drugs associated with more suicidal ideations are also associated with more suicide attempts in large spontaneous adverse event (AE) report databases. Methodology Generalized linear models with a negative binomial distribution were fitted to Food and Drug Administration (FDA) Adverse Event Reporting System (AERS) data from 2004 to 2008. A total of 1,404,470 AEs from 832 drugs were analyzed as a function of reports of suicidal ideations; other non-suicidal adverse reactions; drug class; proportion of reports from males; and average age of subject for which the AE was filed. Drug was treated as the unit of analysis; thus the statistical models effectively had 832 observations. Main Outcome Measures Reported suicide attempts and completed suicides per drug. Results 832 drugs, ranging from abacavir to zopiclone, were evaluated. The 832 drugs, as primary suspect drugs in a given adverse event, accounted for over 99.9% of recorded AERs. Suicidal ideations had a significant positive association with suicide attempts (p<.0001) and had an approximately 131-fold stronger magnitude of association than non-suicidal AERs, after adjusting for drug class, gender, and age. Conclusions In AE reports, drugs that are associated with increased suicidal ideations are also associated with increased suicide attempts or completions.
This association suggests that drug-induced suicidal ideations observed in RCTs plausibly presage the more serious outcomes of suicide attempts and completions, and should be a cause for concern. PMID:19798416
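The study's modeling setup treats the drug as the unit of analysis, collapsing individual AE reports into per-drug counts before regression. A minimal sketch of that aggregation step in Python (the report data and drug names below are invented for illustration; the actual analysis fitted negative-binomial GLMs to per-drug rows like these):

```python
from collections import defaultdict

# Hypothetical AE reports as (drug, event) pairs; names are illustrative only.
reports = [
    ("drug_a", "suicidal ideation"), ("drug_a", "suicide attempt"),
    ("drug_a", "nausea"), ("drug_a", "suicidal ideation"),
    ("drug_b", "nausea"), ("drug_b", "headache"), ("drug_b", "nausea"),
]

# Drug is the unit of analysis: collapse reports to per-drug counts.
counts = defaultdict(lambda: {"ideation": 0, "attempt": 0, "other": 0})
for drug, event in reports:
    if event == "suicidal ideation":
        counts[drug]["ideation"] += 1
    elif event == "suicide attempt":
        counts[drug]["attempt"] += 1
    else:
        counts[drug]["other"] += 1

# One row per drug, ready for a count-regression model
# (the study fitted negative-binomial GLMs at this level).
rows = [{"drug": d, **c, "total": sum(c.values())}
        for d, c in sorted(counts.items())]
for r in rows:
    print(r)
```

With 832 drugs, this yields 832 such rows, matching the 832 effective observations the abstract describes.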
Vieux-Rochas, Maxence; Bouhali, Kamal; Mantero, Stefano; Garaffo, Giulia; Provero, Paolo; Astigiano, Simonetta; Barbieri, Ottavia; Caratozzolo, Mariano F.; Tullo, Apollonia; Guerrini, Luisa; Lallemand, Yvan; Robert, Benoît
2013-01-01
The Dlx and Msx homeodomain transcription factors play important roles in the control of limb development. The combined disruption of Msx1 and Msx2, as well as that of Dlx5 and Dlx6, leads to limb patterning defects with anomalies in digit number and shape. Msx1;Msx2 double mutants are characterized by the loss of derivatives of the anterior limb mesoderm, which is not observed in either of the simple mutants. Dlx5;Dlx6 double mutants exhibit hindlimb ectrodactyly. While the morphogenetic action of Msx genes seems to involve the BMP molecules, the mode of action of Dlx genes still remains elusive. Here, examining the limb phenotypes of combined Dlx and Msx mutants, we reveal a new Dlx-Msx regulatory loop directly involving BMPs. In Msx1;Dlx5;Dlx6 triple mutant mice (TKO), besides the expected ectrodactyly, we also observe the hallmark morphological anomalies of Msx1;Msx2 double mutants, suggesting an epistatic role of Dlx5 and Dlx6 over Msx2. In Msx2;Dlx5;Dlx6 TKO mice we only observe an aggravation of the ectrodactyly defect, without changes in the number of the individual components of the limb. Using a combination of qPCR, ChIP and bioinformatic analyses, we identify two Dlx/Msx regulatory pathways: 1) in the anterior limb mesoderm a non-cell autonomous Msx-Dlx regulatory loop involves BMP molecules through the AER and 2) in AER cells and, at later stages, in the limb mesoderm the regulation of Msx2 by Dlx5 and Dlx6 also occurs cell autonomously. These data bring new elements to decipher the complex AER-mesoderm dialogue that takes place during limb development and provide clues to understanding the etiology of congenital limb malformations. PMID:23382810
Gañan, Y; Macias, D; Basco, R D; Merino, R; Hurle, J M
1998-04-01
The formation of the digits in amniota embryos is accompanied by apoptotic cell death of the interdigital mesoderm triggered through BMP signaling. Differences in the intensity of this apoptotic process account for the establishment of the different morphological types of feet observed in amniota (i.e., free-digits, webbed digits, lobulated digits). The molecular basis accounting for the differential pattern of interdigital cell death remains uncertain since the reduction of cell death in species with webbed digits is not accompanied by a parallel reduction in the pattern of expression of bmp genes in the interdigital regions. In this study we show that the duck interdigital web mesoderm exhibits an attenuated response to both BMP-induced apoptosis and TGFbeta-induced chondrogenesis in comparison with species with free digits. The attenuated response to these signals is accompanied by a reduced pattern of expression of msx-1 and msx-2 genes. Local application of FGF in the duck interdigit expands the domain of msx-2 expression but not the domain of msx-1 expression. This change in the expression of msx-2 is followed by a parallel increase in spontaneous and exogenous BMP-induced interdigital cell death, while the chondrogenic response to TGFbetas is unchanged. The regression of AER, as deduced by the pattern of extinction of fgf-8 expression, takes place in a similar fashion in the chick and duck regardless of the differences in interdigital cell death and msx gene expression. Implantation of BMP-beads in the distal limb mesoderm induces AER regression in both the chick and duck. This finding suggests an additional role for BMPs in the physiological regression of the AER. It is proposed that the formation of webbed vs free-digit feet in amniota results from a premature differentiation of the interdigital mesoderm into connective tissue caused by a reduced expression of msx genes in the developing autopod. Copyright 1998 Academic Press.
Vieux-Rochas, Maxence; Bouhali, Kamal; Mantero, Stefano; Garaffo, Giulia; Provero, Paolo; Astigiano, Simonetta; Barbieri, Ottavia; Caratozzolo, Mariano F; Tullo, Apollonia; Guerrini, Luisa; Lallemand, Yvan; Robert, Benoît; Levi, Giovanni; Merlo, Giorgio R
2013-01-01
The Dlx and Msx homeodomain transcription factors play important roles in the control of limb development. The combined disruption of Msx1 and Msx2, as well as that of Dlx5 and Dlx6, leads to limb patterning defects with anomalies in digit number and shape. Msx1;Msx2 double mutants are characterized by the loss of derivatives of the anterior limb mesoderm, which is not observed in either of the simple mutants. Dlx5;Dlx6 double mutants exhibit hindlimb ectrodactyly. While the morphogenetic action of Msx genes seems to involve the BMP molecules, the mode of action of Dlx genes still remains elusive. Here, examining the limb phenotypes of combined Dlx and Msx mutants, we reveal a new Dlx-Msx regulatory loop directly involving BMPs. In Msx1;Dlx5;Dlx6 triple mutant mice (TKO), besides the expected ectrodactyly, we also observe the hallmark morphological anomalies of Msx1;Msx2 double mutants, suggesting an epistatic role of Dlx5 and Dlx6 over Msx2. In Msx2;Dlx5;Dlx6 TKO mice we only observe an aggravation of the ectrodactyly defect, without changes in the number of the individual components of the limb. Using a combination of qPCR, ChIP and bioinformatic analyses, we identify two Dlx/Msx regulatory pathways: 1) in the anterior limb mesoderm a non-cell autonomous Msx-Dlx regulatory loop involves BMP molecules through the AER and 2) in AER cells and, at later stages, in the limb mesoderm the regulation of Msx2 by Dlx5 and Dlx6 also occurs cell autonomously. These data bring new elements to decipher the complex AER-mesoderm dialogue that takes place during limb development and provide clues to understanding the etiology of congenital limb malformations.
GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Blair Briggs; John D. Bess; Jim Gulliford
2011-09-01
Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (1) have increased from 442 evaluations (38000 pages), containing benchmark specifications for 3955 critical or subcritical configurations, to 516 evaluations (nearly 55000 pages), containing benchmark specifications for 4405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' (2) have increased from 16 different experimental series that were performed at 12 different reactor facilities to 53 experimental series that were performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011.
The status of the ICSBEP and the IRPhEP will be discussed in the full paper, selected benchmarks that have been added to the ICSBEP Handbook will be highlighted, and a preview of the new benchmarks that will appear in the September 2011 edition of the Handbook will be provided. Accomplishments of the IRPhEP will also be highlighted and the future of both projects will be discussed. REFERENCES (1) International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), September 2010 Edition, ISBN 978-92-64-99140-8. (2) International Handbook of Evaluated Reactor Physics Benchmark Experiments, NEA/NSC/DOC(2006)1, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), March 2011 Edition, ISBN 978-92-64-99141-5.
Benchmark Dataset for Whole Genome Sequence Compression.
C L, Biji; S Nair, Achuthsankar
2017-01-01
The research in DNA data compression lacks a standard dataset for testing compression tools specific to DNA. This paper argues that the current state of achievement in DNA compression cannot be benchmarked in the absence of a scientifically compiled whole genome sequence dataset, and proposes a benchmark dataset built using a multistage sampling procedure. Considering the genome sequences of organisms available in the National Center for Biotechnology Information (NCBI) as the universe, the proposed dataset selects 1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes. This paper reports the results of using three established tools on the newly compiled dataset and shows that their strengths and weaknesses become evident only through a comparison based on the scientifically compiled benchmark dataset. The sample dataset and the respective links are available @ https://sourceforge.net/projects/benchmarkdnacompressiondataset/.
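The multistage selection above can be sketched as a stratified draw over the catalog. In this hedged illustration, the per-group catalog sizes are invented; only the target counts (1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes) come from the abstract, and simple random sampling stands in for whatever second-stage design the paper actually used:

```python
import random

# Hypothetical catalog of sequence IDs per organism group (the "universe");
# the group sizes here are invented for the sketch.
universe = {
    "prokaryotes": [f"prok_{i}" for i in range(5000)],
    "plasmids":    [f"plas_{i}" for i in range(900)],
    "viruses":     [f"vir_{i}"  for i in range(700)],
    "eukaryotes":  [f"euk_{i}"  for i in range(300)],
}
# Target counts per stratum, as reported in the abstract.
targets = {"prokaryotes": 1105, "plasmids": 200, "viruses": 164, "eukaryotes": 65}

random.seed(0)
# Stage 1: stratify by organism group; stage 2: simple random sample per stratum.
benchmark = {group: random.sample(members, targets[group])
             for group, members in universe.items()}

total = sum(len(v) for v in benchmark.values())
print(total)  # 1534 sequences in total
```

Stratifying first guarantees that each organism group is represented in its target proportion, regardless of how unevenly the universe itself is distributed.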
Benchmarking nitrogen removal suspended-carrier biofilm systems using dynamic simulation.
Vanhooren, H; Yuan, Z; Vanrolleghem, P A
2002-01-01
We are witnessing an enormous growth in biological nitrogen removal from wastewater. It presents specific challenges beyond traditional COD (carbon) removal. One possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, which were qualitatively described elsewhere, are validated quantitatively based on a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows for fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of such a system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. In this evaluation, effluent quality is integrated as well.
ERIC Educational Resources Information Center
Thomas, Allyson
2015-01-01
This study examined the relationship between student engagement and graduation rates between Black/African American students and White (Non-Hispanic) students in their senior year at faith-based institutions in the southeastern region of the United States using the NSSE benchmarks of effective educational practices. Specifically, scores from the…
A Privacy-Preserving Platform for User-Centric Quantitative Benchmarking
NASA Astrophysics Data System (ADS)
Herrmann, Dominik; Scheuer, Florian; Feustel, Philipp; Nowey, Thomas; Federrath, Hannes
We propose a centralised platform for quantitative benchmarking of key performance indicators (KPI) among mutually distrustful organisations. Our platform offers users the opportunity to request an ad-hoc benchmarking for a specific KPI within a peer group of their choice. Architecture and protocol are designed to provide anonymity to its users and to hide the sensitive KPI values from other clients and the central server. To this end, we integrate user-centric peer group formation, exchangeable secure multi-party computation protocols, short-lived ephemeral key pairs as pseudonyms, and attribute certificates. We show by empirical evaluation of a prototype that the performance is acceptable for reasonably sized peer groups.
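The exchangeable secure multi-party computation layer described above can be illustrated with one standard instance, additive secret sharing, in which the server recovers only the aggregate KPI and never an individual value. A minimal sketch (the protocol choice, modulus, and KPI values are illustrative assumptions, not the platform's actual design):

```python
import random

P = 2**61 - 1  # public modulus; all shares live in Z_P

def share(value, n):
    """Split an integer KPI value into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Hypothetical peer group: three organisations with private KPI values.
kpis = [42, 87, 63]
n = len(kpis)

# Each participant sends one share to each peer; each peer forwards only
# the sum of the shares it received, so no single party sees a raw KPI.
all_shares = [share(v, n) for v in kpis]
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]

# The central server recovers only the aggregate, never an individual KPI.
aggregate = sum(partial_sums) % P
print(aggregate / n)  # mean KPI of the peer group: 64.0
```

Because each participant's shares are uniformly random apart from their sum, any proper subset of shares reveals nothing about that participant's KPI, which matches the platform's goal of hiding sensitive values from both other clients and the central server.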
Child-Resistant Packaging for E-Liquid: A Review of US State Legislation.
Frey, Leslie T; Tilburg, William C
2016-02-01
A growing number of states have introduced or enacted legislation requiring child-resistant packaging for e-liquid containers; however, these laws involve varying terms, packaging standards, and enforcement provisions, raising concerns about their effectiveness. We evaluated bills against 4 benchmarks: broad product definitions that contemplate future developments in the market, citations to a specific packaging standard, stated penalties for violations, and express grants of authority to a state entity to enforce the packaging requirements. Our findings showed that 3 states meet all 4 benchmarks in their enacted legislation. We encourage states to consider these benchmarks when revising statutes or drafting future legislation.
Child-Resistant Packaging for E-Liquid: A Review of US State Legislation
Tilburg, William C.
2016-01-01
A growing number of states have introduced or enacted legislation requiring child-resistant packaging for e-liquid containers; however, these laws involve varying terms, packaging standards, and enforcement provisions, raising concerns about their effectiveness. We evaluated bills against 4 benchmarks: broad product definitions that contemplate future developments in the market, citations to a specific packaging standard, stated penalties for violations, and express grants of authority to a state entity to enforce the packaging requirements. Our findings showed that 3 states meet all 4 benchmarks in their enacted legislation. We encourage states to consider these benchmarks when revising statutes or drafting future legislation. PMID:26691114
NASA Astrophysics Data System (ADS)
Jacques, Diederik
2017-04-01
As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models of these interacting processes are needed. Coupled reactive transport models are a typical example of such tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and the specific conceptual model can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence, and for qualifying simulation tools and approaches for any further model application. In 2011, the first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3: Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues. Benchmarks defined for purely mathematical reasons were excluded. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different sub-problems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes.
Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.
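To give a flavor of the simplest tier of sub-problems (inert solute transport in a homogeneous one-dimensional geometry), a first-order upwind advection scheme can be sketched as follows. The grid, velocity, and inlet condition are invented for illustration and do not correspond to any specific SeSBench benchmark:

```python
# Inert (non-reactive) 1D solute transport by pure advection, solved with
# an explicit first-order upwind scheme. All parameters are illustrative.
nx, dx, dt, v = 100, 1.0, 0.5, 1.0   # cells, m, s, m/s (CFL = v*dt/dx = 0.5)
c = [0.0] * nx                        # initial concentration: clean domain
c_in = 1.0                            # constant-concentration inlet boundary

for _ in range(100):                  # 50 s of simulated time
    new = c[:]
    new[0] = c[0] - v * dt / dx * (c[0] - c_in)
    for i in range(1, nx):
        new[i] = c[i] - v * dt / dx * (c[i] - c[i - 1])
    c = new

# The concentration front has advected roughly v*t = 50 m, smeared by the
# scheme's numerical dispersion (a typical point of comparison between codes).
print(round(c[0], 3), round(c[40], 3), round(c[90], 3))
```

Comparing such a simplified sub-problem across at least three codes, before attempting the coupled principal problem, is exactly the tiered verification strategy the workshops promote.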
SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowenstein, J; Nguyen, H; Roll, J
Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment plan that meets a specific protocol's treatment guidelines prior to placing its first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol, in which the dosimetry and contour volumes are evaluated for compliance with protocol guidelines prior to allowing the beginning of the treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and to prevent protocol deviations in patients accrued to a protocol. For both benchmarks and pre-treatment reviews, a dose volume analysis (DVA) was performed using MIM software™. For pre-treatment reviews, a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites that failed the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, 30 percent of the submitted cases failed the pre-treatment review, and the majority of these (71 percent) failed the DVA. Twenty percent of sites submitting patients failed to correct the dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases, it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803.
The Arts and Australian Education: Realising Potential. Australian Education Review No. 58
ERIC Educational Resources Information Center
Ewing, Robyn
2011-01-01
Australian Education Review (AER) 58 surveys the international and national research on the role and effect of arts-rich programming in schools and in the broader community, and examines the policies and practices that inhibit or support these initiatives. It puts the case that embedding the Arts in learning would be a powerful catalyst for…
Research in Modeling and Simulation for Airspace Systems Innovation
NASA Technical Reports Server (NTRS)
Ballin, Mark G.; Kimmel, William M.; Welch, Sharon S.
2007-01-01
This viewgraph presentation provides an overview of some of the applied research and simulation methodologies at the NASA Langley Research Center that support aerospace systems innovation. Risk assessment methodologies, complex systems design and analysis methodologies, and aerospace operations simulations are described. Potential areas for future research and collaboration using interactive and distributed simulations are also proposed.
A low-cost, scalable, current-sensing digital headstage for high channel count μECoG.
Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Sertac Artan, N; Froemke, Robert C; Viventi, Jonathan
2017-04-01
High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, lightweight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals was not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 d. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field.
These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but also resulted in a small, light headstage that can easily be scaled to record from hundreds of channels.
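The impedance-based reconstruction described in this record can be illustrated with a toy model. Assuming, purely for illustration, a series resistance-capacitance electrode (the paper's actual fitted impedance model and parameter values are not reproduced here), the sensed current and cortical potential are related by v = R*i + q/C, with q the charge accumulated on the electrode capacitance, and the same relation can be inverted to recover the potential from the current recording:

```python
import math

# Illustrative series R-C electrode model; R, C, and the test signal are
# assumptions for this sketch, not values from the paper.
R, C, fs = 1e5, 1e-8, 1000.0       # ohms, farads, samples/s
dt = 1.0 / fs

# Synthetic "cortical potential": 10 Hz sinusoid, 1 mV amplitude.
v_true = [1e-3 * math.sin(2 * math.pi * 10 * n * dt) for n in range(1000)]

# Forward model: the current the headstage would sense, with q the
# charge accumulated on the electrode capacitance.
q, i_sensed = 0.0, []
for v in v_true:
    i = (v - q / C) / R            # series R-C: v = R*i + q/C
    q += i * dt
    i_sensed.append(i)

# Reconstruction filter: invert the same model to recover the potential
# from the current recording alone.
q, v_rec = 0.0, []
for i in i_sensed:
    v_rec.append(R * i + q / C)
    q += i * dt

err = max(abs(a - b) for a, b in zip(v_true, v_rec))
print(err)  # numerically negligible for this noise-free toy model
```

In the real system the reconstruction additionally has to contend with thermal current noise of the electrode impedance, which is why the paper's filter also improved overall SNR rather than merely inverting the model.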
A low-cost, scalable, current-sensing digital headstage for high channel count μECoG
NASA Astrophysics Data System (ADS)
Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Sertac Artan, N.; Froemke, Robert C.; Viventi, Jonathan
2017-04-01
Objective. High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, lightweight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. Approach. We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. Main results. The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals was not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 d. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. Significance. We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field.
These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but also resulted in a small, light headstage that can easily be scaled to record from hundreds of channels.
A low-cost, scalable, current-sensing digital headstage for high channel count μECoG
Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Artan, N. Sertac; Froemke, Robert C.; Viventi, Jonathan
2017-01-01
Objective High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, lightweight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. Approach We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. Main results The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals was not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 days. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. Significance We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field.
These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but also resulted in a small, light headstage that can easily be scaled to record from hundreds of channels. PMID:28102827
Hammad, Ahmed M; Moustafa, Alaa-Eldin H; Mansour, Maha M; Fahmy, Bashier M; Hamada, Mohamed G; Shimamoto, Toshi; Shimamoto, Tadashi
2018-06-01
The aim of this study was to determine the public health significance of hemolytic Aeromonas species isolated from 213 food samples in Egypt, based on their virulence and antimicrobial-resistance potential. We recovered 63 strains, isolated from fish, raw milk, karish cheeses, and ras cheese in 29 (31.18%) of 93, 10 (25.00%) of 40, 13 (32.50%) of 40, and 11 (27.50%) of 40 samples, respectively. The most prevalent virulence gene was alt (50.79%), followed by asa1 (39.68%), aerA (34.92%), ahh1 (20.63%), act (11.11%), and ast (3.17%). Thirteen strains screened in this study carried no hemolysin gene, but only the alt gene, and another eight hemolytic strains screened carried no virulence gene. The virulence signatures "ahh1 + aerA" and "alt + act," in which the genes interact synergistically to induce severe diarrhea, were detected in two and four strains, respectively. Most showed resistance to third-generation cephalosporins, aztreonam, and imipenem, which indicates the complexity of β-lactamase production in our hemolytic Aeromonas strains. Fourteen (22.22%) of 63 strains carried one or more antimicrobial-resistance markers, including the blaCTX-M, blaTEM, tet(A), tet(E), and intI1 genes, which were detected in 6.34, 3.17, 3.17, 4.76, and 14.28% of isolates, respectively. In conclusion, the majority of hemolytic Aeromonas strains isolated from the intestinal contents of healthy fish and naturally contaminated milk and cheeses were not commensal but had developed multidrug-resistance and virulence profiles, indicating an emerging potential health risk. Importantly, screening for certain hemolysin genes may not be reliable in predicting the pathogenic potential of Aeromonas species and, thereby, the safety of analyzed foods. Our findings indicate that specific criteria are required for the phenotypic and molecular analysis of Aeromonas species in food items, particularly those eaten without further treatment, to ensure their safety.
Becic, Tina; Kero, Darko; Vukojevic, Katarina; Mardesic, Snjezana; Saraga-Babic, Mirna
2018-04-01
The expression patterns of the fibroblast growth factors FGF8 and FGF2 and their receptor FGFR1, the transcription factors MSX-1 and MSX-2, as well as the cell-proliferation marker Ki-67 and the cell-death-associated factors caspase-3, p19 and RIP5, were analyzed in histological sections of eight 4th-9th-week developing human limbs by immunohistochemistry and semi-thin sectioning. Increasing expression of all analyzed factors (except FGF8) characterized the multilayered human apical ectodermal ridge (AER), the sub-ridge mesenchyme (progress zone) and chondrocytes in developing human limbs. While cytoplasmic co-expression of MSX-1 and MSX-2 was observed in both limb epithelium and mesenchyme, p19 displayed strong cytoplasmic expression in non-proliferating cells. Nuclear expression of Ki-67 in proliferating cells, and partly of MSX-1 and MSX-2, was detected in the whole limb primordium. Strong expression of the factors p19 and RIP5, both in the AER and in the mesenchyme of human developing limbs, indicates their possible involvement in the control of cell senescence and cell death. In contrast to animal studies, expression of FGFR1 in the surface ectoderm and of p19 in the whole limb primordium might reflect interspecies differences in limb morphology. Expression of FGF2 and the downstream RIP5 gene, and of the transcription factors MSX-1 and MSX-2, did not show human-specific changes in expression pattern. Based on their spatio-temporal expression during human limb development, our study indicates a role of the FGFs and Msx genes in the stimulation of cell proliferation, limb outgrowth, digit elongation and separation, and additionally of MSX-2 in the control of vasculogenesis. The cascade of orchestrated gene expressions, including the analyzed developmental factors, jointly contributes to the complex development of the human limb. Copyright © 2018 Elsevier GmbH. All rights reserved.
Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D
2015-10-08
Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.
Benchmarking: a method for continuous quality improvement in health.
Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe
2012-05-01
Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Development of risk-based nanomaterial groups for occupational exposure control
NASA Astrophysics Data System (ADS)
Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.
2012-09-01
Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance using image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.
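The optimization loop this abstract describes — a genetic algorithm over integer system parameters, minimizing the gap between a modeled task-difficulty measure and the benchmark difficulty — can be sketched in a few lines. The sketch below is purely illustrative: `task_difficulty` and `BENCHMARK_DIFFICULTY` are hypothetical stand-ins for the NVThermIP model used in the paper, not its actual interface.

```python
import random

def genetic_minimize(objective, bounds, pop_size=30, generations=60,
                     mutation_rate=0.2, seed=0):
    """Minimize `objective` over integer parameter vectors within `bounds`.

    bounds: list of (lo, hi) inclusive integer ranges, one per parameter.
    Returns (best_params, best_score).
    """
    rng = random.Random(seed)

    def random_individual():
        return [rng.randint(lo, hi) for lo, hi in bounds]

    def mutate(ind):
        # Redraw each gene with probability `mutation_rate`.
        return [rng.randint(lo, hi) if rng.random() < mutation_rate else g
                for g, (lo, hi) in zip(ind, bounds)]

    def crossover(a, b):
        cut = rng.randrange(1, len(a))
        return a[:cut] + b[cut:]

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        elite = scored[: pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            children.append(mutate(crossover(rng.choice(elite),
                                             rng.choice(elite))))
        pop = elite + children
    best = min(pop, key=objective)
    return best, objective(best)

# Hypothetical stand-in for the NVThermIP-based task-difficulty model:
BENCHMARK_DIFFICULTY = 12.0

def task_difficulty(params):
    return sum(p * p for p in params) ** 0.5     # toy difficulty measure

def objective(params):
    # Distance between modeled and benchmark task difficulty.
    return abs(task_difficulty(params) - BENCHMARK_DIFFICULTY)

best, score = genetic_minimize(objective, bounds=[(0, 20)] * 3)
print(best, round(score, 3))
```

With a real image-chain model plugged in as `objective`, the same elitist loop "looks backwards" through the acquisition process exactly as the abstract describes.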
Continuous quality improvement for the clinical decision unit.
Mace, Sharon E
2004-01-01
Clinical decision units (CDUs) are a relatively new and growing area of medicine in which patients undergo rapid evaluation and treatment. Continuous quality improvement (CQI) is important for the establishment and functioning of CDUs. CQI in CDUs has many advantages: better CDU functioning, fulfillment of Joint Commission on Accreditation of Healthcare Organizations mandates, greater efficiency/productivity, increased job satisfaction, better performance improvement, data availability, and benchmarking. Key elements include a database with volume indicators, operational policies, clinical practice protocols (diagnosis specific/condition specific), monitors, benchmarks, and clinical pathways. Examples of these important parameters are given. The CQI process should be individualized for each CDU and hospital.
Closed-Loop Neuromorphic Benchmarks
Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris
2015-01-01
Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
Stratification of unresponsive patients by an independently validated index of brain complexity
Casarotto, Silvia; Comanducci, Angela; Rosanova, Mario; Sarasso, Simone; Fecchio, Matteo; Napolitani, Martino; Pigorini, Andrea; G. Casali, Adenauer; Trimarchi, Pietro D.; Boly, Melanie; Gosseries, Olivia; Bodart, Olivier; Curto, Francesco; Landi, Cristina; Mariotti, Maurizio; Devalle, Guya; Laureys, Steven; Tononi, Giulio
2016-01-01
Objective Validating objective, brain‐based indices of consciousness in behaviorally unresponsive patients represents a challenge due to the impossibility of obtaining independent evidence through subjective reports. Here we address this problem by first validating a promising metric of consciousness—the Perturbational Complexity Index (PCI)—in a benchmark population who could confirm the presence or absence of consciousness through subjective reports, and then applying the same index to patients with disorders of consciousness (DOCs). Methods The benchmark population encompassed 150 healthy controls and communicative brain‐injured subjects in various states of conscious wakefulness, disconnected consciousness, and unconsciousness. Receiver operating characteristic curve analysis was performed to define an optimal cutoff for discriminating between the conscious and unconscious conditions. This cutoff was then applied to a cohort of noncommunicative DOC patients (38 in a minimally conscious state [MCS] and 43 in a vegetative state [VS]). Results We found an empirical cutoff that discriminated with 100% sensitivity and specificity between the conscious and the unconscious conditions in the benchmark population. This cutoff resulted in a sensitivity of 94.7% in detecting MCS and allowed the identification of a number of unresponsive VS patients (9 of 43) with high values of PCI, overlapping with the distribution of the benchmark conscious condition. Interpretation Given its high sensitivity and specificity in the benchmark and MCS population, PCI offers a reliable, independently validated stratification of unresponsive patients that has important physiopathological and therapeutic implications. In particular, the high‐PCI subgroup of VS patients may retain a capacity for consciousness that is not expressed in behavior. Ann Neurol 2016;80:718–729 PMID:27717082
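The cutoff-selection step in this study is a standard ROC analysis: scan candidate thresholds and pick the one best separating the conscious from the unconscious condition. A minimal sketch follows; the score lists are made-up stand-ins for PCI values, not data from the study, and the criterion shown (Youden's J) is one common choice of ROC optimum.

```python
def best_cutoff(scores_conscious, scores_unconscious):
    """Pick the threshold maximizing sensitivity + specificity (Youden's J)."""
    candidates = sorted(set(scores_conscious) | set(scores_unconscious))
    best_t, best_j = None, -1.0
    for t in candidates:
        # Sensitivity: conscious cases at or above the cutoff.
        sens = sum(s >= t for s in scores_conscious) / len(scores_conscious)
        # Specificity: unconscious cases below the cutoff.
        spec = sum(s < t for s in scores_unconscious) / len(scores_unconscious)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy, fully separable PCI-like values (hypothetical, for illustration):
conscious = [0.44, 0.51, 0.62, 0.58]
unconscious = [0.12, 0.20, 0.25, 0.30]
t, j = best_cutoff(conscious, unconscious)
print(t, j)   # cutoff 0.44 separates these toy groups perfectly (J = 1.0)
```

When the two distributions do not overlap, as in the benchmark population reported here, such a cutoff attains 100% sensitivity and specificity; it can then be applied unchanged to the noncommunicative patient cohort.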
Talaminos-Barroso, Alejandro; Estudillo-Valderrama, Miguel A; Roa, Laura M; Reina-Tosina, Javier; Ortega-Ruiz, Francisco
2016-06-01
M2M (Machine-to-Machine) communications represent one of the main pillars of the new paradigm of the Internet of Things (IoT), and are making possible new opportunities for the eHealth business. Nevertheless, the large number of M2M protocols currently available hinders the selection of a suitable solution that satisfies the requirements that eHealth applications can demand. The first objective was to develop a tool that provides a benchmarking analysis in order to objectively select among the most relevant M2M protocols for eHealth solutions; the second, to validate the tool with a particular use case: respiratory rehabilitation. A software tool, called Distributed Computing Framework (DFC), was designed and developed to execute the benchmarking tests and facilitate deployment in environments with a large number of machines, independently of the protocol and performance metrics selected. The DDS, MQTT, CoAP, JMS, AMQP and XMPP protocols were evaluated considering different specific performance metrics, including CPU usage, memory usage, bandwidth consumption, latency and jitter. The results obtained allowed us to validate a use case: respiratory rehabilitation of chronic obstructive pulmonary disease (COPD) patients in two scenarios with different types of requirements: Home-Based and Ambulatory. The results of the benchmark comparison can guide eHealth developers in the choice of M2M technologies. In this regard, the framework presented is a simple and powerful tool for the deployment of benchmark tests under specific environments and conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rimov, A. A.; Chukanova, T. I.; Trofimov, Yu. V.
2016-12-01
Data on the benchmarking variants used to compare the quality of power installations in the power industry are systematized. It is shown that the most efficient implementation of the benchmarking technique is the analysis of statistical distributions of the indicators within a composed homogeneous group of uniform power installations. Building on this approach, a benchmarking technique is developed that is aimed at revealing the available reserves for improving the reliability and heat-efficiency indicators of thermal power plant installations. The technique makes it possible to reliably compare the quality of power installations within their homogeneous group, limited in number, and to make a well-founded decision on improving particular technical characteristics of a given installation. It structures the list of comparison indicators and the internal factors affecting them, represented according to the requirements of the sectoral standards and taking into account price-formation characteristics in the Russian power industry; this structuring ensures traceability of the reasons for deviations of the internal influencing factors from their specified values. The starting point for a further detailed analysis of a given installation's lag behind best practice, expressed in specific monetary terms, is the positioning of that installation on the distribution of the key indicator, a convolution of the comparison indicators. The distribution of the key indicator is simulated by the Monte Carlo method from the actual distributions of the comparison indicators: specific lost profit due to short supply of electric energy and short delivery of power, specific cost of losses due to non-optimal repair expenditures, and specific cost of excess fuel-equivalent consumption.
Quality-loss indicators are developed to facilitate the analysis of the benchmarking results; they represent the quality loss of a given power installation as the difference between the actual value of the key indicator (or a comparison indicator) and the best quartile of the existing distribution. The uncertainty of the obtained quality-loss values was evaluated by transforming the standard uncertainties of the input values into expanded uncertainties of the output values at a 95% confidence level. The efficiency of the technique is demonstrated by benchmarking the main thermal and mechanical equipment of T-250 extraction power-generating units and thermal power plant installations with a main steam pressure of 130 atm.
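The Monte-Carlo step described above — simulating the key indicator as a combination of sampled comparison indicators, then positioning one installation against the best quartile — can be sketched as follows. The distributions, weights, and the unit's actual value are hypothetical placeholders, not figures from the study.

```python
import random

def simulate_key_indicator(samplers, weights, n=10000, seed=1):
    """Monte-Carlo distribution of a key indicator formed as a weighted
    combination (convolution) of independently sampled comparison indicators."""
    rng = random.Random(seed)
    return [sum(w * s(rng) for w, s in zip(weights, samplers))
            for _ in range(n)]

# Hypothetical comparison-indicator distributions (specific money equivalent):
samplers = [
    lambda rng: rng.gauss(5.0, 1.0),   # lost profit from short supply
    lambda rng: rng.gauss(3.0, 0.5),   # non-optimal repair expenditures
    lambda rng: rng.gauss(2.0, 0.4),   # excess fuel-equivalent consumption
]
key = simulate_key_indicator(samplers, weights=[1.0, 1.0, 1.0])
key.sort()
best_quartile = key[len(key) // 4]     # 25th percentile of the simulation
unit_value = 11.2                      # this installation's actual key indicator
quality_loss = unit_value - best_quartile
print(round(best_quartile, 2), round(quality_loss, 2))
```

A positive `quality_loss` is the gap between the installation and the best quartile of its homogeneous group, which is exactly the quality-loss indicator defined in the abstract.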
INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Javier Ortensi; Sonat Sen
2013-09-01
The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to simulate accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes.
A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.
Touching the Future: Building Skills for Life and Work. Australian Education Review No. 55
ERIC Educational Resources Information Center
Wyn, Johanna
2009-01-01
Australian Education Review (AER) 55 explores the goals of Australian education and of how schools should prepare young people for work and life. Section 1 provides an overview, discussing the nature of broad social and economic changes over the last 20 years, and their implications for the goals of Australian education systems today and for the…
Tropical Cyclone Report, 1986.
1986-01-01
A Study of the Radio-Frequency Radiation from the Sun
1949-07-13
NASA Astrophysics Data System (ADS)
Trisna, B. N.; Budayasa, I. K.; Siswono, T. Y. E.
2018-01-01
Metacognition is related to improving student learning outcomes. This study describes students’ metacognitive activities in solving a combinatorics problem. Two undergraduate students of mathematics education from STKIP PGRI Banjarmasin were selected as the participants of the study; one has a holist cognitive style and the other a serialist. Data were collected through task-based interviews in which the task contained a combinatorial problem. The interviews were conducted twice, using equivalent problems at two different times. The study found that the participants showed metacognitive awareness (A), metacognitive evaluation (E), and metacognitive regulation (R) operating as pathways from one function to another. Both the holist and the serialist engaged in metacognitive activities, but along different pathways. The path of metacognitive activities of the holist is AERCAE-AAEER-ACRECCECC-AREERCE with the AERAE-AER-ARE-ARERE pattern, while the path of metacognitive activities of the serialist is AERCA-AAER-ACRERCERC-AREEEE with the AERA-AER-ARERER-ARE pattern. As an implication of these findings, teachers/lecturers need to pay attention to metacognitive awareness when they begin a stage in mathematical problem solving. Teachers/lecturers need to emphasize to students that in mathematical problem solving, processes and results are equally important.
First evidence of subclinical renal tubular injury during sickle-cell crisis.
Audard, Vincent; Moutereau, Stéphane; Vandemelebrouck, Gaetana; Habibi, Anoosha; Khellaf, Mehdi; Grimbert, Philippe; Levy, Yves; Loric, Sylvain; Renaud, Bertrand; Lang, Philippe; Godeau, Bertrand; Galactéros, Frédéric; Bartolucci, Pablo
2014-04-29
The pathophysiologic mechanisms classically involved in sickle-cell nephropathy include endothelial dysfunction and vascular occlusion. Arguments demonstrating that ischemia-reperfusion injury-related kidney damage might coincide with vaso-occlusive crisis (VOC) are lacking. In this prospective study, we sought to determine whether tubular cells and glomerular permeability might be altered during VOC. Urine neutrophil gelatinase-associated lipocalin (NGAL) levels and albumin-excretion rates (AER) of 25 patients were evaluated prospectively during 25 VOC episodes and compared to their steady state (ST) values. During VOC, white blood-cell counts (WBC) and C-reactive protein (CRP) were significantly higher than at ST but creatinine levels were comparable. Urine NGAL levels were significantly increased during VOC vs ST (P = 0.007) and remained significant when normalized to urine creatinine (P = 0.004), while AER did not change significantly. The higher urine NGAL concentration was not associated with subsequent (24-48 hour) acute kidney injury. Univariate analysis identified no significant correlations between urine NGAL levels and laboratory parameters during VOC. These results demonstrated that subclinical ischemia-reperfusion tubular injury is common during VOC and highlight the importance of hydroelectrolyte monitoring and correction during VOC.
Wan, J; Wilcock, A; Coventry, M J
1998-02-01
Basil essential oils, including basil sweet linalool (BSL) and basil methyl chavicol (BMC), were screened for antimicrobial activity against a range of Gram-positive and Gram-negative bacteria, yeasts and moulds using an agar well diffusion method. Both essential oils showed antimicrobial activity against most of the micro-organisms examined except Clostridium sporogenes, Flavimonas oryzihabitans, and three species of Pseudomonas. The minimum inhibitory concentration (MIC) of BMC against Aeromonas hydrophila and Pseudomonas fluorescens in TSYE broth (as determined using an indirect impedance method) was 0.125 and 2% (v/v), respectively; the former was not greatly affected by the increase of the challenge inoculum from 10^3 to 10^6 cfu ml^-1. Results with resting cells demonstrated that BMC was bactericidal to both Aer. hydrophila and Ps. fluorescens. The growth of Aer. hydrophila in filter-sterilized lettuce extract was completely inhibited by 0.1% (v/v) BMC whereas that of Ps. fluorescens was not significantly affected by 1% (v/v) BMC. In addition, the effectiveness of washing fresh lettuce with 0.1 or 1% (v/v) BMC on survival of the natural microbial flora was comparable with that effected by 125 ppm chlorine.
Usein, C R; Damian, M; Tatu-Chitoiu, D; Capusa, C; Fagaras, R; Tudorache, D; Nica, M; Le Bouguénec, C
2001-01-01
A total of 78 E. coli strains isolated from adults with different types of urinary tract infections were screened by polymerase chain reaction for the prevalence of genetic regions coding for virulence factors. The targeted genetic determinants were those coding for type 1 fimbriae (fimH), pili associated with pyelonephritis (pap), S and F1C fimbriae (sfa and foc), afimbrial adhesins (afa), hemolysin (hly), cytotoxic necrotizing factor (cnf), and aerobactin (aer). Among the studied strains, the prevalence of genes coding for fimbrial adhesive systems was 86%, 36%, and 23% for fimH, pap, and sfa/foc, respectively. The operons coding for Afa afimbrial adhesins were identified in 14% of strains. The hly and cnf genes coding for toxins were amplified in 23% and 13% of strains, respectively. A prevalence of 54% was found for the aer gene. The various combinations of detected genes were designated as virulence patterns. The strains isolated from the hospitalized patients displayed a greater number of virulence genes and a diversity of gene associations compared to the strains isolated from the ambulatory subjects. A rapid assessment of the bacterial pathogenicity characteristics may contribute to a better medical approach for patients with urinary tract infections.
Bailey, Steven; Singh, Ajay; Azadian, Robert; Huber, Peter; Blum, Michael
2010-02-01
The use of data mining has increased among regulators and pharmaceutical companies. The incremental value of data mining as an adjunct to traditional pharmacovigilance methods has yet to be demonstrated. Specifically, the utility in identifying new safety signals and the resources required to do so have not been elucidated. To analyse the number and types of disproportionately reported product-event combinations (DRPECs), as well as the final disposition of each, in order to understand the potential utility and resource implications of routinely conducting data mining in the US FDA Adverse Event Reporting System (AERS). We generated DRPECs from AERS for six of Wyeth's products, prospectively tracked their dispositions and evaluated the appropriate DRPECs in the company's safety database. We chose EB05 (the lower bound of the 90% confidence interval around the Empirical Bayes Geometric Mean) ≥2 as the appropriate metric, employing stratification based on age, sex and year of report. A total of 861 DRPECs were identified - the average number of DRPECs was 144 per product. The proportion of unique preferred terms (PTs) in AERS for each drug with an EB05 ≥2 was similar across the six products (5.1-8.5%). Overall, 64.0% (551) of the DRPECs were closed after the initial screening (44.8% labelled, 14.3% indication related, 4.9% non-interpretable). An additional 9.9% (85) had been reviewed within the prior year and were not further reviewed. The remaining 26.1% (225) required full case review. After review of all pertinent reports and additional data, it was determined which of the DRPECs necessitated a formal review by the company's ongoing Safety Review Team (SRT) process. In total, 3.6% (31/861) of the DRPECs, yielding 16 medical concepts, were reviewed by the SRT, leading to seven labelling changes. These labelling changes involved 1.9% of all DRPECs generated. Four of the six compounds reviewed as part of this pilot had an identified labelling change.
The workload required for this pilot, driven primarily by the DRPECs requiring review, was extensive, averaging 184 hours per product. The number of DRPECs identified for each drug approximately correlated with the number of unique PTs in the database. Over one-half of DRPECs were either labelled as per the company's reference safety information (RSI) or were already under review after identification by traditional pharmacovigilance activities, suggesting that, for marketed products, data mining largely flags adverse events also detected by traditional pharmacovigilance methods. Approximately three-quarters of the 861 DRPECs identified were closed without case review after triage. Of the approximately one-quarter of DRPECs that required formal case review, seven resulted in an addition to the RSI for the relevant products. While this pilot does not allow us to comment on the utility of routine data mining for all products, it is significant that several new safety concepts were identified through this prospective exercise.
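The screen-then-triage workflow described above can be sketched as follows. This is a minimal illustration only: the EB05 values, event names, and disposition labels are invented, and the real EB05 metric comes from a full Empirical Bayes (MGPS) fit with stratification, not from this toy filter.

```python
# Sketch of the DRPEC triage described above: keep only product-event
# combinations whose EB05 (lower bound of the 90% CI around the Empirical
# Bayes Geometric Mean) meets the >= 2 threshold, then tally triage
# dispositions. All data below are illustrative, not the study's.

EB05_THRESHOLD = 2.0

def triage(combinations):
    """Filter combinations by EB05 and tally triage dispositions."""
    flagged = [c for c in combinations if c["eb05"] >= EB05_THRESHOLD]
    counts = {}
    for c in flagged:
        counts[c["disposition"]] = counts.get(c["disposition"], 0) + 1
    return flagged, counts

combos = [
    {"event": "nausea",    "eb05": 3.1, "disposition": "labelled"},
    {"event": "dizziness", "eb05": 2.4, "disposition": "indication-related"},
    {"event": "rash",      "eb05": 1.3, "disposition": "labelled"},  # below threshold
    {"event": "hepatitis", "eb05": 5.0, "disposition": "full review"},
]

flagged, counts = triage(combos)
print(len(flagged))  # 3 combinations meet the EB05 >= 2 screen
print(counts)
```

In the study, this kind of screen closed 64% of combinations immediately; only the "full review" bucket drove the 184 hours per product of workload.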
Metastable Autoionizing States of Molecules and Radicals in Highly Energetic Environment
2016-03-22
electronic states. The specific aims are to develop and calibrate complex-scaled equation-of-motion coupled cluster (cs-EOM-CC) and CAP (complex absorbing potential) augmented EOM-CC methods. We have implemented and benchmarked cs-EOM-CCSD and CAP-augmented EOM-CCSD methods for excitation energies...
The US EPA National Center for Environmental Assessment has developed a methodology to derive acute inhalation toxicity benchmarks, called acute reference exposures (AREs), for noncancer effects. The methodology provides guidance for the derivation of chemical-specific benchmark...
Benchmarking: A Method for Continuous Quality Improvement in Health
Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe
2012-01-01
Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166
Hatfield, Mark D; Ashton, Carol M; Bass, Barbara L; Shirkey, Beverly A
2016-02-01
Methods to assess a surgeon's individual performance based on clinically meaningful outcomes have not been fully developed, due to small numbers of adverse outcomes and wide variation in case volumes. The Achievable Benchmark of Care (ABC) method addresses these issues by identifying benchmark-setting surgeons with high levels of performance and greater case volumes. This method was used to help surgeons compare their surgical practice to that of their peers by using merged National Surgical Quality Improvement Program (NSQIP) and Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) data to generate surgeon-specific reports. A retrospective cohort study at a single institution's department of surgery was conducted involving 107 surgeons (8,660 cases) over 5.5 years. Stratification of more than 32,000 CPT codes into 16 CPT clusters served as the risk adjustment. Thirty-day outcomes of interest included surgical site infection (SSI), acute kidney injury (AKI), and mortality. Performance characteristics of the ABC method were explored by examining how many surgeons were identified as benchmark-setters in view of volume and outcome rates within CPT clusters. For the data captured, most surgeons performed cases spanning a median of 5 CPT clusters (range 1 to 15 clusters), with a median of 26 cases (range 1 to 776 cases) and a median of 2.8 years (range 0 to 5.5 years). The highest volume surgeon for that CPT cluster set the benchmark for 6 of 16 CPT clusters for SSIs, 8 of 16 CPT clusters for AKIs, and 9 of 16 CPT clusters for mortality. The ABC method appears to be a sound and useful approach to identifying benchmark-setting surgeons within a single institution. Such surgeons may be able to help their peers improve their performance. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
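The ABC calculation described above can be sketched as follows. This is a hedged sketch of a commonly described formulation of the method, assuming the usual rule of pooling the best-ranked providers until they cover at least 10% of all cases, and the (events+1)/(cases+2) small-denominator adjustment; the surgeon data are invented.

```python
# Sketch of the Achievable Benchmark of Care (ABC) idea: rank providers by
# an adjusted adverse-event rate (lower is better), take the best performers
# until they cover at least 10% of all cases, and pool their results into
# the benchmark. The (events+1)/(cases+2) adjustment keeps tiny denominators
# from dominating the ranking. Data are illustrative, not NSQIP/MBSAQIP data.

def abc_benchmark(providers, coverage=0.10):
    """providers: list of (events, cases) tuples for one outcome/CPT cluster."""
    total_cases = sum(cases for _, cases in providers)
    ranked = sorted(providers, key=lambda p: (p[0] + 1) / (p[1] + 2))
    covered_events = covered_cases = 0
    for events, cases in ranked:
        covered_events += events
        covered_cases += cases
        if covered_cases >= coverage * total_cases:
            break
    return covered_events / covered_cases  # pooled benchmark rate

# Hypothetical surgeons: (adverse events, cases) within one CPT cluster.
surgeons = [(0, 40), (1, 120), (3, 60), (10, 100), (25, 180)]
print(round(abc_benchmark(surgeons), 4))
```

Note how the adjustment matters: the (1, 120) surgeon outranks the (0, 40) surgeon because a zero numerator over a small denominator is treated as weaker evidence of excellence, which is also why high-volume surgeons tend to set the benchmark, as the abstract reports.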
Chemotherapy Extravasation: Establishing a National Benchmark for Incidence Among Cancer Centers.
Jackson-Rose, Jeannette; Del Monte, Judith; Groman, Adrienne; Dial, Linda S; Atwell, Leah; Graham, Judy; O'Neil Semler, Rosemary; O'Sullivan, Maryellen; Truini-Pittman, Lisa; Cunningham, Terri A; Roman-Fischetti, Lisa; Costantinou, Eileen; Rimkus, Chris; Banavage, Adrienne J; Dietz, Barbara; Colussi, Carol J; Catania, Kimberly; Wasko, Michelle; Schreffler, Kevin A; West, Colleen; Siefert, Mary Lou; Rice, Robert David
2017-08-01
Given the high-risk nature and nurse sensitivity of chemotherapy infusion and extravasation prevention, as well as the absence of an industry benchmark, a group of nurses studied oncology-specific nursing-sensitive indicators. The purpose was to establish a benchmark for the incidence of chemotherapy extravasation with vesicants, irritants, and irritants with vesicant potential. Infusions with actual or suspected extravasations of vesicant and irritant chemotherapies were evaluated. Extravasation events were reviewed by type of agent, occurrence by drug category, route of administration, level of harm, follow-up, and patient referrals to surgical consultation. A total of 739,812 infusions were evaluated, with 673 extravasation events identified. Incidence for all extravasation events was 0.09%.
Ontology for Semantic Data Integration in the Domain of IT Benchmarking.
Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut
2018-01-01
A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.
Landercasper, Jeffrey; Fayanju, Oluwadamilola M; Bailey, Lisa; Berry, Tiffany S; Borgert, Andrew J; Buras, Robert; Chen, Steven L; Degnim, Amy C; Froman, Joshua; Gass, Jennifer; Greenberg, Caprice; Mautner, Starr Koslow; Krontiras, Helen; Ramirez, Luis D; Sowden, Michelle; Wexelman, Barbara; Wilke, Lee; Rao, Roshni
2018-02-01
Nine breast cancer quality measures (QM) were selected by the American Society of Breast Surgeons (ASBrS) for the Centers for Medicare and Medicaid Services (CMS) Quality Payment Programs (QPP) and other performance improvement programs. We report member performance. Surgeons entered QM data into an electronic registry. For each QM, aggregate "performance met" (PM) was reported (median, range and percentiles) and benchmarks (target goals) were calculated by CMS methodology, specifically, the Achievable Benchmark of Care™ (ABC) method. A total of 1,286,011 QM encounters were captured from 2011-2015. For 7 QM, first and last PM rates were as follows: (1) needle biopsy (95.8, 98.5%), (2) specimen imaging (97.9, 98.8%), (3) specimen orientation (98.5, 98.3%), (4) sentinel node use (95.1, 93.4%), (5) antibiotic selection (98.0, 99.4%), (6) antibiotic duration (99.0, 99.8%), and (7) no surgical site infection (98.8, 98.9%); all p values < 0.001 for trends. Variability and reasons for noncompliance by surgeon for each QM were identified. The CMS-calculated target goals (ABC™ benchmarks) for PM for 6 QM were 100%, suggesting that not meeting performance is a "never should occur" event. Surgeons self-reported a large number of specialty-specific patient-measure encounters into a registry for self-assessment and participation in QPP. Despite high levels of performance demonstrated initially in 2011 with minimal subsequent change, the ASBrS concluded "perfect" performance was not a realistic goal for QPP. Thus, after review of our normative performance data, the ASBrS recommended different benchmarks than CMS for each QM.
Development and Evaluation of a Success Index for Professionals in Postgraduate Training Programs
1993-02-26
[Table-of-contents fragments: "Predicting Success among Program Participants"; "AEGD Success and Career Success"] ...and general career success. Hough applied the principle of behavioral consistency and aspects of the biographical inventory to develop and... the opportunity to evaluate how measures of success in AEGD translate into career success. The 90 AERs were reviewed by two experienced senior dental
Droplet Sizing Research Program.
1986-03-10
of size and velocity distributions is needed. For example, fuel spray studies, aerosol studies, flue gas desulfurization, spray drying, paint... techniques are presented chronologically since there is a logical development as a function of time. Most of the significant technical accomplishments... signals with an apparently different size by using the following logic: droplets that produce a certain visibility are associated with a
2009-02-19
component usage. PH 13-8Mo is a precipitation-hardenable martensitic stainless steel combining excellent corrosion resistance with strength. Custom 465 is... a martensitic, age-hardenable stainless steel capable of about 1,724 MPa (250 ksi) UTS when peak-aged (H900 condition). Especially, this steel can... Five high strength steels (4340, 300M, AerMet 100, Ferrium S53, and Hy-Tuf) and four stainless steels (High Nitrogen, 13
Brandenburg, Marcus; Hahn, Gerd J
2018-06-01
Process industries typically involve complex manufacturing operations and thus require adequate decision support for aggregate production planning (APP). The need for powerful and efficient approaches to solve complex APP problems persists. Problem-specific solution approaches are advantageous compared to standardized approaches that are designed to provide basic decision support for a broad range of planning problems but inadequate to optimize under consideration of specific settings. This in turn calls for methods to compare different approaches regarding their computational performance and solution quality. In this paper, we present a benchmarking problem for APP in the chemical process industry. The presented problem focuses on (i) sustainable operations planning involving multiple alternative production modes/routings with specific production-related carbon emission and the social dimension of varying operating rates and (ii) integrated campaign planning with production mix/volume on the operational level. The mutual trade-offs between economic, environmental and social factors can be considered as externalized factors (production-related carbon emission and overtime working hours) as well as internalized ones (resulting costs). We provide data for all problem parameters in addition to a detailed verbal problem statement. We refer to Hahn and Brandenburg [1] for a first numerical analysis based on this benchmarking problem and for future research perspectives arising from it.
Marshall, Margaret A.
2014-11-04
In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper, although for clarity the critical assembly benchmark specifications are briefly discussed.
Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Briggs, J. Blair; Gulliford, Jim
2016-09-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, integral benchmark data that are available for nuclear data testing has continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing is discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874 critical, near-critical, or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points apiece, and 207 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.
Benchmarking routine psychological services: a discussion of challenges and methods.
Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick
2014-01-01
Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
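The benchmarking comparison described above can be sketched in a few lines. This is a hedged illustration assuming the common uncontrolled pre-post effect-size formulation (mean change divided by the baseline standard deviation); the patient scores are invented, and only the three depression (PHQ-9) benchmarks quoted in the abstract are taken from the source.

```python
# Minimal sketch: compute a pre-post effect size for one service and place
# it against the high/average/poor IAPT depression benchmarks (0.91, 0.73,
# 0.46) reported above. The PHQ-9 scores below are illustrative only.

import statistics

def prepost_es(pre, post):
    """Uncontrolled pre-post effect size: mean change / baseline SD."""
    return (statistics.mean(pre) - statistics.mean(post)) / statistics.stdev(pre)

def classify(es, high=0.91, average=0.73, poor=0.46):
    if es >= high:
        return "high"
    if es >= average:
        return "average"
    if es >= poor:
        return "below average"
    return "poor"

pre  = [18, 15, 20, 12, 17, 14, 19, 16]   # PHQ-9 at intake (invented)
post = [14, 13, 17, 11, 15, 12, 16, 14]   # PHQ-9 after treatment (invented)

es = prepost_es(pre, post)
print(round(es, 2), classify(es))
```

A real benchmarking exercise would also attach confidence intervals to the service-level effect size, as the abstract notes, rather than comparing point estimates alone.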
Medicare Part D Roulette: Potential Implications of Random Assignment and Plan Restrictions
Patel, Rajul A.; Walberg, Mark P.; Woelfel, Joseph A.; Amaral, Michelle M.; Varu, Paresh
2013-01-01
Background Dual-eligible (Medicare/Medicaid) beneficiaries are randomly assigned to a benchmark plan, which provides prescription drug coverage under the Part D benefit without consideration of their prescription drug profile. To date, the potential for beneficiary assignment to a plan with poor formulary coverage has been minimally studied and the resultant financial impact to beneficiaries unknown. Objective We sought to determine cost variability and drug use restrictions under each available 2010 California benchmark plan. Methods Dual-eligible beneficiaries were provided Part D plan assistance during the 2010 annual election period. The Medicare Web site was used to determine benchmark plan costs and prescription utilization restrictions for each of the six California benchmark plans available for random assignment in 2010. A standardized survey was used to record all de-identified beneficiary demographic and plan specific data. For each low-income subsidy-recipient (n = 113), cost, rank, number of non-formulary medications, and prescription utilization restrictions were recorded for each available 2010 California benchmark plan. Formulary matching rates (percent of beneficiary's medications on plan formulary) were calculated for each benchmark plan. Results Auto-assigned beneficiaries had only a 34% chance of being assigned to the lowest cost plan; the remainder faced potentially significant avoidable out-of-pocket costs. Wide variations between benchmark plans were observed for plan cost, formulary coverage, formulary matching rates, and prescription utilization restrictions. Conclusions Beneficiaries had a 66% chance of being assigned to a sub-optimal plan; thereby, they faced significant avoidable out-of-pocket costs. Alternative methods of beneficiary assignment could decrease beneficiary and Medicare costs while also reducing medication non-compliance. PMID:24753963
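The formulary matching rate defined above (percent of a beneficiary's medications on a plan's formulary) is simple to compute per benchmark plan; a minimal sketch follows, with invented drug and plan names rather than actual 2010 California benchmark plan formularies.

```python
# Sketch of the formulary matching rate used above: for each benchmark plan,
# the percentage of a beneficiary's medications that appear on the plan's
# formulary. Comparing rates across plans shows how random assignment can
# land a beneficiary in a poorly matched plan. Names are illustrative.

def formulary_matching_rate(medications, formulary):
    """Percent of the beneficiary's medications found on the plan formulary."""
    if not medications:
        return 0.0
    covered = sum(1 for drug in medications if drug in formulary)
    return 100.0 * covered / len(medications)

beneficiary_meds = ["metformin", "lisinopril", "atorvastatin", "insulin glargine"]

plans = {
    "Plan A": {"metformin", "lisinopril", "atorvastatin"},
    "Plan B": {"metformin", "insulin glargine"},
}

for name, formulary in sorted(plans.items()):
    print(name, formulary_matching_rate(beneficiary_meds, formulary))
```

Under random assignment, a beneficiary here has an even chance of the 75%-match plan or the 50%-match plan, which is the kind of avoidable gap the study quantifies.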
Detection of the aerolysin gene in Aeromonas hydrophila by the polymerase chain reaction.
Pollard, D R; Johnson, W M; Lior, H; Tyler, S D; Rozee, K R
1990-01-01
Synthetic oligonucleotide primers were used in a polymerase chain reaction (PCR) technique to detect the gene for aerolysin in strains of Aeromonas hydrophila and to screen for identical genes in A. caviae, A. sobria, and A. veronii isolated from patients with diarrheal disease. Primers targeted a 209-bp fragment of the aer gene coding for the beta-hemolysin and detected template DNA only in the PCR using nucleic acid (NA) from hemolytic strains of A. hydrophila which were also cytotoxic to Vero and CHO cells and enterotoxic in suckling-mouse assays. PCR amplification of NA from hemolytic A. sobria or nonhemolytic A. hydrophila and A. caviae strains was consistently negative. Primer specificity was determined in the PCR by using NA extracted from 56 strains of bacteria, including hemolytic Escherichia coli and Listeria monocytogenes as well as several recognized enteric pathogens defined in terms of their toxigenicity. The detection limit for the aerolysin gene by PCR amplification was 1 ng of total NA. The PCR clearly identified aerolysin-producing strains of A. hydrophila and may have application as a species-specific virulence test because other hemolytic Aeromonas species tested were negative. PMID:2254423
Journal of Engineering Thermophysics (Selected Articles),
1986-06-24
References: 3ao Yaolin, "The phenomenon of high temperature gas vibration in a compressor", progress report of Shenyang Aeroengine Company, iJZ1281, 1980... could be included objectively in the design. I. Introduction: For the aeroengine design of this date, the part span shroud usually is added on the... Shenyang Aeroengine Research Institute, 1978. ... Li Keming, Shenyang Aeroengine
Models for predicting the ratio of particulate pollutant concentrations inside vehicles to roadways
Hudda, N.; Fruin, S. A.
2013-01-01
Under closed-window driving conditions, the in-vehicle-to-outside (I/O) concentration ratio for traffic-related particulate pollutants ranges from nearly zero to one, and varies up to five-fold across a fleet of vehicles, thus strongly affecting occupant exposures. Concentrations of five particulate pollutants (particle-bound polycyclic aromatic hydrocarbons, black carbon, ultrafine particle number, and fine and coarse particulate mass) were measured simultaneously while systematically varying key influential parameters (i.e., vehicle type, ventilation, and speed). The I/O ratios for these pollutants were primarily determined by vehicle air exchange rate (AER), AER being mostly a function of ventilation setting (recirculation or outside air), vehicle characteristics (e.g., age, interior volume) and driving speed. Small (±0.15) but measurable differences in I/O ratios between pollutants were observed although ratios were highly correlated. This allowed us to build on previous studies of ultrafine particle number I/O ratios to develop predictive models for other particulate pollutants. These models explained over 60% of measured variation, using ventilation setting, driving speed, and easily-obtained vehicle characteristics as predictors. Our results suggest that I/O ratios for different particulate pollutants need not necessarily be measured individually and that exposure to all particulate pollutants may be reduced significantly by simple ventilation choices. PMID:23957386
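The kind of predictive model described above can be illustrated with a short sketch. To be clear, the coefficients below are entirely hypothetical placeholders, not the fitted values from this study; the sketch only shows the structure of a model that predicts an I/O ratio from ventilation setting, driving speed, and a vehicle characteristic.

```python
# Illustration of an I/O-ratio predictor of the form described above:
# ventilation setting (recirculation vs. outside air), driving speed, and an
# easily obtained vehicle characteristic (age) as predictors. Coefficients
# are invented placeholders, not the study's regression estimates.

def predict_io_ratio(recirculation, speed_kmh, vehicle_age_yr):
    base = 0.15 if recirculation else 0.55   # recirculation lowers AER, hence I/O
    ratio = base + 0.004 * speed_kmh + 0.01 * vehicle_age_yr
    return min(max(ratio, 0.0), 1.0)         # I/O ratios range from ~0 to 1

print(round(predict_io_ratio(True, 60, 5), 2))
print(round(predict_io_ratio(False, 60, 5), 2))
```

The qualitative behavior matches the abstract: switching from recirculation to outside air raises the predicted ratio far more than any plausible speed or vehicle-age effect, which is why simple ventilation choices dominate exposure.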
Martin, S T; Cardwell, S M; Nailor, M D; Gabardi, S
2014-04-01
Use of rituximab, a chimeric monoclonal antibody directed at the CD20 antigen, continues to increase in solid organ transplantation (SOT) for several off-label uses. In September 2013, the United States Food and Drug Administration (FDA) issued a Drug Safety Communication to oncology, rheumatology and pharmacy communities outlining a new Boxed Warning for rituximab. Citing 109 cases of fatal hepatitis B virus (HBV) reactivation in persons receiving rituximab therapy with previous or chronic HBV infection documented in their Adverse Event Reporting System (AERS), the FDA recommends screening for HBV serologies in all patients planned to receive rituximab and antiviral prophylaxis in any patient with a positive history of HBV infection. There is a lack of data pertaining to this topic in the SOT population despite an increase in off-label indications. Previous reports suggest patients receiving rituximab, on average, were administered six doses prior to HBV reactivation. Recommendations on prophylaxis, treatment and re-challenging patients with therapy after resolution of reactivation remain unclear. Based on data from the FDA AERS and multiple analyses in oncology, SOT providers utilizing rituximab should adhere to the FDA warnings and recommendations regarding HBV reactivation until further data are available in the SOT population. © Copyright 2014 The American Society of Transplantation and the American Society of Transplant Surgeons.
Impact of starvation on survival, meat condition and metabolism of Chlamys farreri
NASA Astrophysics Data System (ADS)
Yang, Hong-Sheng; Wang, Jian; Zhou, Yi; Wang, Ping; He, Yi-Chao; Zhang, Fu-Sui
2001-03-01
The effects of 60-day starvation on survival rate, condition index (CI), changes of nutrient composition of different tissues, respiration and excretion of the scallop Chlamys farreri were studied in the laboratory from Oct. 17 to Dec. 15, 1997. Two groups (control and starvation, 200 individuals each) were cultured in two 2 m³ tanks with water of salinity 31-32 at 17°C. Starvation effects were measured after 10, 20, 40 and 60 days. There was no mass mortality in either tank; survival rates of the control and starvation groups were 93.5% and 92.0%, respectively. Starvation had a strong effect on the meat condition of the scallops, especially after 10 days, when the relative lipid percentage dropped sharply while the relative protein percentage increased. The impact of starvation on the oxygen consumption rate (OCR) and the ammonia-N excretion rate (AER) was pronounced. The OCR increased rapidly after 10 days but decreased after 20 days. The AER increased after 10 and 20 days, but decreased markedly from 20 to 40 days. The O:N ratios varied to different degrees, reaching a minimum after 20 days. The low O:N ratios implied that protein was the main substrate for the metabolism of C. farreri.
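The O:N ratio used above compares oxygen consumed to nitrogen excreted in atomic equivalents; a commonly used formulation divides the oxygen consumption rate (mg O2) by 16 and the ammonia-N excretion rate (mg N) by 14. The sketch below assumes that formulation, and the example rates are invented rather than taken from this study.

```python
# Atomic O:N ratio from oxygen consumption (OCR, mg O2) and ammonia-N
# excretion (AER, mg N): mg O2 / 16 gives milligram-atoms of O, and
# mg N / 14 gives milligram-atoms of N. A low O:N indicates that protein
# dominates catabolism. Rates below are illustrative, not study data.

def on_ratio(ocr_mg_o2, aer_mg_n):
    """Atomic O:N ratio."""
    return (ocr_mg_o2 / 16.0) / (aer_mg_n / 14.0)

# A falling O:N over time would suggest increasing reliance on protein:
for day, ocr, aer in [(10, 1.20, 0.060), (20, 0.84, 0.070)]:
    print(day, round(on_ratio(ocr, aer), 1))
```

In this hypothetical series the ratio falls between day 10 and day 20, the same direction of change the abstract reports when the scallops shift toward protein catabolism.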
Law, Lee Yong; Lin, Jun Sheng; Becker, David L; Green, Colin R
2002-12-01
In the developing chick wing, the use of antisense oligodeoxynucleotides to transiently knock down the expression of the gap junction protein, connexin43 (Cx43), results in limb patterning defects, including deletion of the anterior digits. To understand more about how such defects arise, the effects of transient Cx43 knockdown on the expression patterns of several genes known to play pivotal roles in limb formation were examined. Sonic hedgehog (Shh), which is normally expressed in the zone of polarizing activity (ZPA) and is required to maintain both the ZPA and the apical ectodermal ridge (AER), was found to be downregulated in treated limbs within 30 h. Bone morphogenetic protein-2 (Bmp-2), a gene downstream of Shh, was similarly downregulated. Fibroblast growth factor-8 expression, however, was unaltered 30 h after treatment but was greatly reduced at 48 h post-treatment, when the AER begins to regress. Expressions of Bmp-4 and Muscle segment homeobox-like gene (Msx-1) were not affected at any of the time points examined. Cx43 expression is therefore involved in some, but not all patterning cascades, and appears to play a role in the regulation of ZPA activity.
Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia
2017-01-01
This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Future hospital benchmarking should consider the impact of variation in admission thresholds.
NAS Parallel Benchmark Results 11-96. 1.0
NASA Technical Reports Server (NTRS)
Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.
Benchmarking Big Data Systems and the BigData Top100 List.
Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann
2013-03-01
"Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TPC), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.
Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G
2016-10-01
Differences in research methodology have hampered the optimization of Computed Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI positive tissue voxels. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.
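The volumetric-accuracy statistics reported above (mean difference, mean absolute difference, and voxel-level sensitivity/specificity against DWI) can be sketched as follows. This is a minimal illustration assuming binary voxel masks per patient; the function name `volumetric_accuracy` and the `voxel_ml` parameter are hypothetical, not taken from the benchmarking tool itself.

```python
import numpy as np

def volumetric_accuracy(ctp_masks, dwi_masks, voxel_ml=0.001):
    """Compare predicted CTP core masks against DWI reference masks.

    ctp_masks, dwi_masks: lists of equally shaped boolean arrays, one per patient.
    voxel_ml: volume of a single voxel in millilitres (assumed here).
    """
    diffs = []
    tp = fp = fn = tn = 0
    for ctp, dwi in zip(ctp_masks, dwi_masks):
        # signed volume difference in ml for this patient
        diffs.append((ctp.sum() - dwi.sum()) * voxel_ml)
        tp += np.sum(ctp & dwi)
        fp += np.sum(ctp & ~dwi)
        fn += np.sum(~ctp & dwi)
        tn += np.sum(~ctp & ~dwi)
    diffs = np.asarray(diffs)
    return {
        "mean_difference_ml": diffs.mean(),
        "mean_abs_difference_ml": np.abs(diffs).mean(),
        "sensitivity": tp / (tp + fn),   # fraction of DWI-positive voxels found
        "specificity": tn / (tn + fp),   # fraction of DWI-negative voxels excluded
    }
```

The signed mean difference captures systematic over- or under-estimation of core volume, while the mean absolute difference captures per-patient error magnitude, which is why the abstract reports both.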
2015-01-01
Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the “artificial enrichment” and “analogue bias” of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD. PMID:24749745
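The evaluation metric described above, the average AUC of ROC curves, reduces for a single target to the probability that a randomly chosen ligand outscores a randomly chosen decoy (the Mann-Whitney interpretation of AUC). A minimal sketch; the `roc_auc` helper is hypothetical and not part of the ULS/UDS tooling:

```python
def roc_auc(ligand_scores, decoy_scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen ligand (active) scores higher than a randomly chosen
    decoy, with ties counting half. O(n*m), fine for small lists."""
    wins = 0.0
    for l in ligand_scores:
        for d in decoy_scores:
            if l > d:
                wins += 1.0
            elif l == d:
                wins += 0.5
    return wins / (len(ligand_scores) * len(decoy_scores))
```

An AUC of 0.5 corresponds to random ranking, which is what an unbiased benchmark should yield for a physicochemically matched but topologically dissimilar decoy set when the scoring method carries no real signal.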
Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon
2014-05-27
Kamel Boulos, M N; Roudsari, A V; Gordon, C; Muir Gray, J A
2001-01-01
In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round the clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge based communities, each concerned with some specific clinical and other health-related topics. This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has been also reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed based on the reviewed quality benchmarking systems. 
Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space.
A proposed benchmark problem for cargo nuclear threat monitoring
NASA Astrophysics Data System (ADS)
Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.
2011-10-01
There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring, with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration is arranged such that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
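For a rough sanity check of such a layered-shield geometry, the uncollided flux along a ray through the three materials follows exponential attenuation, exp(-Σ μᵢxᵢ). A sketch with illustrative linear attenuation coefficients; the values below are assumptions for demonstration, not the benchmark specification, which is defined by the MCNP5 model:

```python
import math

# Hypothetical linear attenuation coefficients (1/cm) near 662 keV (137Cs);
# illustrative values only, NOT the benchmark's material specification.
MU = {"lead": 1.25, "aluminum": 0.20, "plywood": 0.06}

def uncollided_fraction(thicknesses_cm):
    """Fraction of source photons reaching the detector without interacting,
    for a ray crossing each shielding layer in turn: prod_i exp(-mu_i * x_i)."""
    total_optical_depth = 0.0
    for material, x in thicknesses_cm.items():
        total_optical_depth += MU[material] * x
    return math.exp(-total_optical_depth)
```

A full Monte Carlo treatment also transports the scattered (collided) component, which is precisely where variance reduction techniques, and hence this benchmark, matter.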
Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu
2016-01-01
A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit-that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: For example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
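Step 3 of the model, confirming that only random (common-cause) variation remains before comparing against a benchmark, can be sketched with a basic 3-sigma control check. This is a minimal illustration; real SPC charts also apply run rules (e.g. the Western Electric rules), and the function names here are ours:

```python
import statistics

def special_cause(points):
    """Return the points lying outside the 3-sigma control limits of the
    series (a minimal stability check over one monitoring period)."""
    mean = statistics.mean(points)
    sigma = statistics.pstdev(points)
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
    return [x for x in points if x > ucl or x < lcl]

def ready_to_benchmark(points):
    """Per the model: data are suitable for benchmark comparison only if
    no significant special-cause variation is present."""
    return len(special_cause(points)) == 0
```

A stable series passes and may proceed to the internal-threshold and benchmark comparisons (Steps 4-5); an unstable one first needs an action plan to control the variation, as in the HAI example.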
Successful hydraulic strategies to start up OLAND sequencing batch reactors at lab scale
Schaubroeck, Thomas; Bagchi, Samik; De Clippeleir, Haydée; Carballa, Marta; Verstraete, Willy; Vlaeminck, Siegfried E.
2012-01-01
Summary Oxygen‐limited autotrophic nitrification/denitrification (OLAND) is a one‐stage combination of partial nitritation and anammox, which can have a challenging process start‐up. In this study, start‐up strategies were tested for sequencing batch reactors (SBR), varying hydraulic parameters, i.e. volumetric exchange ratio (VER) and feeding regime, and salinity. Two sequential tests with two parallel SBR were performed, and stable removal rates > 0.4 g N l−1 day−1 with minimal nitrite and nitrate accumulation were considered a successful start‐up. SBR A and B were operated at 50% VER with 3 g NaCl l−1 in the influent, and the influent was fed over 8% and 82% of the cycle time respectively. SBR B started up in 24 days, but SBR A achieved no start‐up in 39 days. SBR C and D were fed over 65% of the cycle time at 25% VER, and salt was added only to the influent of SBR D (5 g NaCl l−1). Start‐up of both SBR C and D was successful in 9 and 32 days respectively. Reactor D developed a higher proportion of small aggregates (0.10–0.25 mm), with a high nitritation to anammox rate ratio, likely the cause of the observed nitrite accumulation. The latter was overcome by temporarily including an anoxic period at the end of the reaction phase. All systems achieved granulation and similar biomass‐specific nitrogen removal rates (141–220 mg N g−1 VSS day−1). FISH revealed a close juxtapositioning of aerobic and anoxic ammonium‐oxidizing bacteria (AerAOB and AnAOB), also in small aggregates. DGGE showed that AerAOB communities had a lower evenness than Planctomycetes communities. A higher richness of the latter seemed to be correlated with better reactor performance. Overall, the fast start‐up of SBR B, C and D suggests that stable hydraulic conditions are beneficial for OLAND while increased salinity at the tested levels is not needed for good reactor performance. PMID:22236147
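The volumetric removal rates (> 0.4 g N l⁻¹ day⁻¹) and biomass-specific rates (141-220 mg N g⁻¹ VSS day⁻¹) quoted above are related by a unit conversion through the biomass (VSS) concentration. A sketch; the 2 g VSS l⁻¹ figure in the test is an assumed value for illustration, not reported in the study:

```python
def biomass_specific_rate(volumetric_rate_g_n_per_l_day, vss_g_per_l):
    """Convert a volumetric nitrogen removal rate (g N l^-1 d^-1) into a
    biomass-specific rate (mg N g^-1 VSS d^-1): multiply by 1000 mg/g,
    then divide by the VSS concentration (g VSS l^-1)."""
    return volumetric_rate_g_n_per_l_day * 1000.0 / vss_g_per_l
```

For example, a reactor at the 0.4 g N l⁻¹ day⁻¹ threshold holding 2 g VSS l⁻¹ of biomass would sit at 200 mg N g⁻¹ VSS day⁻¹, inside the 141-220 range the study reports.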
Samman, Samir; McCarthur, Jennifer O; Peat, Mary
2006-01-01
Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirement, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions who agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.
Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.
2018-01-01
In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888
First benchmark of the Unstructured Grid Adaptation Working Group
NASA Technical Reports Server (NTRS)
Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike
2017-01-01
Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements necessary for simulation on complex curved geometries that satisfies a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.
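Metric-conforming adaptation, as described above, drives the mesh toward edges of unit length measured under the prescribed metric tensor M, i.e. length sqrt(eᵀMe) for edge vector e. A minimal sketch of that length computation (the function name is ours, not from any of the working group's codes):

```python
import math

def metric_edge_length(p0, p1, metric):
    """Length of the edge p0 -> p1 under a constant symmetric metric tensor
    M (2x2 or 3x3, given as nested lists): sqrt(e^T M e). Adaptive codes
    aim for edges of unit length in this measure."""
    e = [b - a for a, b in zip(p0, p1)]
    s = 0.0
    for i in range(len(e)):
        for j in range(len(e)):
            s += e[i] * metric[i][j] * e[j]
    return math.sqrt(s)
```

With the identity metric this reduces to Euclidean length; an anisotropic metric such as [[4,0],[0,1]] doubles lengths along the x-axis, which is how a metric field requests highly anisotropic elements near curved geometry.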
7 CFR 1485.15 - Activity plan.
Code of Federal Regulations, 2010 CFR
2010-01-01
... participant shall develop a specific activity plan(s) based on its strategic plan and the allocation approval... any changes in strategy from the strategic plan; (iii) A budget for each proposed activity, identifying the source of funds; (iv) Specific goals and benchmarks to be used to measure the effectiveness of...
2012-08-01
This proceedings report presents the outcomes from an international workshop designed to establish consensus on: definitions for key performance indicators (KPIs) for oocyte and embryo cryopreservation, using either slow freezing or vitrification; minimum performance level values for each KPI, representing basic competency; and aspirational benchmark values for each KPI, representing best practice goals. This report includes general presentations about current practice and factors for consideration in the development of KPIs. A total of 14 KPIs were recommended and benchmarks for each are presented. No recommendations were made regarding specific cryopreservation techniques or devices, or whether vitrification is 'better' than slow freezing, or vice versa, for any particular stage or application, as this was considered to be outside the scope of this workshop. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Benchmark Problems for Space Mission Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard
2003-01-01
To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low altitude, near-circular Earth orbit, high altitude, highly elliptical Earth orbits, and large amplitude lissajous trajectories about co-linear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.
Automatic Thread-Level Parallelization in the Chombo AMR Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christen, Matthias; Keen, Noel; Ligocki, Terry
2011-05-26
The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already-used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.
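The auto-tuning methodology mentioned above can be caricatured as a brute-force sweep that times each candidate parameter value (tile size, thread count, etc.) and keeps the fastest. A language-agnostic sketch in Python, not Chombo's actual mechanism:

```python
import time

def autotune(kernel, candidate_params, runs=3):
    """Return the parameter value giving the lowest median runtime of
    `kernel(param)` over `runs` repetitions (median resists timing noise).
    A minimal sweep in the spirit of auto-tuning frameworks."""
    best, best_t = None, float("inf")
    for p in candidate_params:
        times = []
        for _ in range(runs):
            t0 = time.perf_counter()
            kernel(p)
            times.append(time.perf_counter() - t0)
        t = sorted(times)[len(times) // 2]  # median runtime for this candidate
        if t < best_t:
            best, best_t = p, t
    return best
```

Production auto-tuners prune the search space and cache results per machine, but the measure-compare-select loop is the same.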
A web-based system architecture for ontology-based data integration in the domain of IT benchmarking
NASA Astrophysics Data System (ADS)
Pfaff, Matthias; Krcmar, Helmut
2018-03-01
In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
2013-01-08
This research ignores effects on long-term durability, trafficability, temperature, rebar corrosion, and other concerns that are of minimal... concrete because it can cause corrosion of steel reinforcement. However, the corrosion problem develops slowly with time; therefore, this problem has a... ERDC/CRREL TR-13-1, Laboratory Evaluation of Expedient Low-Temperature Concrete Admixtures for Repairing Blast Holes in Cold
Iron-Associated Outer Membrane Proteins of Magnetic Bacteria
1989-06-16
contributed partial support for the training of four Ph.D. graduate students: Lawrence C. Paoletti - male, Caucasian; Kevin A. Short - male, Caucasian; Yuri... weight standards (Bio-Rad Laboratories, Richmond, Calif.) were solubilized and separated by the electrophoretic... Selectively Releases Periplasmic Proteins. Lawrence C. Paoletti, Kevin A. Short, Nancy Blakemore, and Richard P. Blakemore.
Pathophysiology and Toxicokinetic Studies of Blue-Green Algae Intoxication in the Swine Model
1987-08-31
bovine serum albumin caused similar reductions in toxicity, suggesting nonspecific protein binding. Saturation at the N-methyldehydroalanine of MCYST-A... need to conserve toxin and the goal of creating a model for naturally occurring toxicosis caused us to evaluate the oral administration of microcystin... the isolated ileal loop preparation. Pancreatic enzymes and protein binding were examined as 2 possible causes for the low oral toxicity of MCYST-A
Breast Cancer Research Training Grant
1997-10-01
students and answer questions and plan each student's curriculum. Trainees are encouraged to consult any of the participating faculty for general advice... Room L301, refreshments at 1:00. September 20: Alexander Urbano, Identification of novel endonucleases in drug-, Department of Pathology & Laboratory... highly relevant. Plan to attend if you can. AER/cs. Special Seminar 11/7/96, Dr. Harold Varmus, Boston University School of
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report, December... Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area
Photothermal Imaging of Defects in Metals and Ceramics.
1986-10-01
[24] G. Busse and A. Rosencwaig, "Subsurface imaging with photoacoustics," Appl. Phys. Lett., vol. 36, p. 815, 1980. [25] G. S. Cargill, "Electron-acoustic microscopy," ... "different orientations," Harwell AERE Report R11686, Apr. 1985. [35] G. Busse and A. Rosencwaig, "Subsurface imaging with photo-... [64] R. J
Stress Corrosion Cracking of High Strength Steels
1995-06-01
R. Brown, J. H. Graves, E. U. Lee, C. E. Neu and J. Kozol, "Corrosion Behavior of High Strength Steels for Aerospace Applications," Proceedings of... Stress Corrosion Cracking of High Strength Steels. Eun U. Lee, Henry Sanders and Bhaskar Sarkar, Naval Air Warfare Center Aircraft Division, Patuxent River, Maryland 20670. ABSTRACT: Stress corrosion cracking (SCC) was investigated for AerMet 100 and 300M steels in four aqueous NaCl
Lapão, Luís Velez
2015-01-01
The article by Catan et al. presents a benchmarking exercise comparing Israel and Portugal on the implementation of Information and Communication Technologies in the healthcare sector. Special attention was given to e-Health and m-Health. The authors collected information via a set of interviews with key stakeholders. They compared two different cultures and societies, which have reached slightly different implementation outcomes. Although the comparison is very enlightening, it is also challenging. Benchmarking exercises present a set of challenges, such as the choice of methodologies and the assessment of the impact on organizational strategy. Precise benchmarking methodology is a valid tool for eliciting information about alternatives for improving health systems. However, many beneficial interventions, which benchmark as effective, fail to translate into meaningful healthcare outcomes across contexts. There is a relationship between results and the innovational and competitive environments. Differences in healthcare governance and financing models are well known; but little is known about their impact on Information and Communication Technology implementation. The article by Catan et al. provides interesting clues about this issue. Public systems (such as those of Portugal, UK, Sweden, Spain, etc.) present specific advantages and disadvantages concerning Information and Communication Technology development and implementation. Meanwhile, private systems based fundamentally on insurance packages, (such as Israel, Germany, Netherlands or USA) present a different set of advantages and disadvantages - especially a more open context for innovation. Challenging issues from both the Portuguese and Israeli cases will be addressed. Clearly, more research is needed on both benchmarking methodologies and on ICT implementation strategies.
Vaccari, M; Foladori, P; Nembrini, S; Vitali, F
2018-05-01
One of the largest surveys in Europe of energy consumption in Italian wastewater treatment plants (WWTPs) is presented, based on 241 WWTPs and a total population equivalent (PE) of more than 9,000,000 PE. The study contributes towards standardised, resilient data and benchmarking and helps identify potentials for energy savings. In the energy benchmark, three indicators were used: specific energy consumption expressed per population equivalent (kWh PE⁻¹ year⁻¹), per cubic meter (kWh/m³), and per unit of chemical oxygen demand (COD) removed (kWh/kgCOD). The indicator kWh/m³, even though widely applied, resulted in a biased benchmark because it is highly influenced by stormwater and infiltration. Plants with combined sewer networks (often used in Europe) showed apparently better energy performance. Conversely, the indicator kWh PE⁻¹ year⁻¹ gave a more meaningful benchmark. High energy efficiency was associated with: (i) large plant capacity, (ii) higher COD concentration in the wastewater, (iii) separate sewer systems, (iv) capacity utilisation over 80%, and (v) high organic loads, but without overloading. The 25th percentile was proposed as a benchmark for four size classes: 23 kWh PE⁻¹ y⁻¹ for large plants > 100,000 PE; 42 kWh PE⁻¹ y⁻¹ for 10,000 < PE < 100,000; 48 kWh PE⁻¹ y⁻¹ for 2,000 < PE < 10,000; and 76 kWh PE⁻¹ y⁻¹ for small plants < 2,000 PE.
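The kWh PE⁻¹ year⁻¹ indicator and the 25th-percentile benchmark described above can be computed as follows. A sketch: the function names are ours, and the percentile uses one common linear-interpolation convention (conventions vary slightly between tools):

```python
def specific_energy_kwh_per_pe_year(annual_kwh, population_equivalent):
    """Specific energy consumption indicator, kWh PE^-1 yr^-1."""
    return annual_kwh / population_equivalent

def benchmark_25th_percentile(values):
    """25th percentile of the indicator within one plant size class, used
    as the benchmark: a plant at or below it is among the most efficient
    quarter of its class."""
    ordered = sorted(values)
    # linear interpolation between closest ranks
    k = 0.25 * (len(ordered) - 1)
    lo, hi = int(k), min(int(k) + 1, len(ordered) - 1)
    return ordered[lo] + (k - lo) * (ordered[hi] - ordered[lo])
```

For instance, a 100,000 PE plant consuming 2,300,000 kWh per year sits exactly at the 23 kWh PE⁻¹ y⁻¹ benchmark reported for large plants.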
Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia
2017-01-01
Introduction This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. Objectives The Centers for Medicare and Medicaid Services’ Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California’s (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. Methods We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. Results We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals’ mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals’ performance would appear better. Conclusion Future hospital benchmarking should consider the impact of variation in admission thresholds. PMID:29035176
Damorim, Igor Rodrigues; Santos, Tony Meireles; Barros, Gustavo Willames Pimentel; Carvalho, Paulo Roberto Cavalcanti
2017-04-01
Resistance and aerobic training are recommended as an adjunctive treatment for hypertension. However, the number of sessions required until the hypotensive effect of the exercise has stabilized has not been clearly established. To establish the adaptive kinetics of the blood pressure (BP) responses as a function of time and type of training in hypertensive patients. We recruited 69 patients with a mean age of 63.4 ± 2.1 years, randomized into one group of resistance training (n = 32) and another of aerobic training (n = 32). Anthropometric measurements were obtained, and one repetition maximum (1RM) testing was performed. BP was measured before each training session with a digital BP arm monitor. The 50 training sessions were categorized into quintiles. To compare the effect of BP reduction with both training methods, we used two-way analysis of covariance (ANCOVA) adjusted for the BP values obtained before the interventions. The differences between the moments were established by one-way analysis of variance (ANOVA). The reductions in systolic (SBP) and diastolic BP (DBP) were 6.9 mmHg and 5.3 mmHg, respectively, with resistance training, and 16.5 mmHg and 11.6 mmHg, respectively, with aerobic training. The kinetics of the hypotensive response of the SBP showed significant reductions until the 20th session in both groups. Stabilization of the DBP occurred in the 20th session of resistance training and in the 10th session of aerobic training. A total of 20 sessions of resistance or aerobic training are required to achieve the maximum benefits of BP reduction. The methods investigated yielded distinct adaptive kinetic patterns along the 50 sessions.
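The quintile grouping of the 50 sessions is a simple binning step. A hedged sketch with synthetic BP values (the plateau after session 20 mimics the reported stabilization; the data are not from the study):

```python
def quintile(session: int) -> int:
    """Map session number 1..50 to quintile 1..5 (10 sessions per quintile)."""
    return (session - 1) // 10 + 1

# Synthetic pre-session SBP: declines until session 20, then plateaus.
sbp = {s: 150 - 0.4 * min(s, 20) for s in range(1, 51)}

# Mean SBP per quintile, as a basis for the moment-to-moment comparisons.
groups: dict[int, list[float]] = {}
for s, value in sbp.items():
    groups.setdefault(quintile(s), []).append(value)
means = {q: sum(v) / len(v) for q, v in groups.items()}
```

With this synthetic pattern, quintiles 3 through 5 share the same mean, echoing the finding that no further reduction occurs after the 20th session.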
ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.
Teodoro, Douglas; Sundvall, Erik; João Junior, Mario; Ruch, Patrick; Miranda Freire, Sergio
2018-01-01
The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating insert throughput and query latency of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.
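The two performance measures demonstrated with ORBDA, insert throughput and query latency, can be sketched generically. An illustrative harness (not the ORBDA tooling itself) in which an in-memory dict stands in for a real NoSQL backend:

```python
import time

def bench_inserts(store: dict, records: list) -> float:
    """Time bulk inserts; return throughput in records per second."""
    start = time.perf_counter()
    for i, rec in enumerate(records):
        store[i] = rec
    elapsed = time.perf_counter() - start
    return len(records) / elapsed if elapsed > 0 else float("inf")

def bench_query_latency(store: dict, keys: list) -> float:
    """Time point lookups; return mean latency in seconds per query."""
    start = time.perf_counter()
    for k in keys:
        _ = store.get(k)
    return (time.perf_counter() - start) / len(keys)

store: dict = {}
# Stand-ins for composition records; payload size is arbitrary.
records = [{"composition_id": i, "payload": "x" * 64} for i in range(10_000)]
throughput = bench_inserts(store, records)
latency = bench_query_latency(store, list(range(0, 10_000, 100)))
print(f"{throughput:,.0f} inserts/s, {latency * 1e6:.2f} us/query")
```

Against a real openEHR store, the inserts would go through the server's API rather than a dict, but the timing structure is the same.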
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mkhabela, P.; Han, J.; Tyobeka, B.
2006-07-01
The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help focus development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)
Nobels, Frank; Debacker, Noëmi; Brotons, Carlos; Elisaf, Moses; Hermans, Michel P; Michel, Georges; Muls, Erik
2011-09-22
To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Recruitment was completed in December 2008 with 3994 evaluable patients. This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study, that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. NCT00681850.
Decontamination of Fast Reactor Hulls and Properties of Immobilised Waste Forms,
1986-10-01
but the results served as a useful guide for a preliminary experimental study of the decontamination of stainless steel hulls (5) using samples from... Study performed under contract No. 312-83-2WAS, UK, as part of the Commission of the European Communities' research program on 'Radioactive Waste'... development and training effort. March 1986. AERE R 11901 ABSTRACT: The studies described in this Report have been carried out on five
1988-01-01
under field conditions. Sampling and analytical laboratory activities were performed by Ecology and Environment, Inc., and California Analytical...the proposed AER3 test conditions. All test samples would be obtained onsite by Ecology and Environment, Inc., of Buffalo, New York, and sent to...ensuring its safe operation. Ecology and Environment performed onsite verification sampling. This activity was coordinated with the Huber project team
Critical Issues in Hydrogen Assisted Cracking of Structural Alloys
2006-01-01
does not precipitate? Does the HEAC mechanism explain environment-assisted (stress corrosion) crack growth in high strength alloys stressed in moist... superalloys were cracked in high pressure (100-200 MPa) H2, while maraging and tempered-martensitic steels were cracked in low pressure (~100 kPa) H2... of IRAC in ultra-high strength AerMet®100 steel demonstrates the role of crack tip stress in promoting H accumulation and embrittlement. The cracking
Dynamic Fracture Simulations of Explosively Loaded Cylinders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arthur, Carly W.; Goto, D. M.
2015-11-30
This report documents the modeling results of high explosive experiments investigating dynamic fracture of steel (AerMet® 100 alloy) cylinders. The experiments were conducted at Lawrence Livermore National Laboratory (LLNL) during 2007 to 2008 [10]. A principal objective of this study was to gain an understanding of dynamic material failure through the analysis of hydrodynamic computer code simulations. Two-dimensional and three-dimensional computational cylinder models were analyzed using the ALE3D multi-physics computer code.
Molecular mechanisms underlying the exceptional adaptations of batoid fins.
Nakamura, Tetsuya; Klomp, Jeff; Pieretti, Joyce; Schneider, Igor; Gehrke, Andrew R; Shubin, Neil H
2015-12-29
Extreme novelties in the shape and size of paired fins are exemplified by extinct and extant cartilaginous and bony fishes. Pectoral fins of skates and rays, such as the little skate (Batoid, Leucoraja erinacea), show a strikingly unique morphology where the pectoral fin extends anteriorly to ultimately fuse with the head. This results in a morphology that essentially surrounds the body and is associated with the evolution of novel swimming mechanisms in the group. In an approach that extends from RNA sequencing to in situ hybridization to functional assays, we show that anterior and posterior portions of the pectoral fin have different genetic underpinnings: canonical genes of appendage development control posterior fin development via an apical ectodermal ridge (AER), whereas an alternative Homeobox (Hox)-Fibroblast growth factor (Fgf)-Wingless type MMTV integration site family (Wnt) genetic module in the anterior region creates an AER-like structure that drives anterior fin expansion. Finally, we show that GLI family zinc finger 3 (Gli3), which is an anterior repressor of tetrapod digits, is expressed in the posterior half of the pectoral fin of skate, shark, and zebrafish but in the anterior side of the pelvic fin. Taken together, these data point to both highly derived and deeply ancestral patterns of gene expression in skate pectoral fins, shedding light on the molecular mechanisms behind the evolution of novel fin morphologies.
Bénazet, Jean-Denis; Zeller, Rolf
2009-10-01
A wealth of classical embryological manipulation experiments taking mainly advantage of the chicken limb buds identified the apical ectodermal ridge (AER) and the zone of polarizing activity (ZPA) as the respective ectodermal and mesenchymal key signaling centers coordinating proximodistal (PD) and anteroposterior (AP) limb axis development. These experiments inspired Wolpert's French flag model, which is a classic among morphogen gradient models. Subsequent molecular and genetic analysis in the mouse identified retinoic acid as proximal signal, and fibroblast growth factors (FGFs) and sonic hedgehog (SHH) as the essential instructive signals produced by AER and ZPA, respectively. Recent studies provide good evidence that progenitors are specified early with respect to their PD and AP fates and that morpho-regulatory signaling is also required for subsequent proliferative expansion of the specified progenitor pools. The determination of particular fates seems to occur rather late and depends on additional signals such as bone morphogenetic proteins (BMPs), which indicates that cells integrate signaling inputs over time and space. The coordinate regulation of PD and AP axis patterning is controlled by an epithelial-mesenchymal feedback signaling system, in which transcriptional regulation of the BMP antagonist Gremlin1 integrates inputs from the BMP, SHH, and FGF pathways. Vertebrate limb-bud development is controlled by a 4-dimensional (4D) patterning system integrating positive and negative regulatory feedback loops, rather than thresholds set by morphogen gradients.
Indoor air quality in green vs conventional multifamily low-income housing.
Colton, Meryl D; MacNaughton, Piers; Vallarino, Jose; Kane, John; Bennett-Fripp, Mae; Spengler, John D; Adamkiewicz, Gary
2014-07-15
Indoor air quality is an important predictor of health, especially in low-income populations. It is unclear how recent trends in "green" building affect the indoor exposure profile. In two successive years, we conducted environmental sampling, home inspections, and health questionnaires with families in green and conventional (control) apartments in two public housing developments. A subset of participants was followed as they moved from conventional to green or conventional to conventional housing. We measured particulate matter less than 2.5 μm aerodynamic diameter (PM2.5), formaldehyde, nitrogen dioxide (NO2), nicotine, carbon dioxide (CO2), and air exchange rate (AER) over a seven-day sampling period coincident with survey administration. In multivariate models, we observed 57%, 65%, and 93% lower concentrations of PM2.5, NO2, and nicotine (respectively) in green vs control homes (p=0.032, p<0.001, p=0.003, respectively), as well as fewer reports of mold, pests, inadequate ventilation, and stuffiness. Differences in formaldehyde and CO2 were not statistically significant. AER was marginally lower in green buildings (p=0.109). Participants in green homes experienced 47% fewer sick building syndrome symptoms (p<0.010). We observed significant decreases in multiple indoor exposures and improved health outcomes among participants who moved into green housing, suggesting multilevel housing interventions have the potential to improve long-term resident health.
Cardozo, Flávio Augusto; Gonzalez, Juan Miguel; Feitosa, Valker Araujo; Pessoa, Adalberto; Rivera, Irma Nelly Gutierrez
2017-10-27
N-Acetyl-D-glucosamine (GlcNAc) is a monosaccharide with great application potential in the food, cosmetic, pharmaceutical, and biomaterial areas. GlcNAc is currently produced by chemical hydrolysis of chitin, but the current processes are environmentally unfriendly and have low yield and high cost. This study demonstrates the potential to produce GlcNAc from α-chitin using chitinases of ten marine-derived Aeromonas isolates as a sustainable alternative to the current chemical process. The isolates were characterized as Aeromonas caviae by multilocus sequence analysis (MLSA) using six housekeeping genes (gltA, groL, gyrB, metG, ppsA, and recA); they did not carry the virulence genes tested (alt, act, ast, ahh1, aer, aerA, hlyA, ascV and ascFG), but showed hemolytic activity on blood agar. GlcNAc was produced at 37 °C, pH 5.0, 2% (w/v) colloidal chitin and crude chitinase extracts (0.5 U mL⁻¹) by all the isolates, with yields of 14-85% at 6 h, 17-89% at 12 h and 19-93% after 24 h. The highest yield of GlcNAc was observed for A. caviae CH129 (93%). This study demonstrates one of the most efficient chitin enzymatic hydrolysis procedures and A. caviae isolates with great potential for chitinase expression and GlcNAc production.
A pre-marketing ALT signal predicts post-marketing liver safety.
Moylan, Cynthia A; Suzuki, Ayako; Papay, Julie I; Yuen, Nancy A; Ames, Michael; Hunt, Christine M
2012-08-01
Drug induced liver injury during drug development is evidenced by a higher incidence of serum alanine aminotransferase (ALT) elevations in treated versus placebo populations and termed an "ALT signal". We sought to quantify whether an ALT signal in pre-marketing clinical trials predicted post-marketing hepatotoxicity. Incidence of ALT elevations (ALT ≥ 3 times upper limits normal [× ULN]) for drug and placebo of new chemical entities and approved drugs associated with hepatotoxicity was calculated using the Food and Drug Administration (FDA) website. Post-marketing liver safety events were identified using the FDA Adverse Event Reporting System (AERS). The association of FDA AERS signal score (EB05 ≥ 2) and excess risk of pre-marketing ALT elevation (difference in incidence of ALT ≥ 3× ULN in treated versus placebo) was examined. An ALT signal of ≥ 1.2% was significantly associated with a post-marketing liver safety signal (p ≤ 0.013) and a 71.4% positive predictive value. An absent ALT signal was associated with a high likelihood of post-marketing liver safety; negative predictive value of 89.7%. Daily drug dose information improved the prediction of post-marketing liver safety. A cut-off of 1.2% increase in ALT ≥ 3× ULN in treated versus placebo groups provides an easily calculated method for predicting post-marketing liver safety. Published by Elsevier Inc.
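The "easily calculated method" the abstract describes reduces to an excess-risk subtraction checked against the 1.2% cut-off. A minimal sketch with illustrative counts (the trial numbers below are invented for demonstration):

```python
def alt_excess_risk(treated_elevated: int, treated_n: int,
                    placebo_elevated: int, placebo_n: int) -> float:
    """Excess risk of ALT >= 3x ULN, in percentage points:
    incidence in the treated arm minus incidence in the placebo arm."""
    return 100 * (treated_elevated / treated_n - placebo_elevated / placebo_n)

def predicts_postmarketing_signal(excess_risk_pct: float,
                                  cutoff: float = 1.2) -> bool:
    """Apply the study's proposed 1.2 percentage-point cut-off."""
    return excess_risk_pct >= cutoff

# Hypothetical trial: 30/1000 elevations on drug vs 12/1000 on placebo.
risk = alt_excess_risk(30, 1000, 12, 1000)  # 3.0% - 1.2% = 1.8 points
print(risk, predicts_postmarketing_signal(risk))
```

Per the abstract, crossing the cut-off carried a 71.4% positive predictive value for a post-marketing liver safety signal, while staying below it had an 89.7% negative predictive value.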
Ahola, Aila J; Harjutsalo, Valma; Forsblom, Carol; Groop, Per-Henrik
2014-08-01
Hypertension and depression are frequent comorbidities of diabetes. Studies suggest that antihypertensive medication affecting the renin-angiotensin-aldosterone system (RAAS) might also relieve depression. Whether this is also seen in patients with type 1 diabetes is not known. We therefore studied whether use of RAAS-modifying medication is associated with reduced antidepressant use in type 1 diabetes. In all, 1,705 participants in the FinnDiane Study were included (57 % men, mean age 46 ± 11 years). Data on medications were obtained from the Drug Prescription Register. Based on their albumin excretion rate (AER), the patients were classified as having normal AER, microalbuminuria, or macroalbuminuria. Diabetic nephropathy was defined as macroalbuminuria or end-stage renal disease (dialysis or renal transplant). A total of 8.4 and 10.9 % of patients with and without RAAS-modifying medication, respectively, had antidepressant medication purchases (NS). In logistic regression analysis, after adjusting for potential confounding factors, use of RAAS-modifying medication was not associated with antidepressant purchases. However, when patients with and without diabetic nephropathy were analyzed separately, RAAS-modifying medication was associated with lower frequency of antidepressant purchases among patients with established diabetic nephropathy. In conclusion, use of RAAS-modifying medication may improve mood in patients with type 1 diabetes and established diabetic nephropathy.
Sun, Jin; Li, Guang; Liang, WanZhen
2015-07-14
A real-time time-dependent density functional theory coupled with the classical electrodynamics finite difference time domain technique is employed to systematically investigate the optical properties of hybrid systems composed of silver nanoparticles (NPs) and organic adsorbates. The results demonstrate that the molecular absorption spectra throughout the whole energy range can be enhanced by the surface plasmon resonance of Ag NPs; however, the absorption enhancement ratio (AER) for each absorption band differs significantly from the others, leading to the quite different spectral profiles of the hybrid complexes in contrast to those of isolated molecules or sole NPs. Detailed investigations reveal that the AER is sensitive to the energy gap between the molecular excitation and plasmon modes. As anticipated, two separate absorption bands, corresponding to the isolated molecules and sole NPs, have been observed at a large energy gap. When the energy gap approaches zero, the molecular excitation strongly couples with the plasmon mode to form the hybrid exciton band, which possesses the significantly enhanced absorption intensity, a red-shifted peak position, a surprising strongly asymmetric shape of the absorption band, and the nonlinear Fano effect. Furthermore, the dependence of surface localized fields and the scattering response functions (SRFs) on the geometrical parameters of NPs, the NP-molecule separation distance, and the external-field polarizations has also been depicted.
Accumulating Evidence for a Drug–Drug Interaction Between Methotrexate and Proton Pump Inhibitors
Mackey, Ann Corken; Kluetz, Paul; Jappar, Dilara; Korvick, Joyce
2012-01-01
Background. A number of medications are known to interact with methotrexate through various mechanisms. The aim of this article is to apprise practitioners of a new labeling change based on the accumulating evidence for a possible drug–drug interaction between methotrexate (primarily at high doses) and proton pump inhibitors (PPIs). Methods. The U.S. Food and Drug Administration (FDA) Adverse Event Reporting System (AERS) database of spontaneous adverse event reports and the published literature were searched for cases reporting an interaction between methotrexate and PPIs. Results. A search of the AERS database and existing literature found several individual case reports of drug–drug interactions and three additional supportive studies that suggest potential underlying mechanisms for the interaction. Conclusion. There is evidence to suggest that concomitant use of methotrexate (primarily at high doses) with PPIs such as omeprazole, esomeprazole, and pantoprazole may decrease methotrexate clearance, leading to elevated serum levels of methotrexate and/or its metabolite hydroxymethotrexate, possibly leading to methotrexate toxicities. In several case reports, no methotrexate toxicity was found when a histamine H2 blocker was substituted for a PPI. Based on the reviewed data, the FDA updated the methotrexate label to include the possible drug–drug interaction between high-dose methotrexate and PPIs. Physicians should be alerted to this potential drug–drug interaction in patients receiving concomitant high-dose methotrexate and PPIs. PMID:22477728
Using Grid Benchmarks for Dynamic Scheduling of Grid Applications
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert
2003-01-01
Navigation or dynamic scheduling of applications on computational grids can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources but do not contain any application-specific information. We define a GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using the grid benchmarks. Then we use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.
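A speculative sketch of benchmark-driven assignment in the spirit of the GridScape idea: each resource carries a measured benchmark score, and tasks are greedily placed where the resulting load (work over measured capacity) is lowest. Names, numbers, and the greedy rule are illustrative, not the paper's actual protocol:

```python
def assign_tasks(tasks: dict, resources: dict) -> dict:
    """tasks: {name: work units}; resources: {name: benchmarked capacity}.
    Place each task (largest first) on the resource whose load, measured as
    assigned work divided by benchmarked capacity, would stay lowest."""
    load = {r: 0.0 for r in resources}
    placement = {}
    for task, work in sorted(tasks.items(), key=lambda kv: -kv[1]):
        best = min(resources, key=lambda r: (load[r] + work) / resources[r])
        placement[task] = best
        load[best] += work
    return placement

tasks = {"t1": 40, "t2": 30, "t3": 20, "t4": 10}
resources = {"fast": 10.0, "slow": 5.0}  # benchmark scores from measurement
print(assign_tasks(tasks, resources))
```

With these numbers the faster resource absorbs most of the work, but the slower one still receives a task once the fast one is loaded, which is the load-balancing behavior a benchmark-aware scheduler aims for.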
Automatic Keyword Extraction from Individual Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.
2010-05-03
This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
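The core idea described above, extracting candidate keywords as word sequences delimited by stop words and scoring them by word co-occurrence, can be sketched compactly. A simplified, hedged rendition (not the authors' code; the stop-word list and scoring are a minimal stand-in):

```python
import re

# Tiny illustrative stop-word list; the paper generates such lists per corpus.
STOP_WORDS = {"a", "an", "and", "of", "the", "for", "in", "on", "is", "to", "from"}

def candidate_phrases(text: str) -> list:
    """Split text into candidate phrases at stop words and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOP_WORDS:
            if current:
                phrases.append(tuple(current))
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(tuple(current))
    return phrases

def score_keywords(text: str) -> dict:
    """Score each candidate phrase by summing per-word degree/frequency."""
    phrases = candidate_phrases(text)
    freq, degree = {}, {}
    for phrase in phrases:
        for w in phrase:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(phrase)  # co-occurrence degree
    word_score = {w: degree[w] / freq[w] for w in freq}
    return {" ".join(p): sum(word_score[w] for w in p) for p in set(phrases)}

scores = score_keywords("Automatic keyword extraction from individual documents")
print(max(scores, key=scores.get))
```

The degree/frequency ratio favors words that occur inside longer candidate phrases, so multi-word keywords tend to outrank isolated frequent words.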
Design and development of a community carbon cycle benchmarking system for CMIP5 models
NASA Astrophysics Data System (ADS)
Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Randerson, J. T.
2013-12-01
Benchmarking has been widely used to assess the ability of atmosphere, ocean, sea ice, and land surface models to capture the spatial and temporal variability of observations during the historical period. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we designed and developed a software system that enables the user to specify the models, benchmarks, and scoring systems so that results can be tailored to specific model intercomparison projects. We used this system to evaluate the performance of CMIP5 Earth system models (ESMs). Our scoring system used information from four different aspects of climate, including the climatological mean spatial pattern of gridded surface variables, seasonal cycle dynamics, the amplitude of interannual variability, and long-term decadal trends. We used this system to evaluate burned area, global biomass stocks, net ecosystem exchange, gross primary production, and ecosystem respiration from CMIP5 historical simulations. Initial results indicated that the multi-model mean often performed better than many of the individual models for most of the observational constraints.
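A toy version of the scoring approach can be sketched as below. This is an invented illustration in the spirit of the system described, not the ILAMB code: the real scores operate on gridded fields, seasonal cycles, interannual variability, and trends rather than simple totals, and the numbers here are made up:

```python
import math

def variable_score(model, obs):
    """Map a relative error onto [0, 1]; 1.0 is a perfect match."""
    rel_err = abs(sum(model) - sum(obs)) / abs(sum(obs))
    return math.exp(-rel_err)

def overall_score(scores, weights):
    """Weighted combination across aspects (mean state, seasonality, ...)."""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# invented numbers: annual totals for one model variable vs. observations
s = variable_score([4.1, 3.9, 4.4], [4.0, 4.0, 4.0])
combined = overall_score([s, 0.5], [1.0, 1.0])
```

Mapping each error onto a common [0, 1] scale is what lets a benchmarking system compare and combine scores across very different variables, as the abstract describes for burned area, biomass, and carbon fluxes.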
Roudsari, AV; Gordon, C; Gray, JA Muir
2001-01-01
Background In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round-the-clock access to high-quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge-based communities, each concerned with specific clinical and other health-related topics. Objectives This study concerns the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries, after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, was chosen as a starting point. Methods Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has also been reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Results Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed, based on the reviewed quality benchmarking systems.
Conclusions Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space. PMID:11720947
Modernised Portuguese schools - From IAQ and thermal comfort towards energy efficiency plans
NASA Astrophysics Data System (ADS)
Pereira, Luisa Maria Dias
A major rehabilitation and refurbishment programme of secondary school buildings, known as the Secondary School Buildings Modernisation Programme, has been carried out in the last few years in Portugal, led by the state-owned company Parque Escolar E.P.E. (PE). This programme took into consideration renewable energy systems, mostly solar panels for domestic hot water (DHW) production. Nevertheless, with the introduction of HVAC systems in buildings that were previously naturally ventilated, an increase in energy consumption has been observed. During the first occupancy phase of new and refurbished buildings, energy and indoor climate quality (ICQ) audits are important strategies to improve the buildings’ energy use. In new buildings, the most common errors are due to poor operation and management. School energy management programmes often result in a list of energy efficiency measures that do not necessarily reflect occupants’ conditions or satisfaction; they are directed more towards management control and comparison with benchmarks of energy use/m² or cost/student to assess energy efficiency. In all cases, monitoring and consumption patterns are mandatory. In this context, this thesis aims at developing energy efficiency plans (EEP) for modernised Portuguese school buildings. The framework of the thesis starts with an international overview of recent research and development in the field of energy consumption in schools [searching for statistical benchmarks that could contribute to an accurate school building indicator (SBI)]. Then, based on a database provided by Parque Escolar, an energy consumption assessment of Portuguese school buildings is presented, comparing the pre- and post-intervention phases. Drawing on this procedure, eight representative modernised secondary schools were selected, distributed geographically and climatically. Energy audits and indoor environment quality (IEQ) monitoring were then performed on this selection of schools.
The continuous monitoring period varied between schools, from a minimum of 48 h up to three weeks, during the mid-season [spring-autumn period (excluding summer vacation) in 2013]. Air exchange rates (AER), more specifically infiltration rates, are quantified with the aim of determining the current airtightness condition of the refurbished schools. A subjective IEQ assessment is also performed, focusing on occupants’ feedback and providing insight into the potential linkages between energy use and occupants’ satisfaction and comfort. The thesis builds on the current EEP panorama and practice, which is based only on cost/energy control, extending it to address the equilibrium between IEQ evaluation and occupants’ perceived conditions/preferences. This approach is applied in two schools, selected based on the previous study of the energy and IEQ conditions of the eight schools. The EEP methodology starts by deepening the knowledge of each school, mostly focusing on crossing the school occupancy schedule with systems operation (mainly the systems controlled by the building management system, BMS). An analysis of recently updated legislation is also performed (in particular, fresh air flow rate requirements). It is shown that some potential energy savings can be achieved and that IEQ conditions can be improved at very low or even negligible cost. Other considerations, namely addressing the thermal energy production systems of the schools (e.g., boiler scheduling), the lighting systems (e.g., lighting circuits) and non-controlled plug loads, are also mentioned. Based upon all these findings, a handbook of good practice is drafted for secondary school buildings in Portugal. The EEP is accompanied by a list of Energy Efficiency Measures (EEM). It is proposed that this document be headed by a School Energy Performance Certificate (S-EPC) based on billed energy consumption. The document also suggests establishing the figure of the Energy Manager.
NASA Astrophysics Data System (ADS)
Steefel, C. I.
2015-12-01
Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of the subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations that are far simpler than typical applications, which combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable for qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard), but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally relevant benchmark problem that tests conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include (1) microbially-mediated reactions, (2) isotopes, (3) multi-component diffusion, (4) uranium fate and transport, (5) metal mobility in mining-affected systems, and (6) waste repositories and related aspects.
NASA Technical Reports Server (NTRS)
Davis, G. J.
1994-01-01
One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as to compare alternate machines on Lisp and/or Ada. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14 were developed or translated at Ames and are provided within this package; the others are readily available in the literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
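Two of the numeric kernels the suite exercises, Cholesky decomposition and substitution, can be sketched as follows. This is a hedged illustration of the standard algorithms, not the ELAPSE source (which is in Ada and Lisp); the example matrix is invented:

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L * L^T, for a symmetric
    positive-definite matrix A given as a list of lists."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve(L, b):
    """Solve A x = b via forward substitution (L y = b) followed by
    backward substitution (L^T x = y)."""
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]          # invented SPD test matrix
x = solve(cholesky(A), [2.0, 3.0])    # solves A x = b
```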
Gururaj, Anupama E.; Chen, Xiaoling; Pournejati, Saeid; Alter, George; Hersh, William R.; Demner-Fushman, Dina; Ohno-Machado, Lucila
2017-01-01
The rapid proliferation of publicly available biomedical datasets has provided abundant resources that are potentially of value as a means to reproduce prior experiments, and to generate and explore novel hypotheses. However, there are a number of barriers to the re-use of such datasets, which are distributed across a broad array of dataset repositories, focusing on different data types and indexed using different terminologies. New methods are needed to enable biomedical researchers to locate datasets of interest within this rapidly expanding information ecosystem, and new resources are needed for the formal evaluation of these methods as they emerge. In this paper, we describe the design and generation of a benchmark for information retrieval of biomedical datasets, which was developed and used for the 2016 bioCADDIE Dataset Retrieval Challenge. In the tradition of the seminal Cranfield experiments, and as exemplified by the Text Retrieval Conference (TREC), this benchmark includes a corpus (biomedical datasets), a set of queries, and relevance judgments relating these queries to elements of the corpus. This paper describes the process through which each of these elements was derived, with a focus on those aspects that distinguish this benchmark from typical information retrieval reference sets. Specifically, we discuss the origin of our queries in the context of a larger collaborative effort, the biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE) consortium, and the distinguishing features of biomedical dataset retrieval as a task. The resulting benchmark set has been made publicly available to advance research in the area of biomedical dataset retrieval. Database URL: https://biocaddie.org/benchmark-data PMID:29220453
Buell, G.R.; Grams, S.C.
1985-01-01
Significant temporal trends in monthly pH, specific conductance, total alkalinity, hardness, total nitrite-plus-nitrate nitrogen, and total phosphorus measurements at five stream sites in Georgia were identified using a rank correlation technique, the seasonal Kendall test and slope estimator. These sites include a U.S. Geological Survey Hydrologic Bench-Mark site, Falling Creek near Juliette, and four periodic water-quality monitoring sites. Comparison of raw data trends with streamflow-residual trends and, where applicable, with chemical-discharge trends (instantaneous fluxes) shows that some of these trends are responses to factors other than changing streamflow. Percentages of forested, agricultural, and urban cover within each basin did not change much during the periods of water-quality record, and therefore these non-flow-related trends are not obviously related to changes in land cover or land use. Flow-residual water-quality trends at the Hydrologic Bench-Mark site and at the Chattooga River site probably indicate basin responses to changes in the chemical quality of atmospheric deposition. These two basins are predominantly forested and have received little recent human use. Observed trends at the other three sites probably indicate basin responses to various land uses and water uses associated with agricultural and urban land, or to changes in specific uses. (USGS)
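The core of the rank correlation technique named above, the seasonal Kendall S statistic, can be sketched as follows. This is a minimal illustration (a full analysis also computes the variance of S for a significance test and the Sen slope); the pH values are invented, not data from the study:

```python
def seasonal_kendall_S(series):
    """series maps each season to its yearly values in time order.
    S > 0 suggests an upward trend, S < 0 a downward trend."""
    sign = lambda d: (d > 0) - (d < 0)
    S = 0
    for values in series.values():
        n = len(values)
        # compare every pair of years within the same season only,
        # so seasonal cycles do not masquerade as trends
        S += sum(sign(values[j] - values[i])
                 for i in range(n) for j in range(i + 1, n))
    return S

monthly_pH = {"Jan": [6.8, 6.9, 7.1], "Jul": [7.0, 7.2, 7.3]}  # invented
S = seasonal_kendall_S(monthly_pH)
```

Restricting comparisons to within-season pairs is the key design choice: it makes the test robust to the strong seasonality typical of monthly water-quality records.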
Performance Evaluation of NoSQL Databases: A Case Study
2015-02-01
a centralized relational database. The customer decided to consider NoSQL technologies for two specific uses, namely: the primary data store for... The choice of a particular NoSQL database imposes a specific distributed software architecture and data model, and is a major determinant of the
Adaptive Robust Estimation of Location and Scale Parameters of Symmetric Populations.
1978-09-01
theses, which reported the results of a small Monte Carlo study of the performances of various estimators. The authors wish to thank Lt Michael... Leon (1972). The Method of Least Squares and Some Alternatives. ARL TR 72-129, Aerospace Research Laboratories, Wright-Patterson Air Force...
Operation TUMBLER-SNAPPER 1952
1982-06-14
coordinates are used in this report, as seen in table 1-1. The first three digits refer to a point on an east-west axis, and the second three digits...
Tribology of Langmuir-Blodgett Films
1992-03-01
polymeric systems and the use of Langmuir-Blodgett films as lubricants. (Interim Technical Report; keywords: tribology, Langmuir-Blodgett films.)
1978-06-01
responses (AER) with a photo-optico-electronic method. Habituation was obtained by binaural stimulation with monotonously repeated clicks of a constant... Bonsper, D. E. The effect of increased information processing on sinus arrhythmia and heart beat. Monterey, California: Naval Postgraduate School, September 1970. This experiment investigated the effect of increased information processing on heart beat and sinus arrhythmia. A
2002-01-01
Beginning with the December 2002 issue of the Energy Information Administration's Short-Term Energy Outlook (STEO), electricity generation and related fuel consumption totals will be presented on a basis that is consistent with the definitions and aggregates used in the 2001 edition of EIA's Annual Energy Review (AER). Particularly affected by these changes are the demand and balancing item totals for natural
NASA Astrophysics Data System (ADS)
Marenco, Franco; Ryder, Claire; Estellés, Victor; Segura, Sara; Amiridis, Vassilis; Proestakis, Emmanouil; Marinou, Eleni; Tsekeri, Alexandra; Smith, Helen; Ulanowski, Zbigniew; O'Sullivan, Debbie; Brooke, Jennifer; Pradhan, Yaswant; Buxmann, Joelle
2018-04-01
In August 2015, the AER-D campaign made use of the FAAM research aircraft based in Cape Verde, and targeted mineral dust. First results will be shown here. The campaign had multiple objectives: (1) lidar dust mapping for the validation of satellite and model products; (2) validation of sunphotometer remote sensing with airborne measurements; (3) coordinated measurements with the CATS lidar on the ISS; (4) radiative closure studies; and (5) the validation of a new model of dustsonde.
NASA Technical Reports Server (NTRS)
Tanaka, Kenneth L.
1986-01-01
A global stratigraphy of Mars was developed from a global geologic map series derived from Viking images; the stratigraphy is composed of three maps. A new chronostratigraphic classification system which consists of lower, middle, and upper Noachian, Hesperian, and Amazonian systems is described. The crater-density boundaries of the chronostratigraphic units and the absolute ages of the Martian epochs are estimated. The relative ages of major geologic units and features are calculated and analyzed. The geologic history of Mars is summarized on the maps in terms of epochs.
Hydrogen Environment Assisted Cracking of Ultra-High Strength AerMet(Trademark) 100 Steel
2006-01-01
landing gear. IV.B. Effect of Steel Composition on Intergranular HEAC. Instances of intergranular HEAC and IHAC in AerMet(TM) 100 were rarely observed in the... fit the H concentration effect with that of the other elements. While the Maraging and Custom 465(TM) steels are relatively pure, the H concentration... to -0.9 VSCE and increased cracking at more anodic and more cathodic potentials. Similar effects were noted for HEAC of 18Ni Maraging steel, with
Megias, Daniel; Phillips, Mark; Clifton-Hadley, Laura; Harron, Elizabeth; Eaton, David J; Sanghera, Paul; Whitfield, Gillian
2017-03-01
The HIPPO trial is a UK randomized Phase II trial of hippocampal sparing (HS) vs conventional whole-brain radiotherapy after surgical resection or radiosurgery in patients with favourable prognosis with 1-4 brain metastases. Each participating centre completed a planning benchmark case as part of the dedicated radiotherapy trials quality assurance programme (RTQA), promoting the safe and effective delivery of HS intensity-modulated radiotherapy (IMRT) in a multicentre trial setting. Submitted planning benchmark cases were reviewed using visualization for radiotherapy software (VODCA) evaluating plan quality and compliance in relation to the HIPPO radiotherapy planning and delivery guidelines. Comparison of the planning benchmark data highlighted a plan specified using dose to medium as an outlier by comparison with those specified using dose to water. Further evaluation identified that the reported plan statistics for dose to medium were lower as a result of the dose calculated at regions of PTV inclusive of bony cranium being lower relative to brain. Specification of dose to water or medium remains a source of potential ambiguity and it is essential that as part of a multicentre trial, consideration is given to reported differences, particularly in the presence of bone. Evaluation of planning benchmark data as part of an RTQA programme has highlighted an important feature of HS IMRT dosimetry dependent on dose being specified to water or medium, informing the development and undertaking of HS IMRT as part of the HIPPO trial. Advances in knowledge: The potential clinical impact of differences between dose to medium and dose to water are demonstrated for the first time, in the setting of HS whole-brain radiotherapy.
Cleanliness audit of clinical surfaces and equipment: who cleans what?
Anderson, R E; Young, V; Stewart, M; Robertson, C; Dancer, S J
2011-07-01
Current guidelines recommend regular cleaning of clinical equipment. We monitored items on a surgical ward for predominant user, hand-touch frequency, cleaning responsibilities and measurement of organic soil. Equipment was assessed in triplicate against a cleanliness benchmark of 100 relative light units (RLU) using the Hygiena® ATP system. There were 44 items, of which 21 were cleaned by clinical support workers (CSWs), five by domestic staff, three by nurses, three by doctors, and 12 with no designated cleaning responsibility. Geometric mean RLUs ranged from 60 to 550/100 cm² for small items such as hand-gel containers, bed control, blood pressure cuff and clinical notes, with similar values of 80-540/100 cm² RLU for larger items such as electrocardiogram machine, defibrillator, trolleys and tables. The overall geometric mean was 249/100 cm² RLU for all surfaces, with 84% (37 of 44) of items exceeding the 100 RLU benchmark. Of 27 items cleaned by clinical staff, 24 (89%) failed the benchmark. Of 12 sites with no cleaning specification, 11 (92%) failed the benchmark. Three of seven 'clean' sites (<100/100 cm² RLU) were cleaned by domestic staff. Average log(10) RLU of surfaces cleaned by domestic staff was 64% lower compared with surfaces cleaned by CSWs (95% confidence interval: 35%, 80%; P=0.019). In conclusion, clinical equipment frequently demonstrates high levels of organic soil, whether or not items have assigned cleaning responsibility. These findings suggest that cleaning practices for clinical equipment may require review, along with education of staff with specific cleaning responsibilities. Copyright © 2011 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
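The benchmark arithmetic used above (geometric mean of triplicate RLU readings against a 100 RLU limit) can be sketched as follows; the readings are invented numbers, not data from the study:

```python
import math

def geometric_mean(rlus):
    """Geometric mean, the summary statistic used for RLU readings
    (which are roughly log-normally distributed)."""
    return math.exp(sum(math.log(v) for v in rlus) / len(rlus))

def passes_benchmark(rlus, limit=100.0):
    """An item counts as 'clean' if its geometric mean RLU/100 cm^2
    falls below the benchmark limit."""
    return geometric_mean(rlus) < limit

readings = [60.0, 90.0, 150.0]   # invented triplicate ATP readings for one item
clean = passes_benchmark(readings)
```

Note how the geometric mean damps the influence of a single high reading: here one reading of 150 still yields a mean near 93, so the item passes, whereas an arithmetic mean (100) would sit exactly on the limit.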
Albuquerque, Kevin; Rodgers, Kellie; Spangler, Ann; Rahimi, Asal; Willett, DuWayne
2018-03-01
The on-treatment visit (OTV) for radiation oncology is essential for patient management. Radiation toxicities recorded during the OTV may be inconsistent because of the use of free text and the lack of treatment site-specific templates. We developed a radiation oncology toxicity recording instrument (ROTOX) in a health system electronic medical record (EMR). Our aims were to assess improvement in documentation of toxicities and to develop clinic toxicity benchmarks. A ROTOX that was based on National Cancer Institute Common Terminology Criteria for Adverse Events (version 4.0) with flow-sheet functionality was developed in the EMR. Improvement in documentation was assessed at various time intervals. High-grade toxicities (ie, grade ≥ 3 by CTCAE) by site were audited to develop benchmarks and to track nursing and physician actions taken in response to these. A random sample of OTV notes from each clinic physician before ROTOX implementation was reviewed and assigned a numerical document quality score (DQS) that was based on completeness and comprehensiveness of toxicity grading. The mean DQS improved from an initial level of 41% to 99% (of the maximum possible DQS) when resampled at 6 months post-ROTOX. This high-level DQS was maintained 3 years after ROTOX implementation at 96% of the maximum. For months 7 to 9 after implementation (during a 3-month period), toxicity grading was recorded in 4,443 OTVs for 698 unique patients; 107 episodes of high-grade toxicity were identified during this period, and toxicity-specific intervention was documented in 95%. An EMR-based ROTOX enables consistent recording of treatment toxicity. In a uniform sample of patients, local population toxicity benchmarks can be developed, and clinic response can be tracked.
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. However, on true supercomputers the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence in a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
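The roughly 100-billion-history figure quoted above follows from the 1/sqrt(N) convergence of Monte Carlo tallies. The sketch below scales an invented reference run to a target accuracy; the reference numbers are assumptions for illustration, not benchmark data:

```python
def histories_needed(sigma_ref, n_ref, target_sigma):
    """Relative standard deviation of a tally scales as 1/sqrt(N), so
    N_target = N_ref * (sigma_ref / sigma_target)^2."""
    return n_ref * (sigma_ref / target_sigma) ** 2

# invented reference: a 1e9-history run gave 10% relative sd on a small pin zone
n = histories_needed(sigma_ref=0.10, n_ref=1e9, target_sigma=0.01)
# n is about 1e11, i.e. roughly 100 billion histories for 1% accuracy
```

The quadratic cost of each extra digit of accuracy is what makes the full-core pin-power problem so demanding, and why parallel scalability dominates the benchmark results.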
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J; Dossa, D; Gokhale, M
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to a GPU-accelerated version. In addition to the work reported here, a further text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io.
The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling and language classification benchmarks showed order-of-magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms
NASA Astrophysics Data System (ADS)
Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François
2015-10-01
Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.
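A standard figure of merit when comparing the global scheduling algorithms such a benchmark stresses is the percent load imbalance. This is a hedged illustration of that common metric, not part of ComprehensiveBench itself; the load values are invented:

```python
def load_imbalance(loads):
    """Percent imbalance lambda = max(load) / mean(load) - 1.
    0.0 means perfectly balanced; 1.0 means the busiest resource
    carries twice the mean load."""
    mean = sum(loads) / len(loads)
    return max(loads) / mean - 1.0

balanced = load_imbalance([4.0, 4.0, 4.0, 4.0])
skewed = load_imbalance([8.0, 4.0, 4.0, 0.0])
```

A load balancer that lowers this number shortens the application's critical path, since iteration time in a bulk-synchronous application is set by the most loaded resource.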
Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krass, A.W.
2005-12-19
This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960s. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.
Shaw, B E; Chapman, J; Fechter, M; Foeken, L; Greinix, H; Hwang, W; Phillips-Johnson, L; Korhonen, M; Lindberg, B; Navarro, W H; Szer, J
2013-11-01
Safety of living donors is critical to the success of blood, tissue and organ transplantation. Structured and robust vigilance and surveillance systems exist as part of some national entities, but historically no global systems are in place to ensure conformity, harmonisation and the recognition of rare adverse events (AEs). The World Health Assembly has recently resolved to require AE/reaction (AE/R) reporting both nationally and globally. The World Marrow Donor Association (WMDA) is an international organisation promoting the safety of unrelated donors and progenitor cell products for use in haematopoietic progenitor cell (HPC) transplantation. To address this issue, we established a system for collecting, collating, analysing, distributing and reacting to serious adverse events and reactions (SAE/R) in unrelated HPC donors. The WMDA successfully instituted this reporting system with 203 SAE/R reported in 2011. The committee generated two rapid reports, reacting to specific SAE/R, resulting in practice changing policies. The system has a robust governance structure, formal feedback to the WMDA membership and transparent information flows to other agencies, specialist physicians and transplant programs and the general public.
Undergraduate nursing students' perceptions regarding factors that affect math abilities
NASA Astrophysics Data System (ADS)
Pyo, Katrina A.
2011-07-01
A review of the nursing literature reveals many undergraduate nursing students lack proficiency with the basic mathematical skills necessary for safe medication preparation and administration. Few studies exploring the phenomenon from the undergraduate nursing student perspective are reported in the nursing literature. The purpose of this study was to explore undergraduate nursing students’ perceptions of math abilities, factors that affect math abilities, the use of math in nursing, and the extent to which specific math skills were addressed throughout a nursing curriculum. Polya’s Model for Problem Solving and Bloom’s Taxonomy of Educational Objectives, Affective Domain, served as the theoretical background for the study. Qualitative and quantitative methods were utilized to obtain data from a purposive sample of undergraduate nursing students from a private university in western Pennsylvania. Participants were selected based on their proficiency level with math skills, as determined by a score on Elsevier’s HESI™ Admission Assessment (A2) Exam, Math Portion. Ten students from the “Excellent” benchmark group and eleven students from the “Needing Additional Assistance or Improvement” benchmark group participated in one-on-one, semi-structured interviews, and completed a 25-item, 4-point Likert scale survey that rated confidence levels with specific math skills and the extent to which these skills were perceived to be addressed in the nursing curriculum. Responses from the two benchmark groups were compared and contrasted. Eight themes emerged from the qualitative data. Findings related to mathematical approach and confidence levels with specific math skills were determined to be statistically significant.
Finite Element Modeling of the World Federation's Second MFL Benchmark Problem
NASA Astrophysics Data System (ADS)
Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita
2004-02-01
This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe that is placed between two yokes that are part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leaked field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for reasons of simplicity.
A CPU benchmark for protein crystallographic refinement.
Bourne, P E; Hendrickson, W A
1990-01-01
The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ are reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.
Nair, Pradeep S; John, Eugene B
2007-01-01
Aligning specific sequences against a very large number of other sequences is a central aspect of bioinformatics. With the widespread availability of personal computers in biology laboratories, sequence alignment is now often performed locally. This makes it necessary to analyse the performance of personal computers on sequence-alignment bioinformatics benchmarks. In this paper, we analyse the performance of a personal computer for the popular BLAST and FASTA sequence alignment suites. Results indicate that these benchmarks have a large number of recurring operations and use memory operations extensively. The results suggest that performance could be improved with a larger L1 cache.
Fitzpatrick, Christopher; Fleming, Fiona M.; Madin-Warburton, Matthew; Schneider, Timm; Meheus, Filip; Asiedu, Kingsley; Solomon, Anthony W.; Montresor, Antonio; Biswas, Gautam
2016-01-01
Background Advocacy around mass treatment for the elimination of selected Neglected Tropical Diseases (NTDs) has typically put the cost per person treated at less than US$ 0.50. Whilst useful for advocacy, the focus on a single number misrepresents the complexity of delivering “free” donated medicines to about a billion people across the world. We perform a literature review and meta-regression of the cost per person per round of mass treatment against NTDs. We develop a web-based software application (https://healthy.shinyapps.io/benchmark/) to calculate setting-specific unit costs against which programme budgets and expenditures or results-based pay-outs can be benchmarked. Methods We reviewed costing studies of mass treatment for the control, elimination or eradication of lymphatic filariasis, schistosomiasis, soil-transmitted helminthiasis, onchocerciasis, trachoma and yaws. These are the main 6 NTDs for which mass treatment is recommended. We extracted financial and economic unit costs, adjusted to a standard definition and base year. We regressed unit costs on the number of people treated and other explanatory variables. Regression results were used to “predict” country-specific unit cost benchmarks. Results We reviewed 56 costing studies and included in the meta-regression 34 studies from 23 countries and 91 sites. Unit costs were found to be very sensitive to economies of scale, and the decision of whether or not to use local volunteers. Financial unit costs are expected to be less than 2015 US$ 0.50 in most countries for programmes that treat 100 thousand people or more. However, for smaller programmes, including those in the “last mile”, or those that cannot rely on local volunteers, both economic and financial unit costs are expected to be higher. Discussion The available evidence confirms that mass treatment offers a low cost public health intervention on the path towards universal health coverage. 
However, more costing studies focussed on elimination are needed. Unit cost benchmarks can help in monitoring value for money in programme plans, budgets and accounts, or in setting a reasonable pay-out for results-based financing mechanisms. PMID:27918573
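The core statistical idea of the study above, regressing log unit cost on log programme size and then "predicting" a setting-specific benchmark, can be sketched as follows. The data points and fitted coefficients below are invented for illustration only; the paper's actual regression used many more covariates across 34 studies:

```python
# Hedged sketch: ordinary least squares of log(unit cost) on
# log(programme size), in closed form, then a benchmark prediction.
# All numbers are illustrative, not from the meta-regression.
import math

def fit_loglog(sizes, costs):
    xs = [math.log(s) for s in sizes]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope       # intercept, slope

def predict_unit_cost(intercept, slope, size):
    return math.exp(intercept + slope * math.log(size))

# illustrative: larger programmes show lower unit costs (economies of scale)
sizes = [10_000, 50_000, 100_000, 500_000, 1_000_000]
costs = [1.20, 0.75, 0.55, 0.35, 0.30]  # US$ per person treated
a, b = fit_loglog(sizes, costs)
print(predict_unit_cost(a, b, 100_000))
```

A negative slope captures the economies of scale the abstract reports: programmes treating 100 thousand people or more fall below the US$ 0.50 advocacy figure, while smaller ones do not.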
Tremblay, Marlène; Hess, Justin P; Christenson, Brock M; McIntyre, Kolby K; Smink, Ben; van der Kamp, Arjen J; de Jong, Lisanne G; Döpfer, Dörte
2016-07-01
Automatic milking systems (AMS) are implemented in a variety of situations and environments. Consequently, there is a need to characterize individual farming practices and regional challenges to streamline management advice and objectives for producers. Benchmarking is often used in the dairy industry to compare farms by computing percentile ranks of the production values of groups of farms. Grouping for conventional benchmarking is commonly limited to the use of a few factors such as farms' geographic region or breed of cattle. We hypothesized that herds' production data and management information could be clustered in a meaningful way using cluster analysis and that this clustering approach would yield better peer groups of farms than benchmarking methods based on criteria such as country, region, breed, or breed and region. By applying mixed latent-class model-based cluster analysis to 529 North American AMS dairy farms with respect to 18 significant risk factors, 6 clusters were identified. Each cluster (i.e., peer group) represented unique management styles, challenges, and production patterns. When compared with peer groups based on criteria similar to the conventional benchmarking standards, the 6 clusters better predicted milk produced (kilograms) per robot per day. Each cluster represented a unique management and production pattern that requires specialized advice. For example, cluster 1 farms were those that recently installed AMS robots, whereas cluster 3 farms (the most northern farms) fed high amounts of concentrates through the robot to compensate for low-energy feed in the bunk. In addition to general recommendations for farms within a cluster, individual farms can generate their own specific goals by comparing themselves to farms within their cluster. This is very comparable to benchmarking but adds the specific characteristics of the peer group, resulting in better farm management advice. 
The improvement cluster analysis provides comes from its multivariable approach and from the fact that comparisons between production units can be made either within a cluster or between clusters, as appropriate. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
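The benchmarking step the abstract describes, percentile-ranking a farm's production within its peer group, can be sketched as below. The cluster labels and production figures are invented toy data; the study derived its six clusters with mixed latent-class models, which this sketch does not reproduce:

```python
# Hedged sketch of within-cluster benchmarking: each farm's production
# is ranked against its own peer group rather than all farms. Data are
# made-up illustration values.

def percentile_rank(value, group):
    """Percent of peer values at or below this farm's value."""
    return 100.0 * sum(v <= value for v in group) / len(group)

# farm -> (cluster label, kg milk per robot per day), illustrative
farms = {
    "A": (1, 1500), "B": (1, 1700), "C": (1, 1600),
    "D": (3, 1900), "E": (3, 2100), "F": (3, 2000),
}

def benchmark(farm):
    cluster, production = farms[farm]
    peers = [p for c, p in farms.values() if c == cluster]
    return percentile_rank(production, peers)

print(benchmark("B"))   # rank of farm B within its own cluster only
```

Ranking within a cluster rather than within a country or region is exactly what lets the comparison reflect the peer group's shared management style.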
Lang, Carrie L; Simon, Diane; Kilgore, Jane
The American College of Surgeons Committee on Trauma revised the Resources for Optimal Care of the Injured Patient to include criteria requiring trauma centers to participate in a risk-adjusted benchmarking system. The Trauma Quality Improvement Program is currently the risk-adjusted benchmarking program sponsored by the American College of Surgeons, in which all trauma centers will be required to participate by early 2017. Prior to this, there were no risk-adjusted programs for Level III verified trauma centers. The Ohio Society of Trauma Nurse Leaders is a collaborative group made up of trauma program managers, coordinators, and other trauma leaders who meet 6 times a year. Within this group, a Level III Subcommittee was formed, initially to provide a place for the Level III centers to discuss issues specific to them. When the new risk-adjustment requirement became official, the subcommittee agreed to begin reporting simple data points, with the aim of risk adjusting in the future.
Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...
This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms. This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment. This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
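The EqP arithmetic the document describes can be sketched using the textbook organic-carbon partitioning relationship: the interstitial-water concentration is estimated from the sediment concentration, then expressed as toxic units against an effect concentration. The function names, unit conventions, and all numeric values below are illustrative assumptions, not values from the ESB document:

```python
# Hedged sketch of the equilibrium partitioning (EqP) calculation,
# under the assumed relationship C_iw ~= C_sed / (f_oc * K_oc).
# All numbers are invented for illustration.

def interstitial_water_conc(c_sed_ug_per_g, f_oc, k_oc_l_per_kg):
    """C_iw (ug/L) from sediment conc (ug/g dry wt), organic-carbon
    fraction f_oc, and organic-carbon partition coefficient K_oc."""
    return c_sed_ug_per_g * 1000.0 / (f_oc * k_oc_l_per_kg)

def toxic_units(c_iw_ug_per_l, effect_conc_ug_per_l):
    """TU > 1 suggests likely adverse effects on benthic organisms."""
    return c_iw_ug_per_l / effect_conc_ug_per_l

c_iw = interstitial_water_conc(10.0, 0.02, 1.0e5)   # -> 5.0 ug/L
print(toxic_units(c_iw, 10.0))                      # -> 0.5
```

Because the calculation normalizes by the sediment's organic-carbon content, the same chemical concentration yields different bioavailable fractions in different sediments, which is the rationale for the EqP approach given in the document.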
Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6
Kulesza, Joel A.; Martz, Roger Lee
2017-03-01
Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2% and on a specific reaction- and dosimetry-location basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.
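The comparison metric used above, the calculation-to-experiment (C/E) ratio per dosimetry reaction, is simple enough to sketch directly. The reaction names and reaction-rate values below are invented placeholders, not the paper's results:

```python
# Hedged sketch of a C/E comparison table: ratios near 1.0 indicate
# agreement; a tolerance check flags reactions outside 10%.
# Values are illustrative only.

def c_over_e(calculated, measured):
    return {k: calculated[k] / measured[k] for k in calculated}

calc = {"Ni-58(n,p)": 1.04e-12, "In-115(n,n')": 9.7e-13}
meas = {"Ni-58(n,p)": 1.00e-12, "In-115(n,n')": 1.00e-12}
ratios = c_over_e(calc, meas)
within_10pct = all(abs(r - 1.0) <= 0.10 for r in ratios.values())
print(ratios, within_10pct)
```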
Real-time classification and sensor fusion with a spiking deep belief network.
O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael
2013-01-01
Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.
Silva-Stenico, M E; Rigonato, J; Leal, M G; Vaz, M G M V; Andreote, A P D; Fiore, M F
2012-01-01
Cyanobacteria possess the ability to produce compounds with remarkable biological activity, and have thus attracted the attention of the pharmaceutical industry. Cyanopeptides acting as protease inhibitors have shown potential in the field of pharmacotherapy through regulation of abnormal physiological processes in the human body. Despite the already described cyanopeptide protease inhibitors, the search for new congeners is of considerable interest and may pave the way for more efficient molecules. In this study, the presence of the protease inhibitors aeruginosin and cyanopeptolin with non-, mono- and dichlorination, and also of genes coding for their synthetases, was investigated in 90 cyanobacterial strains. Mass spectrometry analyses highlighted production of 91, 19 and 3 non-, mono- and dichlorinated congeners, respectively. The purified extract of Microcystis botrys SPC759 inhibited 61% of pepsin protease. PCR amplifications of aeruginosin and cyanopeptolin synthetase gene regions were observed in 41 and 28% of evaluated strains, respectively. The sequences obtained for the aerA-aerB (aeruginosin) and mcnC-mcnE (cyanopeptolin) gene regions grouped together with their homologues found in other cyanobacterial strains in the phylogenetic analyses, with high bootstrap support. Antimicrobial activity assays performed using all intracellular extracts inhibited 31 and 26% of Gram-negative and Gram-positive pathogenic bacterial growth, respectively. The results of this study showed the production of aeruginosin and cyanopeptolin and the presence of their genes in several cyanobacterial genera for the first time, in addition to the discovery of novel congeners.
Perthes Disease: The Quality and Reliability of Information on the Internet.
Nassiri, Mujtaba; Bruce-Brand, Robert A; O'Neill, Francis; Chenouri, Shojaeddin; Curtin, Paul
2015-01-01
Research has shown that up to 89% of parents used the Internet to seek health information regarding their child's medical condition. Much of the information on the Internet is valuable; however, the quality of health information is variable and unregulated. The aim of this study was to evaluate the quality and content of information about Perthes disease on the Internet using recognized scoring systems and identification of quality markers, and to describe a novel Perthes-specific score. We searched the top 3 search engines (Google, Yahoo!, and Bing) for the following keywords: "Perthes disease." Forty-five unique Web sites were identified. The Web sites were then categorized by type and assessed using the DISCERN score, the Journal of the American Medical Association (JAMA) benchmark criteria, and a novel Perthes-specific Content score. The presence of the Health On the Net (HON) code, a reported quality assurance marker, was noted. Of the Web sites analyzed, the majority were Governmental and Nonprofit Organization (NPO) Web sites (37.8%), followed by commercial Web sites (22.2%). Only 6 of the Web sites were HONcode certified. The mean DISCERN score was 53.1 (SD=9.0). The Governmental and NPO Web sites had the highest overall DISCERN scores, followed closely by Physician Web sites. The mean JAMA benchmark criteria score was 2.1 (SD=1.2). Nine Web sites had maximal scores, and the Academic Web sites had the highest overall JAMA benchmark scores. DISCERN scores, JAMA benchmark scores, and Perthes-specific Content scores were all greater for Web sites that bore the HONcode seal. The quality of information available online regarding Perthes disease is variable. Governmental and NPO Web sites predominate and also provide higher quality content. The HONcode seal is a reliable indicator of Web site quality. Physicians should recommend the HONcode seal to their patients as a reliable indicator of Web site quality or, better yet, refer patients to sites they have personally reviewed. 
Supplying parents with a guide to health information on the Internet will help exclude Web sites as sources of misinformation.
Benchmarking of Typical Meteorological Year datasets dedicated to Concentrated-PV systems
NASA Astrophysics Data System (ADS)
Realpe, Ana Maria; Vernay, Christophe; Pitaval, Sébastien; Blanc, Philippe; Wald, Lucien; Lenoir, Camille
2016-04-01
Accurate analysis of meteorological and pyranometric data for long-term analysis is the basis of decision-making for banks and investors regarding solar energy conversion systems. This has led to the development of methodologies for the generation of Typical Meteorological Year (TMY) datasets. The most widely used method for solar energy conversion systems was proposed in 1978 by the Sandia Laboratory (Hall et al., 1978), considering a specific weighted combination of different meteorological variables, notably global and diffuse horizontal and direct normal irradiances, air temperature, wind speed and relative humidity. In 2012, a new approach was proposed in the framework of the European FP7 project ENDORSE. It introduced the concept of a "driver", defined by the user as an explicit function of the relevant pyranometric and meteorological variables, to improve the representativeness of TMY datasets with respect to the specific solar energy conversion system of interest. The present study aims at comparing and benchmarking different TMY datasets considering a specific Concentrated-PV (CPV) system as the solar energy conversion system of interest. Using long-term (15+ years) time series of high-quality meteorological and pyranometric ground measurements, three types of TMY datasets were generated, using the following methods: the Sandia method; a simplified driver with DNI as the only representative variable; and a more sophisticated driver. The latter takes into account the sensitivities of the CPV system with respect to the spectral distribution of the solar irradiance and wind speed. Different TMY datasets from the three methods have been generated considering different numbers of years in the historical dataset, ranging from 5 to 15 years. The comparisons and benchmarking of these TMY datasets are conducted considering the long-term time series of simulated CPV electric production as a reference. 
The results of this benchmarking clearly show that the Sandia method is not suitable for CPV systems. For these systems, TMY datasets obtained using dedicated drivers (DNI-only or the more sophisticated one) are more representative when a TMY dataset must be derived from a limited long-term meteorological record.
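The Sandia-style month selection the abstract refers to is commonly based on the Finkelstein-Schafer (FS) statistic: for each calendar month, the historical year whose empirical distribution of a weighted set of variables is closest to the long-term distribution is chosen. The sketch below illustrates that selection step only; the variable names, weights, and data are invented toy values, not the study's configuration:

```python
# Hedged sketch of FS-statistic-based typical-year selection.
# All inputs are illustrative.

def fs_statistic(candidate, long_term):
    """Mean absolute difference between the candidate's empirical CDF
    and the long-term empirical CDF, evaluated at the long-term values."""
    def cdf(xs, v):
        return sum(x <= v for x in xs) / len(xs)
    return sum(abs(cdf(candidate, v) - cdf(long_term, v))
               for v in long_term) / len(long_term)

def pick_typical_year(per_year, weights):
    """per_year: {year: {variable: [daily values]}}; the year with the
    lowest weighted sum of FS statistics wins."""
    long_term = {var: [v for d in per_year.values() for v in d[var]]
                 for var in weights}
    def score(year):
        return sum(w * fs_statistic(per_year[year][var], long_term[var])
                   for var, w in weights.items())
    return min(per_year, key=score)

data = {
    2010: {"dni": [500, 600, 700], "wind": [2, 3, 4]},
    2011: {"dni": [300, 900, 800], "wind": [1, 5, 6]},
}
print(pick_typical_year(data, {"dni": 0.8, "wind": 0.2}))
```

A CPV-specific "driver" would replace the fixed weights with an explicit function of the variables the system is sensitive to, which is the ENDORSE idea the abstract describes.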
Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction
Laehnemann, David; Borkhardt, Arndt
2016-01-01
Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159
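One idea shared by many of the error correctors surveyed above is the k-mer spectrum: substrings of length k that occur only rarely across all reads are likely to contain sequencing errors. A toy sketch of that flagging step, with invented reads and an arbitrary coverage threshold, follows:

```python
# Hedged sketch of k-mer-spectrum error flagging. Real correctors use
# far more sophisticated thresholds and models; reads are toy data.
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspect_kmers(reads, k, min_count=2):
    """k-mers below the coverage threshold, i.e. likely error-containing."""
    counts = kmer_counts(reads, k)
    return {kmer for kmer, c in counts.items() if c < min_count}

reads = ["ACGTACGT", "ACGTACGT", "ACGTACCT"]   # last read has one error
print(suspect_kmers(reads, 4))                 # k-mers spanning the error
```

Because each sequencing platform has its own error profile, the thresholds and error models plugged into this step differ across tools, which is why the review emphasizes matching the corrector to the data type.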
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
Marcovecchio, M L; de Giorgis, T; Di Giovanni, I; Chiavaroli, V; Chiarelli, F; Mohn, A
2017-06-01
To evaluate whether circulating markers of endothelial dysfunction, such as intercellular adhesion molecule-1 (ICAM-1) and myeloperoxidase (MPO), are increased in youth with obesity and in those with type 1 diabetes (T1D) at similar levels, and whether their levels are associated with markers of renal function. A total of 60 obese youth [M/F: 30/30, age: 12.5 ± 2.8 yr; body mass index (BMI) z-score: 2.26 ± 0.46], 30 with T1D (M/F: 15/15; age: 12.9 ± 2.4 yr; BMI z-score: 0.45 ± 0.77), and 30 healthy controls (M/F: 15/15, age: 12.4 ± 3.3 yr, BMI z-score: -0.25 ± 0.56) were recruited. Anthropometric measurements were assessed and a blood sample was collected to measure ICAM-1, MPO, creatinine, cystatin C and lipid levels. A 24-h urine collection was obtained for assessing albumin excretion rate (AER). Levels of ICAM-1 and MPO were significantly higher in obese [ICAM-1: 0.606 (0.460-1.033) µg/mL; MPO: 136.6 (69.7-220.8) ng/mL] and T1D children [ICAM-1: 0.729 (0.507-0.990) µg/mL; MPO: 139.5 (51.0-321.3) ng/mL] compared with control children [ICAM-1: 0.395 (0.272-0.596) µg/mL; MPO: 41.3 (39.7-106.9) ng/mL], whereas no significant difference was found between T1D and obese children. BMI z-score was significantly associated with ICAM-1 (β = 0.21, p = 0.02) and MPO (β = 0.41, p < 0.001). A statistically significant association was also found between ICAM-1 and markers of renal function (AER: β = 0.21, p = 0.03; e-GFR: β = 0.19, p = 0.04), after adjusting for BMI. Obese children have increased markers of endothelial dysfunction and early signs of renal damage, similar to children with T1D, confirming obesity to be a cardiovascular risk factor, as is T1D. The association of ICAM-1 with e-GFR and AER confirms the known association between general endothelial and renal dysfunction. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Tay, Jeannie; Thompson, Campbell H; Luscombe-Marsh, Natalie D; Noakes, Manny; Buckley, Jonathan D; Wittert, Gary A; Brinkworth, Grant D
2015-11-01
To compare the long-term effects of a very low carbohydrate, high-protein, low saturated fat (LC) diet with a traditional high unrefined carbohydrate, low-fat (HC) diet on markers of renal function in obese adults with type 2 diabetes (T2DM), but without overt kidney disease.One hundred fifteen adults (BMI 34.6 ± 4.3 kg/m, age 58 ± 7 years, HbA1c 7.3 ± 1.1%, 56 ± 12 mmol/mol, serum creatinine (SCr) 69 ± 15 μmol/L, glomerular filtration rate estimated by the Chronic Kidney Disease Epidemiology Collaboration formula (eGFR 94 ± 12 mL/min/1.73 m)) were randomized to consume either an LC (14% energy as carbohydrate [CHO < 50 g/day], 28% protein [PRO], 58% fat [<10% saturated fat]) or an HC (53% CHO, 17% PRO, 30% fat [<10% saturated fat]) energy-matched, weight-loss diet combined with supervised exercise training (60 min, 3 day/wk) for 12 months. Body weight, blood pressure, and renal function assessed by eGFR, estimated creatinine clearance (Cockcroft-Gault, Salazar-Corcoran) and albumin excretion rate (AER), were measured pre- and post-intervention.Both groups achieved similar completion rates (LC 71%, HC 65%) and reductions in weight (mean [95% CI]; -9.3 [-10.6, -8.0] kg) and blood pressure (-6 [-9, -4]/-6[-8, -5] mmHg), P ≥ 0.18. Protein intake calculated from 24 hours urinary urea was higher in the LC than HC group (LC 120.1 ± 38.2 g/day, 1.3 g/kg/day; HC 95.8 ± 27.8 g/day, 1 g/kg/day), P < 0.001 diet effect. Changes in SCr (LC 3 [1, 5], HC 1 [-1, 3] μmol/L) and eGFR (LC -4 [-6, -2], HC -2 [-3, 0] mL/min/1.73 m) did not differ between diets (P = 0.25). 
AER decreased independent of diet composition (LC -2.4 [-6, 1.2], HC -1.8 [-5.4, 1.8] mg/24 h, P = 0.24); 6 participants (LC 3, HC 3) had moderately elevated AER at baseline (30-300 mg/24 h), which normalized in 4 participants (LC 2, HC 2) after 52 weeks. Compared with a traditional HC weight-loss diet, consumption of an LC high-protein diet does not adversely affect clinical markers of renal function in obese adults with T2DM and no preexisting kidney disease.
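The eGFR values above come from the CKD-EPI creatinine equation, which can be sketched in a few lines. This is the standard 2009 equation (race term omitted), not code from the study itself; the conversion from µmol/L assumes the usual factor of 88.4, and the function name is ours:

```python
def egfr_ckd_epi(scr_umol_l, age_years, female):
    """2009 CKD-EPI creatinine equation, mL/min/1.73 m^2 (race term omitted)."""
    scr = scr_umol_l / 88.4                # convert umol/L to mg/dL
    kappa = 0.7 if female else 0.9         # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411   # exponent below the threshold
    ratio = scr / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr
```

For the cohort's mean values (SCr 69 µmol/L, age 58) this lands close to the reported baseline eGFR of 94 ± 12 mL/min/1.73 m².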
Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. D. Bess; J. B. Briggs; A. S. Garcia
2011-09-01
One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and the validation of reactor designs. Participation in the benchmark process not only benefits those who use these Handbooks within the international community, but provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally, students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations as the basis for their Master's thesis in nuclear engineering.
2013-01-01
Background The objective of screening programs is to discover life-threatening diseases in as many patients as early as possible and to increase the chance of survival. To be able to compare aspects of health care quality, methods are needed for benchmarking that allow comparisons on various health care levels (regional, national, and international). Objectives Applications and extensions of algorithms can be used to link the information on disease phases with relative survival rates and to consolidate them in composite measures. The application of the developed SAS macros will give results for benchmarking of health care quality. Data examples for breast cancer care are given. Methods A reference scale (expected, E) must be defined at a time point at which all benchmark objects (observed, O) are measured. All indices are defined as O/E, whereby the extended standardized screening index (eSSI), the standardized case-mix index (SCI), the work-up index (SWI), and the treatment index (STI) address different health care aspects. The composite measures called overall performance evaluation (OPE) and relative overall performance indices (ROPI) link the individual indices differently for cross-sectional or longitudinal analyses. Results The algorithms allow time-point- and time-interval-associated comparisons of the benchmark objects in the indices eSSI, SCI, SWI, STI, OPE, and ROPI. Comparisons between countries, states, and districts are possible. As an example, comparisons between two countries are made. The success of early detection and screening programs as well as clinical health care quality for breast cancer can be demonstrated while accounting for the population's background mortality. 
Conclusions If external quality assurance programs and benchmark objects are based on population-based and corresponding demographic data, information of disease phase and relative survival rates can be combined to indices which offer approaches for comparative analyses between benchmark objects. Conclusions on screening programs and health care quality are possible. The macros can be transferred to other diseases if a disease-specific phase scale of prognostic value (e.g. stage) exists. PMID:23316692
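The index construction described in the Methods reduces to observed-to-expected ratios. A minimal sketch follows; the SAS macros themselves are not reproduced here, and the geometric-mean composite is only an illustration of how an OPE-style measure might consolidate individual indices:

```python
def oe_index(observed, expected):
    """A benchmarking index (eSSI, SCI, SWI, STI) as an O/E ratio;
    1.0 means the benchmark object matches the reference scale."""
    if expected <= 0:
        raise ValueError("expected value must be positive")
    return observed / expected

def composite(indices, weights=None):
    """Weighted geometric mean of individual indices: a hypothetical
    stand-in for a composite measure such as OPE."""
    weights = weights or [1.0 / len(indices)] * len(indices)
    out = 1.0
    for idx, w in zip(indices, weights):
        out *= idx ** w
    return out
```

With this form, an index below 1.0 in one dimension and above 1.0 in another can offset each other in the composite, which is why the abstract reports both composite (OPE, ROPI) and individual indices.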
NASA Astrophysics Data System (ADS)
Jiang, J.; Kaloti, A. P.; Levinson, H. R.; Nguyen, N.; Puckett, E. G.; Lokavarapu, H. V.
2016-12-01
We present the results of three standard benchmarks for the new active tracer particle algorithm in ASPECT. The three benchmarks are SolKz, SolCx, and SolVI (also known as the 'inclusion benchmark'), first proposed by Duretz, May, Gerya, and Tackley (G Cubed, 2011) and in subsequent work by Thielmann, May, and Kaus (Pure and Applied Geophysics, 2014). Each of the three benchmarks compares the accuracy of the numerical solution to a steady (time-independent) solution of the incompressible Stokes equations with a known exact solution. These benchmarks are specifically designed to test the accuracy and effectiveness of the numerical method when the viscosity varies by up to six orders of magnitude. ASPECT has been shown to converge to the exact solution of each of these benchmarks at the correct design rate when all of the flow variables, including the density and viscosity, are discretized on the underlying finite element grid (Kronbichler, Heister, and Bangerth, GJI, 2012). In our work we discretize the density and viscosity by initially placing the true values of the density and viscosity at the initial particle positions. At each time step, including the initialization step, the density and viscosity are interpolated from the particles onto the finite element grid. The resulting Stokes system is solved for the velocity and pressure, and the particle positions are advanced in time according to this new, numerical, velocity field. Note that this procedure effectively changes a steady solution of the Stokes equations (i.e., one that is independent of time) to a solution of the Stokes equations that is time dependent. Furthermore, the accuracy of the active tracer particle algorithm now also depends on the accuracy of the interpolation algorithm and of the numerical method one uses to advance the particle positions in time. 
Finally, we will present new interpolation algorithms designed to increase the overall accuracy of the active tracer algorithms in ASPECT and interpolation algorithms designed to conserve properties, such as mass density, that are being carried by the particles.
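The particle-to-grid step described above can be illustrated in one dimension with a cell-average interpolant and forward-Euler advection. This is a toy sketch, not ASPECT's deal.II implementation, and the function names are invented for illustration:

```python
import numpy as np

def particles_to_grid(px, pvals, edges):
    """Cell-average interpolation of a particle-carried property (e.g. density
    or viscosity) onto 1-D cells: each cell takes the arithmetic mean of the
    particles it currently contains; empty cells get NaN."""
    idx = np.clip(np.searchsorted(edges, px, side="right") - 1,
                  0, len(edges) - 2)
    sums = np.zeros(len(edges) - 1)
    counts = np.zeros(len(edges) - 1)
    np.add.at(sums, idx, pvals)      # unbuffered accumulation per cell
    np.add.at(counts, idx, 1)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

def advect(px, velocity, dt):
    """Forward-Euler update of particle positions in a given velocity field
    (ASPECT offers higher-order Runge-Kutta schemes)."""
    return px + velocity(px) * dt
```

The NaN cells make visible the failure mode the abstract alludes to: once particles move, some cells may hold no particles, so the interpolation scheme itself becomes part of the error budget.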
Rand, Hugh; Shumway, Martin; Trees, Eija K.; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E.; Defibaugh-Chavez, Stephanie; Carleton, Heather A.; Klimke, William A.; Katz, Lee S.
2017-01-01
Background As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and “known” phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results Our “outbreak” benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the “known tree” can be accurately called the “true tree”. The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. 
Discussion These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools—we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines. PMID:29372115
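A minimal reader for a tab-delimited dataset sheet of the kind described might look like the following. The column names (`sample`, `srarun_acc`) are assumptions for illustration, as is the use of sra-tools' `fasterq-dump` for retrieval; the authoritative spreadsheet format and download script live in the GitHub repository cited above:

```python
import csv
import io

def parse_dataset_sheet(tsv_text):
    """Parse a tab-delimited dataset sheet into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))

def download_commands(rows, outdir="benchmarks"):
    """Build (but do not run) one fasterq-dump command per SRA run accession."""
    return ["fasterq-dump {} -O {}".format(row["srarun_acc"], outdir)
            for row in rows]
```

Keeping the sheet as the single source of truth, and generating download commands from it, is the same design idea as the repository's automated script: anyone can fetch exactly the reads behind a published "known tree".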
Benchmarking passive transfer of immunity and growth in dairy calves.
Atkinson, D J; von Keyserlingk, M A G; Weary, D M
2017-05-01
Poor health and growth in young dairy calves can have lasting effects on their development and future production. This study benchmarked calf-rearing outcomes in a cohort of Canadian dairy farms, reported these findings back to producers and their veterinarians, and documented the results. A total of 18 Holstein dairy farms were recruited, all in British Columbia. Blood samples were collected from calves aged 1 to 7 d. We estimated serum total protein levels using digital refractometry, and failure of passive transfer (FPT) was defined as values below 5.2 g/dL. We estimated average daily gain (ADG) for preweaned heifers (1 to 70 d old) using heart-girth tape measurements, and analyzed early (≤35 d) and late (>35 d) growth separately. At first assessment, the average farm FPT rate was 16%. Overall, ADG was 0.68 kg/d, with early and late growth rates of 0.51 and 0.90 kg/d, respectively. Following delivery of the benchmark reports, all participants volunteered to undergo a second assessment. The majority (83%) made at least 1 change in their colostrum-management or milk-feeding practices, including increased colostrum at first feeding, reduced time to first colostrum, and increased initial and maximum daily milk allowances. The farms that made these changes experienced improved outcomes. On the 11 farms that made changes to improve colostrum feeding, the rate of FPT declined from 21 ± 10% before benchmarking to 11 ± 10% after making the changes. On the 10 farms that made changes to improve calf growth, ADG improved from 0.66 ± 0.09 kg/d before benchmarking to 0.72 ± 0.08 kg/d after making the management changes. Increases in ADG were greatest in the early milk-feeding period, averaging 0.13 kg/d higher than pre-benchmarking values for calves ≤35 d of age. Benchmarking specific outcomes associated with calf rearing can motivate producer engagement in calf care, leading to improved outcomes for calves on farms that apply relevant management changes. 
Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
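The two outcomes benchmarked above are straightforward to compute from field measurements. A sketch using the study's 5.2 g/dL failure-of-passive-transfer cutoff (the function names are ours):

```python
def fpt_rate(serum_total_protein_g_dl, cutoff=5.2):
    """Fraction of sampled calves with failure of passive transfer,
    defined as serum total protein below the cutoff (g/dL)."""
    values = list(serum_total_protein_g_dl)
    return sum(v < cutoff for v in values) / len(values)

def average_daily_gain(weight1_kg, weight2_kg, age1_d, age2_d):
    """Average daily gain (kg/d) between two heart-girth weight estimates."""
    return (weight2_kg - weight1_kg) / (age2_d - age1_d)
```

Computing these per farm and reporting each farm's value against the cohort distribution is the essence of the benchmark reports the producers received.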
Facility Energy Performance Benchmarking in a Data-Scarce Environment
2017-08-01
environment, and analyze occupant-, system-, and component-level faults contributing to energy inefficiency. A methodology for developing DoD-specific...Research, Development, Test, and Evaluation (RDTE) Program to develop an intelligent framework, encompassing methodology and modeling, that...energy performers by installation, climate zone, and other criteria. A methodology for creating the DoD-specific EUIs would be an important part of a
ERIC Educational Resources Information Center
Storkel, Holly L.; Komesidou, Rouzana; Fleming, Kandace K.; Romine, Rebecca Swinburne
2017-01-01
Purpose: The goal of this study was to provide guidance to clinicians on early benchmarks of successful word learning in an interactive book reading treatment and to examine how encoding and memory evolution during treatment contribute to word learning outcomes by kindergarten children with specific language impairment (SLI). Method: Twenty-seven…
ERIC Educational Resources Information Center
Rice, Mabel L.; Redmond, Sean M.; Hoffman, Lesa
2006-01-01
Purpose: Although mean length of utterance (MLU) is a useful benchmark in studies of children with specific language impairment (SLI), some empirical and interpretive issues are unresolved. The authors report on 2 studies examining, respectively, the concurrent validity and temporal stability of MLU equivalency between children with SLI and…
NASA Astrophysics Data System (ADS)
Mascioli, Nora R.
Extreme temperatures, heat waves, heavy rainfall events, drought, and extreme air pollution events have adverse effects on human health, infrastructure, agriculture and economies. The frequency, magnitude and duration of these events are expected to change in the future in response to increasing greenhouse gases and decreasing aerosols, but future climate projections are uncertain. A significant portion of this uncertainty arises from uncertainty in the effects of aerosol forcing: to what extent were the effects from greenhouse gases masked by aerosol forcing over the historical observational period, and how much will decreases in aerosol forcing influence regional and global climate over the remainder of the 21st century? The observed frequency and intensity of extreme heat and precipitation events have increased in the U.S. over the latter half of the 20th century. Using aerosol only (AER) and greenhouse gas only (GHG) simulations from 1860 to 2005 in the GFDL CM3 chemistry-climate model, I parse apart the competing influences of aerosols and greenhouse gases on these extreme events. I find that small changes in extremes in the "all forcing" simulations reflect cancellations between the effects of increasing anthropogenic aerosols and greenhouse gases. In AER, extreme high temperatures and the number of days with temperatures above the 90th percentile decline over most of the U.S., while in GHG high temperature extremes increase over most of the U.S. The spatial response patterns in AER and GHG are significantly anti-correlated, suggesting a preferred regional mode of response that is largely independent of the type of forcing. Extreme precipitation over the eastern U.S. decreases in AER, particularly in winter, and increases over the eastern and central U.S. in GHG, particularly in spring. Over the 21st century under the RCP8.5 emissions scenario, the patterns of extreme temperature and precipitation change associated with greenhouse gas forcing dominate. 
The temperature response pattern in AER and GHG is characterized by strong responses over the western U.S. and weak or opposite signed responses over the southeast U.S., raising the question of whether the observed U.S. "warming hole" could have a forced component. To address this question, I systematically examine observed seasonal temperature trends over all time periods of at least 10 years during 1901-2015. In the northeast and southern U.S., significant summertime cooling occurs from the early 1950s to the mid 1970s, which I partially attribute to increasing anthropogenic aerosol emissions (median fraction of the observed temperature trends explained is 0.69 and 0.17, respectively). In winter, the northeast and southern U.S. cool significantly from the early 1950s to the early 1990s, which I attribute to long-term phase changes in the North Atlantic Oscillation and the Pacific Decadal Oscillation. Rather than being a single phenomenon stemming from a single cause, both the warming hole and its dominant drivers vary by season, region, and time period. Finally, I examine historical and projected future changes in atmospheric stagnation. Stagnation, which is characterized by weak winds and an absence of precipitation, is a meteorological contributor to heat waves, extreme pollution, and drought. Using CM3, I show that regional stagnation trends over the historical period (1860-2005) are driven by changes in anthropogenic aerosol emissions, rather than rising greenhouse gases. In the northeastern and central United States, aerosol-induced changes in surface and upper level winds produce significant decreases in the number of stagnant summer days, while decreasing precipitation in the southeast US increases the number of stagnant summer days. Outside of the U.S., significant drying over eastern China in response to rising aerosol emissions contributed to increased stagnation during 1860-2005. 
Additionally, this region was found to be particularly sensitive to changes in local aerosol emissions, indicating that decreasing Chinese emissions in efforts to improve air quality will also decrease stagnation. In Europe, I find a dipole response pattern during the historical period wherein stagnation decreases over southern Europe and increases over northern Europe in response to global increases in aerosol emissions. In the future, declining aerosol emissions will likely lead to a reversal of the historical stagnation trends, with increasing greenhouse gases again playing a secondary role. Aerosols have a significant effect on a number of societally important extreme events, including heat waves, intense rainfall events, drought, and stagnation. Further, uncertainty in the strength of aerosol masking of historical greenhouse gas forcing is a significant source of spread in future climate projections. Quantifying these aerosol effects is therefore critical for our ability to accurately project and prepare for future changes in extreme events.
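The extreme-heat metric used throughout this work, the number of days above the 90th percentile, can be sketched as follows. This simplification uses a single pooled threshold, whereas climate analyses typically compute the percentile per calendar day from a baseline window:

```python
import numpy as np

def extreme_day_count(temps, baseline, pct=90):
    """Count days in `temps` exceeding the given percentile of a
    baseline period (a pooled-threshold simplification)."""
    threshold = np.percentile(np.asarray(baseline, dtype=float), pct)
    return int(np.sum(np.asarray(temps, dtype=float) > threshold))
```

Applied separately to the AER and GHG simulations, a count like this is what makes the opposing aerosol and greenhouse-gas signals in heat extremes directly comparable.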
SITE-SPECIFIC MEASUREMENTS OF RESIDENTIAL RADON PROTECTION CATEGORY
The report describes a series of benchmark measurements of soil radon potential at seven Florida sites and compares the measurements with regional estimates of radon potential from the Florida radon protection map. The measurements and map were developed under the Florida Radon R...
Lightweight Specifications for Parallel Correctness
2012-12-05
Galenson, Benjamin Hindman, Thibaud Hottelier, Pallavi Joshi, Benjamin Lipshitz, Leo Meyerovich, Mayur Naik, Chang-Seo Park, and Philip Reames — many...violating executions. We discuss some of these errors in detail in the CHAPTER 5. SPECIFYING AND CHECKING SEMANTIC ATOMICITY 84 Benchmark Approx. LoC
Lepak, Jesse M.; Hooten, Mevin B.; Eagles-Smith, Collin A.; Tate, Michael T.; Lutz, Michelle A.; Ackerman, Joshua T.; Willacker, James J.; Jackson, Allyson K.; Evers, David C.; Wiener, James G.; Pritz, Colleen Flanagan; Davis, Jay
2016-01-01
Fish represent high quality protein and nutrient sources, but Hg contamination is ubiquitous in aquatic ecosystems and can pose health risks to fish and their consumers. Potential health risks posed to fish and humans by Hg contamination in fish were assessed in western Canada and the United States. A large compilation of inland fish Hg concentrations was evaluated in terms of potential health risk to the fish themselves, health risk to predatory fish that consume Hg contaminated fish, and to humans that consume Hg contaminated fish. The probability that a fish collected from a given location would exceed a Hg concentration benchmark relevant to a health risk was calculated. These exceedance probabilities and their associated uncertainties were characterized for fish of multiple size classes at multiple health-relevant benchmarks. The approach was novel and allowed for the assessment of the potential for deleterious health effects in fish and humans associated with Hg contamination in fish across this broad study area. Exceedance probabilities were relatively common at low Hg concentration benchmarks, particularly for fish in larger size classes. Specifically, median exceedances for the largest size classes of fish evaluated at the lowest Hg concentration benchmarks were 0.73 (potential health risks to fish themselves), 0.90 (potential health risk to predatory fish that consume Hg contaminated fish), and 0.97 (potential for restricted fish consumption by humans), but diminished to essentially zero at the highest benchmarks and smallest fish size classes. Exceedances of benchmarks are likely to have deleterious health effects on fish and limit recommended amounts of fish humans consume in western Canada and the United States. Results presented here are not intended to subvert or replace local fish Hg data or consumption advice, but provide a basis for identifying areas of potential health risk and developing more focused future research and monitoring efforts.
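The central quantity in this assessment, the probability that a fish from a given location exceeds a health-relevant benchmark, can be illustrated with an empirical estimate plus a bootstrap interval. The study itself characterized uncertainty with a more sophisticated statistical model; this is only a sketch of the idea:

```python
import numpy as np

def exceedance_probability(fish_hg, benchmark, n_boot=2000, seed=0):
    """Empirical probability that a sampled fish exceeds a Hg benchmark,
    with a bootstrap 95% interval as a rough uncertainty measure."""
    x = np.asarray(fish_hg, dtype=float)
    p = float(np.mean(x > benchmark))
    rng = np.random.default_rng(seed)
    boots = [np.mean(rng.choice(x, size=x.size) > benchmark)
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return p, (float(lo), float(hi))
```

Evaluating this for each fish size class against each benchmark reproduces the pattern the abstract reports: high exceedance for large fish at low benchmarks, shrinking toward zero for small fish and stringent benchmarks.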
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.
2001-01-01
We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.
1990-09-01
experience in using both the KC-135A/E/R database model and other mathematical models. A statistical analysis of survey ... will be ... statistic. Consequently, differences of opinion among respondents will be amplified. Summary The research methodology provides a sequential set of ... Cost Accounting Directorate (AFLC/ACC). Though used for cost accounting purposes, the VAMOSC system has the capability of cross-referencing a WUC
NASA Technical Reports Server (NTRS)
Rodriquez, Jose M.; Hu, Wenjie; Ko, Malcolm K. W.
1995-01-01
We proposed model-data intercomparison studies for UARS data. In the past three months, we have been working on constructing analysis tools to diagnose the UARS data. The 'trajectory mapping' technique, developed by Morris (1994), can be adapted to generate synoptic maps of trace gas data from asynoptic observations. An in-house trajectory model (kinematic methods following Merrill et al., 1986 and Pickering et al., 1994) has been developed at AER under contract with NASA/ACMAP, and the trajectory mapping tool has been applied to analyze UARS measurements.
1973-10-26
[Garbled table residue: USAF ETAC / Air Weather Service ceiling-versus-visibility climatology for an air base in Thailand, giving percentage frequencies of weather and obstructions to vision (smoke, blowing dust, drizzle, sleet, hail); the tabular data itself is unrecoverable.]
1984-08-01
energy-saving propulsion systems for tracked all-terrain vehicles with extremely high mobility. Among many proposed ideas, those of hybrid-electric propulsion systems are dominant. Hybrid-electric propulsion systems are hybrids in which at least one of the energy stores, sources or converters can ... Modern newly designed energy-saving hybrid-electric propulsion systems for tracked all-terrain vehicles are
1983-10-01
a departure which will invariably be late, sometimes 4 to 6 hours late. Other, more minor difficulties were encountered due to a lack of facility in... Lake and then through the connecting canals and streams to these other sites. Hence, the releases at Lake Alice probably played a minor role as a... philoxeroides (Mart.) Griseb.; Cabomba, Cabomba caroliniana Gray; Chara, Chara spp.; Duckweed, Lemna spp.; Hydrilla, Hydrilla verticillata Royle; Hygrophila, Hygrophila
Diagnosis of brucellosis by using blood cultures.
Ruiz, J; Lorente, I; Pérez, J; Simarro, E; Martínez-Campos, L
1997-01-01
The performances of three blood culture systems, Hemoline performance diphasic medium (bioMérieux, Marcy l'Etoile, France), Bactec Plus Aerobic/F* (Becton Dickinson, Paramus, N.J.), and Vital Aer (bioMérieux), were compared for the diagnosis of 17 cases of brucellosis. By using a 5-day incubation protocol, positive results were 52.9, 82.4, and 11.8%, respectively. When the protocol was extended to 7 days, the results were 76.5, 94.1, and 47.1%, respectively. Bactec was the fastest system (P < 0.05). PMID:9276429
1979-03-01
mu s ’ uu d uuug h !ru ’ .t ’I I t ul ’ is ’ us i is - Jack Bi iuii l i _ iri: graslua tesi lu t t~ ~—l . t s ’s’c’i ’ iuuug .1I).it lis’i~u m it t...this ’ sps’e’ t t t t . - tds , mru cc il Smrategi c Au r-1~m emmu c hueel %lis’ui lu,’ I AS \\ It t ) , uueedh s t u t ’ t h ese ui utuu ’aer cis
2011-05-01
Mn, Fe, Co, Ni and Cu. Since metallic alloys for high temperature load bearing structures and thermal protection systems remain in high demand for aer...condition. These results indicate that the BCC crystal structure formed in both alloys during solidification is stable upon heating at least up to 1400 C... solidification (Fig. 5b). Higher magnification images reveal a dendritic structure in both alloys (Fig. 5c and d). Uneven Z contrast inside the grains indicates
Are Medications Involved in Vision and Intracranial Pressure Changes Seen in Spaceflight
NASA Technical Reports Server (NTRS)
Wotring, Virginia E.
2015-01-01
The Food and Drug Administration Adverse Event Reports (FDA AER) from 2009-2011 were used to create a database from millions of known and suspected medication-related adverse events among the general public. Vision changes, sometimes associated with intracranial pressure changes (VIIP), have been noted in some long-duration crewmembers. Changes in vision and blood pressure (which can subsequently affect intracranial pressure) are fairly common side effects of medications. The purpose of this study was to explore the possibility of medication involvement in crew VIIP symptoms.
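Screening an adverse-event database for VIIP-relevant reports amounts to matching reaction terms against a watch list. A sketch with an illustrative term list and record layout (neither is the study's actual query):

```python
def flag_viip_relevant(reports,
                       terms=("vision blurred",
                              "intracranial pressure increased",
                              "papilloedema")):
    """Return reports whose reaction terms match any vision/ICP term of
    interest (case-insensitive); term list and record fields are
    illustrative, not the study's."""
    flagged = []
    for report in reports:
        reactions = {r.lower() for r in report["reactions"]}
        if any(t in reactions for t in terms):
            flagged.append(report)
    return flagged
```

Joining the flagged reports back to the suspected medications then gives the candidate list of drugs whose side-effect profiles overlap the VIIP symptom cluster.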
Seeding for pervasively overlapping communities
NASA Astrophysics Data System (ADS)
Lee, Conrad; Reid, Fergal; McDaid, Aaron; Hurley, Neil
2011-06-01
In some social and biological networks, the majority of nodes belong to multiple communities. It has recently been shown that a number of the algorithms specifically designed to detect overlapping communities do not perform well in such highly overlapping settings. Here, we consider one class of these algorithms, those which optimize a local fitness measure, typically by using a greedy heuristic to expand a seed into a community. We perform synthetic benchmarks which indicate that an appropriate seeding strategy becomes more important as the extent of community overlap increases. We find that distinct cliques provide the best seeds. We find further support for this seeding strategy with benchmarks on a Facebook network and the yeast interactome.
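The seeding strategy the benchmarks favor, expanding distinct cliques under a local fitness measure, can be sketched with triangles as minimal clique seeds and a simple internal/external-degree fitness. Real algorithms of this class (e.g. LFM) use a tunable resolution exponent; this toy version omits it:

```python
from itertools import combinations

def triangle_seeds(adj):
    """Enumerate triangles (minimal distinct cliques) as seeds;
    adj maps node -> set of neighbours."""
    seeds = []
    for u in adj:
        for v, w in combinations(sorted(adj[u]), 2):
            if u < v and w in adj[v]:   # u < v < w: each triangle once
                seeds.append({u, v, w})
    return seeds

def fitness(adj, nodes):
    """Fraction of the community's edge ends that stay internal."""
    k_in = k_out = 0
    for u in nodes:
        for v in adj[u]:
            if v in nodes:
                k_in += 1
            else:
                k_out += 1
    k_in //= 2                          # internal edges counted from both ends
    total = k_in + k_out
    return k_in / total if total else 0.0

def clique_seeded_communities(adj):
    """Greedily grow each triangle seed while the fitness improves;
    overlapping communities arise naturally from overlapping seeds."""
    comms = set()
    for seed in triangle_seeds(adj):
        comm = set(seed)
        improved = True
        while improved:
            improved = False
            frontier = {v for u in comm for v in adj[u]} - comm
            for cand in sorted(frontier):
                if fitness(adj, comm | {cand}) > fitness(adj, comm):
                    comm.add(cand)
                    improved = True
        comms.add(frozenset(comm))
    return comms
```

On two triangles joined by a single bridge edge, the greedy expansion correctly refuses to merge them, because absorbing the bridge node lowers the internal-edge fraction.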
Limitations of Community College Benchmarking and Benchmarks
ERIC Educational Resources Information Center
Bers, Trudy H.
2006-01-01
This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.
Driving personalized medicine: capturing maximum net present value and optimal return on investment.
Roth, Mollie; Keeling, Peter; Smart, Dave
2010-01-01
In order for personalized medicine to meet its potential future promise, a closer focus on the work being carried out today and the foundation it will provide for that future is imperative. While big picture perspectives of this still nascent shift in the drug-development process are important, it is more important that today's work on the first wave of targeted therapies is used to build specific benchmarking and financial models against which further such therapies may be more effectively developed. Today's drug-development teams need a robust tool to identify the exact drivers that will ensure the successful launch and rapid adoption of targeted therapies, and financial metrics to determine the appropriate resource levels to power those drivers. This special report will describe one such benchmarking and financial model that is specifically designed for the personalized medicine field and will explain how the use of this or similar models can help to capture the maximum net present value of targeted therapies and help to realize optimal return on investment.
Fan Noise Prediction with Applications to Aircraft System Noise Assessment
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Envia, Edmane; Burley, Casey L.
2009-01-01
This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, as well as provides insight into the capabilities of the tools in predicting the fan noise source and subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits/costs of these new methods are also compared with the existing capabilities in a current aircraft noise system prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.
Space station operating system study
NASA Technical Reports Server (NTRS)
Horn, Albert E.; Harwell, Morris C.
1988-01-01
The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVAX II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer and, to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology consisted of developing, executing, and analyzing benchmark programs and test software, and of experimenting with and analyzing specific features of the systems and compilers under study.
Stanislawski, L.V.
2009-01-01
The United States Geological Survey has been researching generalization approaches to enable multiple-scale display and delivery of geographic data. This paper presents automated methods to prune network and polygon features of the United States high-resolution National Hydrography Dataset (NHD) to lower resolutions. Feature-pruning rules, data enrichment, and partitioning are derived from knowledge of surface water, the NHD model, and associated feature specification standards. Relative prominence of network features is estimated from upstream drainage area (UDA). Network and polygon features are pruned by UDA and NHD reach code to achieve a drainage density appropriate for any less detailed map scale. Data partitioning maintains local drainage density variations that characterize the terrain. For demonstration, a 48-subbasin area of 1:24 000-scale NHD was pruned to 1:100 000-scale (100 K) and compared to a benchmark, the 100 K NHD. The coefficient of line correspondence (CLC) is used to evaluate how well pruned network features match the benchmark network. CLC values of 0.82 and 0.77 result from pruning with and without partitioning, respectively. The number of polygons that remain after pruning is about seven times that of the benchmark, but the area covered by the polygons that remain after pruning is only about 10% greater than the area covered by benchmark polygons.
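In one common formulation, the coefficient of line correspondence used for this evaluation is the matched length divided by the total of matched, omitted, and committed lengths. The sketch below is illustrative only: the real computation involves buffer-based matching of line geometries, and the lengths here are made-up numbers chosen to reproduce a CLC of 0.82.

```python
def coefficient_of_line_correspondence(matched_km, omitted_km, committed_km):
    """
    CLC = matched / (matched + omitted + committed), where
      matched   = length of pruned features coinciding with the benchmark,
      omitted   = benchmark length missing from the pruned network,
      committed = pruned length absent from the benchmark.
    Returns 1.0 for a perfect match, 0.0 for no correspondence.
    """
    total = matched_km + omitted_km + committed_km
    return matched_km / total if total else 1.0

# Hypothetical lengths: 820 km matched, 90 km omitted, 90 km committed.
clc = coefficient_of_line_correspondence(820.0, 90.0, 90.0)  # 0.82
```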
NASA Astrophysics Data System (ADS)
Rodriguez, Tony F.; Cushman, David A.
2003-06-01
With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. Quantifying the relative merits of various products is not only essential to enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies that ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion on the practical application of such systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance, provided they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design-of-experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal factor levels in a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
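A nominal-the-best Taguchi loss function, of the kind proposed here, penalizes squared deviation of a response from its target. The sketch below is generic: the loss coefficient k, the target, and the sample detection rates are hypothetical, not values from the paper.

```python
def taguchi_loss(y, target, k=1.0):
    """Nominal-the-best Taguchi loss: L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

def mean_loss(samples, target, k=1.0):
    """Expected quality loss over a set of measured responses."""
    return sum(taguchi_loss(y, target, k) for y in samples) / len(samples)

# Hypothetical watermark detection rates measured under one factor-level
# combination from an orthogonal array; the ideal rate is 1.0.
loss = mean_loss([0.90, 0.95, 1.00], target=1.0, k=100.0)
```

In a design-of-experiments setting, the factor-level combination from the orthogonal array that minimizes this mean loss would be selected.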
Assessing Ecosystem Model Performance in Semiarid Systems
NASA Astrophysics Data System (ADS)
Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.
2017-12-01
In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects, as well as the models themselves, have largely focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn (the Predictive Ecosystem Analyzer), an open-source eco-informatics toolbox well suited to creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparisons between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and a tendency for models to predict much larger carbon sources than observed. These results indicate that ecosystem models do not currently represent semiarid ecosystem processes adequately.
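The benchmark statistics mentioned, root mean square error and the correlation coefficient, can be computed from paired observed and modelled series (e.g., half-hourly NEE) as follows. This is a standard textbook sketch, not PEcAn's implementation.

```python
import math

def rmse(obs, mod):
    """Root mean square error between observed and modelled series."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, mod)) / len(obs))

def pearson_r(obs, mod):
    """Pearson correlation coefficient between two series."""
    n = len(obs)
    mo, mm = sum(obs) / n, sum(mod) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, mod))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in mod))
    return cov / (so * sm)
```

A model that tracks seasonality poorly will show low r even when its long-term mean is close to the observations, which is why both metrics are reported together.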
Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases
NASA Astrophysics Data System (ADS)
Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.
2018-01-01
We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with a Monte Carlo-type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have in some cases been benchmarked against each other. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as the model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via specific surface reaction coefficients (electron yields, sticking coefficients, etc.). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples are given regarding studies of electron power absorption modes in O2 and CF4-Ar discharges, as well as the effect of modifying the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.
Performance Monitoring of Distributed Data Processing Systems
NASA Technical Reports Server (NTRS)
Ojha, Anand K.
2000-01-01
Test and checkout systems are essential components in ensuring safety and reliability of aircraft and related systems for space missions. A variety of systems, developed over several years, are in use at NASA/KSC. Many of these systems are configured as distributed data processing systems, with the functionality spread over several multiprocessor nodes interconnected through networks. To be cost-effective, a system should take the least amount of resources and perform a given testing task in the least amount of time. There are two aspects of performance evaluation: monitoring and benchmarking. While monitoring is valuable to system administrators for operating and maintaining systems, benchmarking is important in designing and upgrading computer-based systems. These two aspects of performance evaluation are the foci of this project. This paper first discusses various issues related to software, hardware, and hybrid performance monitoring as applicable to distributed systems, and specifically to the TCMS (Test Control and Monitoring System). Next, a comparison of several probing instructions is made to show that the hybrid monitoring technique developed by NIST (the National Institute of Standards and Technology) is the least intrusive and takes only one-fourth of the time taken by software monitoring probes. In the rest of the paper, issues related to benchmarking a distributed system are discussed, and finally a prescription for developing a micro-benchmark for the TCMS is provided.
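Relative probe intrusiveness, such as the reported four-fold difference between software and hybrid monitoring, can be quantified by timing many probe invocations. The sketch below is a generic illustration (the probe body and trial count are hypothetical), not the NIST instrumentation itself.

```python
import time

def probe_overhead(probe, trials=100000):
    """Average wall-clock cost of one monitoring-probe invocation, in seconds."""
    start = time.perf_counter()
    for _ in range(trials):
        probe()
    return (time.perf_counter() - start) / trials

# A stand-in software probe: format and buffer an event record in memory.
events = []
def software_probe():
    events.append("task=tcms pid=42 state=run")
```

Comparing `probe_overhead(software_probe)` against the same measurement for a lighter-weight probe gives the kind of intrusiveness ratio the paper reports.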
Fingerprinting sea-level variations in response to continental ice loss: a benchmark exercise
NASA Astrophysics Data System (ADS)
Barletta, Valentina R.; Spada, Giorgio; Riva, Riccardo E. M.; James, Thomas S.; Simon, Karen M.; van der Wal, Wouter; Martinec, Zdenek; Klemann, Volker; Olsson, Per-Anders; Hagedoorn, Jan; Stocchi, Paolo; Vermeersen, Bert
2013-04-01
Understanding the response of the Earth to the waxing and waning of the ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to projections of future sea level trends in response to climate change. All the processes accompanying Glacial Isostatic Adjustment (GIA) can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Here we present the results of a benchmark exercise of independently developed codes designed to solve the SLE. The study involves predictions of current sea level changes due to present-day ice mass loss. In spite of the differences in the methods employed, the comparison shows that a significant number of GIA modellers can reproduce their sea-level computations within 2% for well-defined, large-scale present-day ice mass changes. Smaller and more detailed loads need further, dedicated benchmarking and high-resolution computation. This study shows how the details of the implementation and the input specifications are an important, and often underappreciated, aspect. Hence, this exercise represents a step toward assessing the reliability of sea level projections obtained with benchmarked SLE codes.
U.S. Solar Photovoltaic System Cost Benchmark: Q1 2017
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Ran; Feldman, David; Margolis, Robert
This report benchmarks U.S. solar photovoltaic (PV) system installed costs as of the first quarter of 2017 (Q1 2017). We use a bottom-up methodology, accounting for all system and project-development costs incurred during installation, to model the costs for residential, commercial, and utility-scale systems. In general, we attempt to model the typical installation techniques and business operations from an installed-cost perspective. Costs are represented from the perspective of the developer/installer; thus, all hardware costs represent the price at which components are purchased by the developer/installer, not accounting for preexisting supply agreements or other contracts. Importantly, the benchmark also represents the sales price paid to the installer; therefore, it includes profit in the cost of the hardware, along with the profit the installer/developer receives, as a separate cost category. However, it does not include any additional net profit, such as a developer fee or price gross-up, which is common in the marketplace. We adopt this approach owing to the wide variation in developer profits in all three sectors, where project pricing is highly dependent on region and project specifics such as local retail electricity rate structures, local rebate and incentive structures, competitive environment, and overall project or deal structures. Finally, our benchmarks are national averages weighted by state installed capacities.
Gelli, Aulo; Suwa, Yuko
2014-09-01
School feeding programs have been a key response to the recent food and economic crises and function to some degree in nearly every country in the world. However, school feeding programs are complex and exhibit different, context-specific models or configurations. The objective was to examine the trade-offs, including the costs and cost-efficiency, of an innovative cluster kitchen implementation model in Bangladesh using a standardized framework. A supply chain framework based on international standards was used to provide benchmarks for meaningful comparisons across models. Implementation processes specific to the program in Bangladesh were mapped against this reference to provide a basis for standardized performance measures. Qualitative and quantitative data on key metrics were collected retrospectively using semistructured questionnaires following an ingredients approach, including both financial and economic costs. Costs were standardized to a 200-feeding-day year and 700 kcal daily. The cluster kitchen model had similarities with the semidecentralized and outsourced models in the literature, the main differences involving implementation scale, scale of purchasing volumes, and frequency of purchasing. Two important features stand out in terms of implementation: the nutritional quality of the meals and the level of community involvement. The standardized full cost per child per year was US$110. Despite the nutritious content of the meals, the overall cost-efficiency in cost per nutrient output was lower than the benchmark for centralized programs, due mainly to support and start-up costs. Cluster kitchens provide an example of an innovative implementation model, combining an emphasis on quality meal delivery with strong community engagement. However, the standardized costs per child were above the average benchmarks for both low- and middle-income countries.
In contrast to the existing benchmark data from mature, centralized models, the main cost drivers of the program were associated with support and start-up activities. Further research is required to better understand changes in cost drivers as programs mature.
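The standardization to a 200-feeding-day year at 700 kcal/day can be sketched as a linear scaling of an observed cost per child. The linear-scaling assumption and the example inputs below are hypothetical, not the study's actual adjustment procedure.

```python
def standardized_annual_cost(cost_per_child, feeding_days, kcal_per_day,
                             ref_days=200, ref_kcal=700):
    """Scale an observed annual cost per child to the reference delivery of
    200 feeding days at 700 kcal/day (assumes costs scale linearly with
    both feeding days and energy content)."""
    return cost_per_child * (ref_days / feeding_days) * (ref_kcal / kcal_per_day)
```

For example, a hypothetical program delivering 150 days at 525 kcal/day for $62 per child would standardize to about $110 per child per year.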
Bereskie, Ty; Haider, Husnain; Rodriguez, Manuel J; Sadiq, Rehan
2017-08-23
Traditional approaches for benchmarking drinking water systems are binary, based solely on the compliance and/or non-compliance of one or more water quality performance indicators against defined regulatory guidelines/standards. The consequence of water quality failure is dependent on location within a water supply system as well as time of the year (i.e., season), with varying levels of water consumption. Conventional approaches used for water quality comparison purposes fail to incorporate spatiotemporal variability and degrees of compliance and/or non-compliance. This can lead to misleading or inaccurate performance assessment data used in the performance benchmarking process. In this research, a hierarchical risk-based water quality performance benchmarking framework is proposed to evaluate small drinking water systems (SDWSs) through cross-comparison amongst similar systems. The proposed framework (R WQI framework) is designed to quantify the consequence associated with seasonal and location-specific water quality issues in a given drinking water supply system, to facilitate more efficient decision-making for SDWSs striving for continuous performance improvement. Fuzzy rule-based modelling is used to address the imprecision associated with measuring performance against singular water quality guidelines/standards and the uncertainties present in SDWS operations and monitoring. The proposed R WQI framework has been demonstrated using data collected from 16 SDWSs in Newfoundland and Labrador and Quebec, Canada, and compared to the Canadian Council of Ministers of the Environment WQI, a traditional, guidelines/standards-based approach. The study found that the R WQI framework provides an in-depth picture of the state of water quality and benchmarks SDWSs more rationally, based on the frequency of occurrence and consequence of failure events.
Electric load shape benchmarking for small- and medium-sized commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xuan; Hong, Tianzhen; Chen, Yixing
2017-07-28
Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, ignoring the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data from over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques are generic and flexible for future datasets of other building types and in other utility territories.
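The core idea of load-shape benchmarking can be sketched by normalizing each building's 24-hour profile to unit energy and measuring its distance to the peer-group mean shape. The study's actual method uses cluster analysis over smart meter data, so this is a simplified stand-in with hypothetical profiles.

```python
def normalize_load_shape(hourly_kwh):
    """Scale a 24-hour profile to unit total energy so that buildings of
    different sizes are compared on shape alone."""
    total = sum(hourly_kwh)
    return [x / total for x in hourly_kwh]

def shape_distance(a, b):
    """Euclidean distance between two normalized load shapes."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def benchmark_against_peers(building, peers):
    """Distance from a building's shape to its peer group's mean shape;
    larger values flag atypical time-of-use behavior worth auditing."""
    norm_peers = [normalize_load_shape(p) for p in peers]
    n = len(norm_peers)
    mean_shape = [sum(p[h] for p in norm_peers) / n for h in range(24)]
    return shape_distance(normalize_load_shape(building), mean_shape)
```

A flat-profile building scores zero against flat-profile peers regardless of absolute consumption, while a building with an unusual midday peak scores high, even if its annual energy use intensity is unremarkable.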
Multi-Complementary Model for Long-Term Tracking
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-01-01
In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutter, motion blur, low-illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines a correlation filter model and a color model, to greatly improve tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to traditional target detection algorithms. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmark 2013 (OTB-13) and Object Tracking Benchmark 2015 (OTB-15) datasets. On the OTB-13 benchmark, our algorithm improves on the classic LCT (Long-term Correlation Tracking) algorithm by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE, and SRE, respectively. On the OTB-15 benchmark, compared with the LCT algorithm, our algorithm achieves improvements of 10.4%, 12.5%, and 16.1% on the success plots of OPE, TRE, and SRE, respectively. At the same time, it should be emphasized that, due to the high computational efficiency of the color model and of the object detection model using efficient data structures, together with the speed advantage of correlation filters, our tracking algorithm still achieves good tracking speed. PMID:29425170
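The combination of complementary models can be sketched as a weighted fusion of per-candidate scores. The weights and the linear fusion rule below are assumptions for illustration; the abstract does not specify how the three model responses are merged.

```python
def fuse_scores(cf, color, contour, weights=(0.5, 0.3, 0.2)):
    """Weighted linear fusion of the correlation-filter, color-histogram,
    and contour model scores for one candidate window."""
    w_cf, w_color, w_contour = weights
    return w_cf * cf + w_color * color + w_contour * contour

def best_candidate(candidates):
    """Pick the candidate window with the highest fused score.
    Each candidate is (id, cf_score, color_score, contour_score)."""
    return max(candidates, key=lambda c: fuse_scores(c[1], c[2], c[3]))[0]
```

The point of complementarity is visible in the toy example below: a candidate that scores moderately on all three models can beat one that dominates on a single model.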
Impact of quality circles for improvement of asthma care: results of a randomized controlled trial
Schneider, Antonius; Wensing, Michel; Biessecker, Kathrin; Quinzler, Renate; Kaufmann-Kolle, Petra; Szecsenyi, Joachim
2008-01-01
Rationale and aims: Quality circles (QCs) are well established as a means of aiding doctors. New quality improvement strategies include benchmarking activities. The aim of this paper was to evaluate the efficacy of QCs for asthma care working either with general feedback or with an open benchmark. Methods: Twelve QCs, involving 96 general practitioners, were organized in a randomized controlled trial. Six worked with traditional anonymous feedback and six with an open benchmark; both had guided discussion from a trained moderator. Forty-three primary care practices agreed to give out questionnaires to patients to evaluate the efficacy of QCs. Results: A total of 256 patients participated in the survey, of whom 185 (72.3%) responded to the follow-up 1 year later. Use of inhaled steroids at baseline was high (69%) and self-management low (asthma education 27%, individual emergency plan 8%, and peak flow meter at home 21%). Guideline adherence in drug treatment increased (P = 0.19), and asthma steps improved (P = 0.02). Delivery of individual emergency plans increased (P = 0.008), and unscheduled emergency visits decreased (P = 0.064). There was no change in asthma education and peak flow meter usage. High medication guideline adherence was associated with reduced emergency visits (OR 0.24; 95% CI 0.07–0.89). Use of theophylline was associated with hospitalization (OR 7.1; 95% CI 1.5–34.3) and emergency visits (OR 4.9; 95% CI 1.6–14.7). There was no difference between traditional and benchmarking QCs. Conclusions: Quality circles working with individualized feedback are effective at improving asthma care. The trial may have been underpowered to detect specific benchmarking effects. Further research is necessary to evaluate strategies for improving the self-management of asthma patients. PMID:18093108
International health IT benchmarking: learning from cross-country comparisons.
Zelmer, Jennifer; Ronchi, Elettra; Hyppönen, Hannele; Lupiáñez-Villanueva, Francisco; Codagnone, Cristiano; Nøhr, Christian; Huebner, Ursula; Fazzalari, Anne; Adler-Milstein, Julia
2017-03-01
The objective was to pilot benchmark measures of health information and communication technology (ICT) availability and use to facilitate cross-country learning. A prior Organization for Economic Cooperation and Development-led effort involving 30 countries selected and defined functionality-based measures for availability and use of electronic health records, health information exchange, personal health records, and telehealth. In this pilot, an Organization for Economic Cooperation and Development Working Group compiled results for 38 countries for a subset of measures with broad coverage, using new and/or adapted country-specific or multinational surveys and other sources from 2012 to 2015. We also synthesized country learnings to inform future benchmarking. While electronic records are widely used to store and manage patient information at the point of care (all but 2 pilot countries reported use by at least half of primary care physicians, and many had rates above 75%), patient information exchange across organizations/settings is less common. Large variations in the availability and use of telehealth and personal health records also exist. Pilot participation demonstrated interest in cross-national benchmarking. Using the most comparable measures available to date, it showed substantial diversity in health ICT availability and use in all domains. The project also identified methodological considerations (e.g., structural and health systems issues that can affect measurement) important for future comparisons. While health policies and priorities differ, many nations aim to increase access, quality, and/or efficiency of care through effective ICT use. By identifying variations and describing key contextual factors, benchmarking offers the potential to facilitate cross-national learning and accelerate the progress of individual countries. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Introduction to the IWA task group on biofilm modeling.
Noguera, D R; Morgenroth, E
2004-01-01
An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one-, two-, and three-dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
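For a flavour of the BM1-style flux calculation, a flat biofilm with first-order substrate uptake admits a closed-form surface flux: J = S_s * sqrt(k1 * D_e) * tanh(phi), with Thiele modulus phi = L * sqrt(k1 / D_e). This simplified first-order sketch is only illustrative; the task group's benchmark problems use Monod kinetics and, in BM2/BM3, richer geometries.

```python
import math

def substrate_flux(S_surface, D_e, k1, L):
    """Steady-state substrate flux into a flat biofilm of thickness L with
    first-order uptake rate k1 and effective diffusivity D_e, assuming no
    external mass-transfer resistance (surface concentration S_surface):
        J = S_surface * sqrt(k1 * D_e) * tanh(phi),  phi = L * sqrt(k1 / D_e)
    """
    phi = L * math.sqrt(k1 / D_e)          # Thiele modulus
    return S_surface * math.sqrt(k1 * D_e) * math.tanh(phi)
```

The two limits behave as expected: a deep biofilm (large phi) gives J ≈ S * sqrt(k1 * D_e), while a thin biofilm (small phi) gives J ≈ k1 * L * S, i.e., the whole film is fully penetrated.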
Donahue, Suzanne; DiBlasi, Robert M; Thomas, Karen
2018-02-02
To examine the practice of nebulizer cool mist blow-by oxygen administered to spontaneously breathing postanesthesia care unit (PACU) pediatric patients during Phase one recovery. Existing evidence was evaluated. Informal benchmarking documented practices in peer organizations. An in vitro study was then conducted to simulate clinical practice and determine depth and amount of airway humidity delivery with blow-by oxygen. Informal benchmarking information was obtained by telephone interview. Using a three-dimensional printed simulation model of the head connected to a breathing lung simulator, depth and amount of moisture delivery in the respiratory tree were measured. Evidence specific to PACU administration of cool mist blow-by oxygen was limited. Informal benchmarking revealed that routine cool mist oxygenated blow-by administration was not widely practiced. The laboratory experiment revealed minimal moisture reaching the mid-tracheal area of the simulated airway model. Routine use of oxygenated cool mist in spontaneously breathing pediatric PACU patients is not supported. Copyright © 2017 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing the performance gains predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
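The kind of performance model described can be sketched as an Amdahl-style speedup with an added overhead term for directive/runtime costs. The specific functional form below is an assumption for illustration, not the paper's model.

```python
def predicted_speedup(f_parallel, p, overhead=0.0):
    """Predicted speedup of a program whose fraction f_parallel is
    parallelized across p processors, with overhead (directive/runtime and
    data-locality costs) expressed as a fraction of the sequential time:
        S = 1 / ((1 - f) + f/p + overhead)
    """
    return 1.0 / ((1.0 - f_parallel) + f_parallel / p + overhead)
```

For example, with 90% of the code parallelized on 8 processors, 5% overhead drops the predicted speedup from roughly 4.7x to roughly 3.8x, which is why minimizing architecture-specific locality overhead dominates the observed gains.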
U.S. Solar Photovoltaic System Cost Benchmark: Q1 2017
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Ran; Feldman, David J.; Margolis, Robert M.
NREL has been modeling U.S. photovoltaic (PV) system costs since 2009. This year, our report benchmarks costs of U.S. solar PV for residential, commercial, and utility-scale systems built in the first quarter of 2017 (Q1 2017). Costs are represented from the perspective of the developer/installer; thus, all hardware costs represent the price at which components are purchased by the developer/installer, not accounting for preexisting supply agreements or other contracts. Importantly, the benchmark this year (2017) also represents the sales price paid to the installer; therefore, it includes profit in the cost of the hardware, along with the profit the installer/developer receives, as a separate cost category. However, it does not include any additional net profit, such as a developer fee or price gross-up, which are common in the marketplace. We adopt this approach owing to the wide variation in developer profits in all three sectors, where project pricing is highly dependent on region and project specifics such as local retail electricity rate structures, local rebate and incentive structures, competitive environment, and overall project or deal structures.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Real-time classification and sensor fusion with a spiking deep belief network
O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael
2013-01-01
Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. PMID:24115919
Bonnesen, Trine Gade; Winther, Jeanette F; Asdahl, Peter H; de Fine Licht, Sofie; Gudmundsdottir, Thorgerdur; Sällfors Holmqvist, Anna; Madanat-Harjuoja, Laura-Maria; Tryggvadottir, Laufey; Wesenberg, Finn; Birn, Henrik; Olsen, Jørgen H; Hasle, Henrik
2016-09-01
Childhood cancer has been associated with long-term risk of urinary tract diseases, but risk patterns remain to be comprehensively investigated. We analysed the lifetime risk of urinary tract diseases in survivors of childhood cancer in the Nordic countries. We identified 32,519 one-year survivors of childhood cancer diagnosed since the 1940s and 1950s in the five Nordic cancer registries and selected 211,156 population comparisons of a corresponding age, sex, and country of residence from the national population registries. To obtain information on all first-time hospitalizations for a urinary tract disease, we linked all study subjects to the national hospital registry of each country. Relative risks (RRs) and absolute excess risks (AERs) and associated 95% confidence intervals (CIs) for urinary tract diseases among cancer survivors were calculated with the appropriate morbidity rates among comparisons as reference. We observed 1645 childhood cancer survivors ever hospitalized for urinary tract disease yielding an RR of 2.5 (95% CI 2.4-2.7) and an AER of 229 (95% CI 210-248) per 100,000 person-years. The cumulative risk at age 60 was 22% in cancer survivors and 10% in comparisons. Infections of the urinary system and chronic kidney disease showed the highest excess risks, whereas survivors of neuroblastoma, hepatic and renal tumours experienced the highest RRs. Survivors of childhood cancer had an excess risk of urinary tract diseases and for most diseases the risk remained elevated throughout life. The highest risks occurred following therapy of childhood abdominal tumours. Copyright © 2016 Elsevier Ltd. All rights reserved.
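The RR and AER reported above are standard cohort measures; a minimal sketch of their computation (the event count and rates below are hypothetical, chosen only so the output matches the reported magnitudes of RR ~2.5 and AER ~229 per 100,000 person-years):

```python
def rr_and_aer(events, person_years, reference_rate):
    """Relative risk and absolute excess risk per 100,000 person-years.

    `reference_rate` is the first-hospitalization rate (events per
    person-year) in the population comparison cohort.
    """
    rate = events / person_years
    rr = rate / reference_rate
    aer = (rate - reference_rate) * 100_000   # excess events per 100,000 PY
    return rr, aer

# Hypothetical inputs chosen to reproduce the reported magnitudes:
rr, aer = rr_and_aer(events=382, person_years=100_000,
                     reference_rate=153 / 100_000)
```

In the study itself, rates were stratified by age, sex, and country before comparison; the sketch omits that adjustment.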
Changes in Conjunctival Hemodynamics Predict Albuminuria in Sickle Cell Nephropathy
Valeshabad, Ali Kord; Wanek, Justin; Saraf, Santosh L.; Gaynes, Bruce I.; Gordeuk, Victor R.; Molokie, Robert E.; Shahidi, Mahnaz
2015-01-01
Background Albuminuria is an early manifestation of deterioration in renal function in subjects with sickle cell disease (SCD). Hyperfiltration may be an early mechanism for kidney damage in SCD. The purpose of the current study was to determine the association between conjunctival hemodynamics and albuminuria in SCD subjects with preserved glomerular filtration rate. Methods Conjunctival microcirculation imaging was performed to measure conjunctival diameter (D) and axial blood velocity (V) in 35 SCD and 10 healthy control subjects. Albuminuria, defined as albumin excretion ratio (AER) was obtained from the medical charts. Based on the 95% confidence interval of conjunctival V in control subjects (0.40 - 0.60 mm/s), SCD subjects were allocated to three groups: V1 < 0.40 mm/s (N = 7), V2 of 0.40 – 0.60 mm/s (N = 18), and V3 ≥ 0.60 mm/s (N = 10). Results Mean log(AER) measurements in the V1, V2, and V3 groups were 1.08 ± 0.67 mg/g creatinine, 1.39 ± 0.59 mg/g creatinine, and 2.00 ± 0.91 mg/g creatinine, respectively, and followed a positive linear trend from the V1 to V3 groups (p = 0.01). By multivariate linear regression analysis, conjunctival V significantly correlated with albuminuria (p = 0.01) independent of age, blood pressure, α-thalassemia, hematocrit, white blood cell count, and lactate dehydrogenase concentration. Conclusions Increased conjunctival velocity is associated with albuminuria in sickle cell subjects. Assessment of conjunctival microvascular hemodynamics may improve our understanding of the pathophysiology and clinical course of sickle cell nephropathy. PMID:26278102
Inverse Design of Low-Boom Supersonic Concepts Using Reversed Equivalent-Area Targets
NASA Technical Reports Server (NTRS)
Li, Wu; Rallabhandi, Sriram
2011-01-01
A promising path for developing a low-boom configuration is a multifidelity approach that (1) starts from a low-fidelity low-boom design, (2) refines the low-fidelity design with computational fluid dynamics (CFD) equivalent-area (Ae) analysis, and (3) improves the design with sonic-boom analysis by using CFD off-body pressure distributions. The focus of this paper is on the third step of this approach, in which the design is improved with sonic-boom analysis through the use of CFD calculations. A new inverse design process for off-body pressure tailoring is formulated and demonstrated with a low-boom supersonic configuration that was developed by using the mixed-fidelity design method with CFD Ae analysis. The new inverse design process uses the reverse propagation of the pressure distribution (dp/p) from a mid-field location to a near-field location, converts the near-field dp/p into an equivalent-area distribution, generates a low-boom target for the reversed equivalent area (Ae,r) of the configuration, and modifies the configuration to minimize the differences between the configuration's Ae,r and the low-boom target. The new inverse design process is used to modify a supersonic demonstrator concept for a cruise Mach number of 1.6 and a cruise weight of 30,000 lb. The modified configuration has a fully shaped ground signature with a perceived loudness (PLdB) value of 78.5, while the original configuration has a partially shaped aft signature with a PLdB of 82.3.
Yue, Zhihua; Shi, Jinhai; Jiang, Pengli; Sun, He
2014-11-01
Little is known about the effects of drug-drug interactions between valacyclovir and non-steroidal anti-inflammatory drugs (NSAIDs). In this study, we analysed the adverse event 'acute kidney injury (AKI)' resulting from a possible interaction between loxoprofen (a non-selective NSAID) and valacyclovir in reports received by the FDA Adverse Event Reporting System (AERS) database between January 2004 and June 2012. Adverse event reports of elderly patients aged ≥65 years were included in the study. Exposure was divided into three index groups (valacyclovir only, loxoprofen only, and both drugs used concomitantly) and a reference group (neither valacyclovir nor loxoprofen used). Case/non-case AKI reports associated with these drugs were recorded and analysed by the reporting odds ratio (ROR). In total, 447,002 reports were included in the study. The ROR, adjusted for year of reporting, age, and sex, for AKI in elderly patients who used only valacyclovir or only loxoprofen, compared with elderly patients who used neither drug, was 4.6 (95%CI: 4.1-5.2) and 1.4 (95%CI: 1.2-1.6), respectively, while the adjusted ROR was 26.0 (95%CI: 19.2-35.3) when both drugs were used concomitantly. Case reports in AERS suggest that interactions between valacyclovir and loxoprofen resulting in AKI may occur, although this association needs to be analysed in more detail by other methods to determine the true strength of the relationship. Copyright © 2014 John Wiley & Sons, Ltd.
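The ROR used here is the standard case/non-case disproportionality measure; a minimal sketch of the crude calculation (the 2×2 counts below are hypothetical, not the study's actual counts, and the study additionally adjusted for reporting year, age, and sex by regression):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Crude ROR from a 2x2 table of spontaneous reports.

    a: drug + event    b: drug, no event
    c: event, no drug  d: neither
    Returns the ROR and its 95% CI (log-normal approximation).
    """
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, (lo, hi)

# Hypothetical counts: 20 AKI reports among 100 concomitant-use reports
# versus 100 AKI reports among 4,100 reports mentioning neither drug.
ror, (lo, hi) = reporting_odds_ratio(20, 80, 100, 4000)  # ror = 10.0
```

A lower CI bound above 1 is the usual signal-detection criterion in this kind of disproportionality analysis.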
Sorption, desorption, and surface oxidative fate of nicotine.
Petrick, Lauren; Destaillats, Hugo; Zouev, Irena; Sabach, Sara; Dubowski, Yael
2010-09-21
Nicotine dynamics in an indoor environment can be greatly affected by building parameters (e.g. relative humidity (RH), air exchange rate (AER), and presence of ozone), as well as surface parameters (e.g. surface area (SA) and polarity). To better understand the indoor fate of nicotine, these parameter effects on its sorption, desorption, and oxidation rates were investigated on model indoor surfaces that included fabrics, wallboard paper, and wood materials. Nicotine sorption under dry conditions was enhanced by higher SA and higher polarity of the substrate. Interestingly, nicotine sorption to cotton and nylon was facilitated by increased RH, while sorption to polyester was hindered by it. Desorption was affected by RH, AER, and surface type. The heterogeneous nicotine-ozone reaction was investigated by Fourier transform infrared spectrometry with attenuated total reflection (FTIR-ATR), and revealed a pseudo-first-order surface reaction rate of 0.035 ± 0.015 min^-1 (at [O3] = 6 ± 0.3 × 10^15 molecules cm^-3) that was partially inhibited at high RH. Extrapolation to a lower ozone level ([O3] = 42 ppb) showed oxidation on the order of 10^-5 min^-1, corresponding to a half-life of 1 week. In addition, similar surface products were identified under dry and high-RH conditions using gas chromatography-mass spectrometry (GC-MS). However, FTIR analysis revealed different product spectra for these conditions, suggesting additional unidentified products and association with surface water. Knowing the indoor fate of condensed- and gas-phase nicotine and its oxidation products will provide a better understanding of nicotine's impact on personal exposures as well as overall indoor air quality.
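The half-lives quoted above follow directly from first-order kinetics, t1/2 = ln 2 / k; a quick check (the linear scaling of the rate constant with ozone level is the standard pseudo-first-order assumption, not a detail taken from the paper):

```python
import math

def half_life_minutes(k_per_min):
    # First-order decay: t_1/2 = ln(2) / k
    return math.log(2) / k_per_min

# At the measured surface rate constant the half-life is ~20 minutes;
# at extrapolated rates on the order of 1e-4 to 1e-5 min^-1 it
# stretches to days or weeks.
t_measured = half_life_minutes(0.035)   # ≈ 19.8 min
t_low_ozone = half_life_minutes(1e-5)
```

Because the reaction is pseudo-first-order in ozone, the rate constant (and hence the half-life) scales linearly with the ambient ozone concentration.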
Zheng, Shuwei; Ng, Tong Yong; Li, Huihua; Tan, Ai Ling; Tan, Thuan Tong; Tan, Ban Hock
2016-01-01
Mortality for candidemia ranges from 15% to 35%. Current guidelines recommend inoculating blood into three aerobic and three anaerobic blood culture bottles when candidemia is suspected, without mention of a fungal blood culture bottle. To determine the value of the BACTEC Myco/F Lytic blood culture media in the diagnosis of fungemia. A two-year retrospective cross-sectional study was performed for patients who had fungemia with submitted BACTEC Plus Aerobic/F (Aer), BACTEC Plus Anaerobic/F (Anaer) or Myco/F Lytic (Myco) blood culture bottles. The detection rate of fungemia was 77.4% in 93 patients with contemporaneously submitted blood culture bottles when limited to only Aer/Anaer culture results. The detection rate improved significantly with the addition of the Myco culture bottle results (p<0.0001). A logistic regression model showed that Myco culture bottle submissions were less useful for patients with appropriate anti-fungal therapy administered within 48 hours [OR = 0.18, 95% CI = (0.06, 0.49), p = 0.001] and those with fungal growth detected within 48 hours [OR = 0.33, 95% CI = (0.12, 0.89), p = 0.001]. Among a subset of patients with concordant blood culture results, those with Myco culture bottles submission allowed earlier fungal detection and speciation by at least one day in 27.5% and 25.0% of the cases respectively. Our study highlights the importance of a dedicated fungal blood culture when fungemia is clinically suspected. Nearly a quarter of fungemias may be missed if a fungal blood culture is not performed.
Chen, Shao-Yu; Dehart, Deborah B; Sulik, Kathleen K
2004-08-01
Based on previous in vitro studies that have illustrated prevention of ethanol-induced cell death by antioxidants, using an in vivo model, we have tested the anti-teratogenic potential of a potent synthetic superoxide dismutase plus catalase mimetic, EUK-134. The developing limb of C57BL/6J mice, which is sensitive to ethanol-induced reduction defects, served as the model system. On their ninth day of pregnancy, C57BL/6J mice were administered ethanol (two intraperitoneal doses of 2.9 g/kg given 4 h apart) alone or in combination with EUK-134 (two doses of 10 mg/kg). Pregnant control mice were similarly treated with either vehicle or EUK-134, alone. Within 15 h of the initial ethanol exposure, excessive apoptotic cell death was observed in the apical ectodermal ridge (AER) of the newly forming forelimb buds. Forelimb defects, including postaxial ectrodactyly, metacarpal, and ulnar deficiencies, occurred in 67.3% of the ethanol-exposed fetuses that were examined at 18 days of gestation. The right forelimbs were preferentially affected. No limb malformations were observed in control fetuses. Cell death in the AER of embryos concurrently exposed to ethanol and EUK-134 was notably reduced compared with that in embryos from ethanol-treated dams. Additionally, the antioxidant treatment reduced the incidence of forelimb malformations to 35.9%. This work illustrates that antioxidants can significantly improve the adverse developmental outcome that results from ethanol exposure in utero, diminishing the incidence and severity of major malformations that result from exposure to this important human teratogen.
Management characteristics of beef cattle production in the western United States
USDA-ARS?s Scientific Manuscript database
A comprehensive life cycle assessment (LCA) of beef in the United States is being conducted to provide benchmarks and identify opportunities for improvement of the beef value chain. Region-specific data are being collected to accurately characterize cattle production practices. This study reports pr...
HMM-ModE: implementation, benchmarking and validation with HMMER3
2014-01-01
Background HMM-ModE is a computational method that generates family-specific profile HMMs using negative training sequences. The method optimizes the discrimination threshold using 10-fold cross validation and modifies the emission probabilities of profiles to reduce common fold-based signals shared with other sub-families. The protocol depends on the program HMMER for HMM profile building and sequence database searching. The recent release of HMMER3 has improved database search speed by several orders of magnitude, allowing for the large-scale deployment of the method in sequence annotation projects. We have rewritten our existing scripts, both at the level of parsing the HMM profiles and of modifying emission probabilities, to upgrade HMM-ModE using HMMER3, taking advantage of its probabilistic inference and high computational speed. The method is benchmarked and tested on a GPCR dataset as an accurate and fast method for functional annotation. Results The implementation of this method, which now works with HMMER3, is benchmarked against the earlier version of HMMER to show that the effect of local-local alignments is marked only in the case of profiles containing a large number of discontinuous match states. The method is tested on a gold standard set of families and we report a significant reduction in the number of false positive hits over the default HMM profiles. When implemented on GPCR sequences, the results showed an improvement in the accuracy of classification compared with other methods used to classify the family at different levels of its classification hierarchy. Conclusions The present findings show that the new version of HMM-ModE is a highly specific method for differentiating between fold (superfamily) and function (family) specific signals, which helps in the functional annotation of protein sequences.
The use of modified profile HMMs of GPCR sequences provides a simple yet highly specific method for classification of the family, being able to predict the sub-family specific sequences with high accuracy even though sequences share common physicochemical characteristics between sub-families. PMID:25073805
NASA Technical Reports Server (NTRS)
Bell, Michael A.
1999-01-01
Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built that allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper finds optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA), and simulated annealing (SA). The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned. Two different fitness functions, integral time absolute error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study of the algorithms is presented based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
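The abstract does not give the plant models or algorithm settings; as an illustrative sketch only, the code below tunes PID gains by PSO against an ITAE cost for a generic first-order plant, with all parameter values (swarm size, inertia, gain bounds, plant dynamics) chosen for demonstration rather than taken from the paper:

```python
import random

def itae_cost(kp, ki, kd, dt=0.01, t_end=5.0):
    """ITAE for a unit-step response of a first-order plant dy/dt = -y + u
    under PID control (Euler discretization; illustrative plant only)."""
    y = integ = itae = 0.0
    prev_err, t = 1.0, 0.0
    while t < t_end:
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)
        if abs(y) > 1e6:               # unstable gains: penalize heavily
            return float("inf")
        itae += t * abs(err) * dt
        prev_err = err
        t += dt
    return itae

def pso_tune(n=20, iters=40, lo=0.0, hi=10.0, seed=1):
    """Minimal PSO over (kp, ki, kd); returns the best gains and their ITAE."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(3)] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [itae_cost(*p) for p in pos]
    gi = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[gi][:], pcost[gi]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = itae_cost(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

GA and SA would plug into the same `itae_cost` objective; only the search loop changes, which is what makes this kind of head-to-head comparison straightforward.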
DRG benchmarking study establishes national coding norms.
Vaul, J H
1998-05-01
With the increase in fraud and abuse investigations, healthcare financial managers should examine their organization's medical record coding procedures. The Federal government and third-party payers are looking specifically for improper billing of outpatient services, unbundling of procedures to increase payment, assigning higher-paying DRG codes for inpatient claims, and other abuses. A recent benchmarking study of Medicare Provider Analysis and Review (MEDPAR) data has established national norms for hospital coding and case mix based on DRGs and has revealed that the majority of atypical coding cases fall into six DRG pairs. Organizations with a greater percentage of atypical cases--those more likely to be scrutinized by Federal investigators--will want to conduct a suitable review and be sure appropriate documentation exists to justify the coding.
Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
Toward real-time performance benchmarks for Ada
NASA Technical Reports Server (NTRS)
Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy
1986-01-01
The issue of real-time performance measurement for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques is developed. Then a set of Ada language features believed to be important for real-time performance is presented and specific measurement methods are discussed. In addition, other important time-related features which are not explicitly part of the language but are part of the run-time system are also identified, and measurement techniques are developed for them. The measurement techniques are applied to the language and run-time system features and the results are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Tsao, C.L.
1996-06-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. It also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
Benchmarking in emergency health systems.
Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg
2002-12-01
This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process are explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.
Expanded Outreach at Clemson University. A Case Study.
ERIC Educational Resources Information Center
Bennett, A. Wayne
This paper summarizes recent strategic planning activities at Clemson University, focusing on outreach and extended education goals at the university. Specific benchmarks for outreach and extended education include: (1) by May 1994, each department will develop an operational definition of its public service mission, an action plan to integrate…
Graduate Entrepreneurship Incubation Environments: A Framework of Key Success Factors
ERIC Educational Resources Information Center
Al-Dajani, Haya; Dedoussis, Evangelos; Watson, Erika; Tzokas, Nikalaos
2014-01-01
The benchmarking framework developed in this study is specifically designed for higher education institutions to consider when developing environments to encourage entrepreneurship among their students, graduates and staff. The objective of the study was to identify key success factors of Graduate Entrepreneurship Incubator Environments (GEIEs)…
ICT Proficiency and Gender: A Validation on Training and Development
ERIC Educational Resources Information Center
Lin, Shinyi; Shih, Tse-Hua; Lu, Ruiling
2013-01-01
Use of an innovative learning/instruction mode, embedded in the Certification Pathway System (CPS) developed by Certiport™, is geared toward Internet and Computing Benchmark & Mentor, specifically for IC³ certification. The Internet and Computing Core Certification (IC³), as an industry-based credentialing program,…
Evaluating the Effectiveness of Student Assistance Programs in Pennsylvania.
ERIC Educational Resources Information Center
Fertman, Carl I.; Fichter, Cele; Schlesinger, Jo; Tarasevich, Susan; Wald, Holly; Zhang, Xiaoyan
2001-01-01
This evaluation of the Pennsylvania Student Assistance Program (SAP) was conducted to determine overall efficacy of SAPs and, more specifically, how SAP is currently implemented. Findings indicate that SAP is being implemented as designed. Recommended is the development of benchmarks and indicators focusing on best SAP practices and effectiveness.…
Benchmarking and Performance Measurement.
ERIC Educational Resources Information Center
Town, J. Stephen
This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass.
It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
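The two-tier screening decision described above reduces to a simple comparison against the lower and upper benchmarks; a sketch of that logic (the numeric thresholds and category labels in the example are illustrative, not values from the report):

```python
def screen_contaminant(concentration, lower_benchmark, upper_benchmark):
    """Classify an ambient concentration against two screening benchmarks."""
    if concentration > upper_benchmark:
        # Upper benchmark exceeded: clearly of concern.
        return "concern: remedial action likely needed"
    if concentration > lower_benchmark:
        # Lower benchmark exceeded: of concern unless the data are
        # unreliable or the comparison is inappropriate.
        return "concern unless data unreliable or comparison inappropriate"
    # Below the lower benchmark: not of concern if the data are adequate.
    return "not of concern (if ambient data are adequate)"

# Example with illustrative benchmark values (concentrations in ug/L):
status = screen_contaminant(12.0, lower_benchmark=1.0, upper_benchmark=10.0)
```

In practice the report recommends comparing against all of the alternative benchmarks and weighing how many are exceeded, so a full implementation would loop this check over the benchmark set for each chemical.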
The KMAT: Benchmarking Knowledge Management.
ERIC Educational Resources Information Center
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…