Sample records for IAEA GT-MHR benchmark

  1. Recombination-dependent mtDNA partitioning: in vivo role of Mhr1p to promote pairing of homologous DNA.

    PubMed

    Ling, Feng; Shibata, Takehiko

    2002-09-02

    Yeast mhr1-1 was isolated as a mutation defective in mitochondrial DNA (mtDNA) recombination. About half of mhr1-1 cells lose mtDNA during growth at a higher temperature. Here, we show that mhr1-1 exhibits a defect in the partitioning of nascent mtDNA into buds and is a base-substitution mutation in MHR1 encoding a mitochondrial matrix protein. We found that the Mhr1 protein (Mhr1p) has activity to pair single-stranded DNA and homologous double-stranded DNA to form heteroduplex joints in vitro, and that mhr1-1 causes the loss of this activity, indicating its role in homologous mtDNA recombination. While the majority of the mtDNA in the mother cells consists of head-to-tail concatemers, more than half of the mtDNA in the buds exists as genome-sized monomers. The mhr1-1 Δcce1 double mutant cells do not maintain any mtDNA, indicating the strict dependence of mtDNA maintenance on recombination functions. These results suggest a mechanism for mtDNA inheritance similar to that operating in the replication and packaging of phage DNA.

  2. Analysis of DNA-binding sites on Mhr1, a yeast mitochondrial ATP-independent homologous pairing protein.

    PubMed

    Masuda, Tokiha; Ling, Feng; Shibata, Takehiko; Mikawa, Tsutomu

    2010-03-01

    The Mhr1 protein is necessary for mtDNA homologous recombination in Saccharomyces cerevisiae. Homologous pairing (HP) is an essential reaction during homologous recombination, and is generally catalyzed by the RecA/Rad51 family of proteins in an ATP-dependent manner. Mhr1 catalyzes HP through a mechanism similar, at the DNA level, to that of the RecA/Rad51 proteins, but without utilizing ATP. However, it has no sequence homology with the RecA/Rad51 family proteins or with other ATP-independent HP proteins, and exhibits different requirements for DNA topology. We are interested in the structural features of the functional domains of Mhr1. In this study, we employed the native fluorescence of Mhr1's Trp residues to examine the energy transfer from the Trp residues to etheno-modified ssDNA bound to Mhr1. Our results showed that two of the seven Trp residues (Trp71 and Trp165) are spatially close to the bound DNA. A systematic analysis of mutant Mhr1 proteins revealed that Asp69 is involved in Mg(2+)-dependent DNA binding, and that multiple Lys and Arg residues located around Trp71 and Trp165 are involved in the DNA-binding activity of Mhr1. In addition, in vivo complementation analyses showed that a region around Trp165 is important for the maintenance of mtDNA. On the basis of these results, we discuss the function of the region surrounding Trp165.
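    The distance inference in this record relies on energy transfer from tryptophan donors to the etheno-modified ssDNA acceptor. As a hedged reminder (the standard Förster relation, not a formula quoted from the abstract), transfer efficiency falls off steeply with donor-acceptor distance, so only Trp residues within roughly one Förster radius of the bound DNA (here, Trp71 and Trp165) show appreciable transfer:

```latex
% Standard Förster relation (illustrative; R_0 depends on the donor/acceptor pair)
E = \frac{1}{1 + \left(r / R_0\right)^{6}}
```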

  3. Mhr1p-dependent concatemeric mitochondrial DNA formation for generating yeast mitochondrial homoplasmic cells.

    PubMed

    Ling, Feng; Shibata, Takehiko

    2004-01-01

    Mitochondria carry many copies of mitochondrial DNA (mtDNA), but mt-alleles quickly segregate during mitotic growth through unknown mechanisms. Consequently, all mtDNA copies are often genetically homogeneous within each individual ("homoplasmic"). Our previous study suggested that tandem multimers ("concatemers") formed mainly by the Mhr1p (a yeast nuclear gene-encoded mtDNA-recombination protein)-dependent pathway are required for mtDNA partitioning into buds with concomitant monomerization. The transmission of a few randomly selected clones (as concatemers) of mtDNA into buds is a possible mechanism to establish homoplasmy. The current study provides evidence for this hypothesis as follows: the overexpression of MHR1 accelerates mt-allele-segregation in growing heteroplasmic zygotes, and mhr1-1 (recombination-deficient) causes its delay. The mt-allele-segregation rate correlates with the abundance of concatemers, which depends on Mhr1p. In G1-arrested cells, concatemeric mtDNA was labeled by [14C]thymidine at a much higher density than monomers, indicating concatemers as the immediate products of mtDNA replication, most likely in a rolling circle mode. After releasing the G1 arrest in the absence of [14C]thymidine, the monomers as the major species in growing buds of dividing cells bear a similar density of 14C as the concatemers in the mother cells, indicating that the concatemers in mother cells are the precursors of the monomers in buds.

  4. A role for MHR1, a gene required for mitochondrial genetic recombination, in the repair of damage spontaneously introduced in yeast mtDNA.

    PubMed

    Ling, F; Morioka, H; Ohtsuka, E; Shibata, T

    2000-12-15

    A nuclear recessive mutant in Saccharomyces cerevisiae, mhr1-1, is defective in mitochondrial genetic recombination at 30 degrees C and shows extensive vegetative petite induction by UV irradiation at 30 degrees C or when cultivated at a higher temperature (37 degrees C). It has been postulated that mitochondrial DNA (mtDNA) is oxidatively damaged by by-products of oxidative respiration. Since genetic recombination plays a critical role in DNA repair in various organisms, we tested the possibility that MHR1 plays a role in the repair of oxidatively damaged mtDNA using an enzyme assay. mtDNA isolated from cells grown under standard (aerobic) conditions contained a much higher level of DNA lesions compared with mtDNA isolated from anaerobically grown cells. Soon after a temperature shift from 30 to 37 degrees C the number of mtDNA lesions increased 2-fold in mhr1-1 mutant cells but not in MHR1 cells. Malonic acid, which decreased the oxidative stress in mitochondria, partially suppressed both petite induction and the temperature-induced increase in the amount of mtDNA damage in mhr1-1 cells at 37 degrees C. Thus, functional mitochondria require active MHR1, which keeps the extent of spontaneous oxidative damage in mtDNA within a tolerable level. These observations are consistent with MHR1 having a possible role in mtDNA repair.

  5. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise P.

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and, the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this

  6. Din7 and Mhr1 expression levels regulate double-strand-break–induced replication and recombination of mtDNA at ori5 in yeast

    PubMed Central

    Ling, Feng; Hori, Akiko; Yoshitani, Ayako; Niu, Rong; Yoshida, Minoru; Shibata, Takehiko

    2013-01-01

    The Ntg1 and Mhr1 proteins initiate rolling-circle mitochondrial (mt) DNA replication to achieve homoplasmy, and they also induce homologous recombination to maintain mitochondrial genome integrity. Although replication and recombination profoundly influence mitochondrial inheritance, the regulatory mechanisms that determine the choice between these pathways remain unknown. In Saccharomyces cerevisiae, double-strand breaks (DSBs) introduced by Ntg1 at the mitochondrial replication origin ori5 induce homologous DNA pairing by Mhr1, and reactive oxygen species (ROS) enhance production of DSBs. Here, we show that a mitochondrial nuclease encoded by the nuclear gene DIN7 (DNA damage inducible gene) has 5′-exodeoxyribonuclease activity. Using a small ρ− mtDNA bearing ori5 (hypersuppressive; HS) as a model mtDNA, we revealed that DIN7 is required for ROS-enhanced mtDNA replication and recombination that are both induced at ori5. Din7 overproduction enhanced Mhr1-dependent mtDNA replication and increased the number of residual DSBs at ori5 in HS-ρ− cells and increased deletion mutagenesis at the ori5 region in ρ+ cells. However, simultaneous overproduction of Mhr1 suppressed all of these phenotypes and enhanced homologous recombination. Our results suggest that after homologous pairing, the relative activity levels of Din7 and Mhr1 modulate the preference for replication versus homologous recombination to repair DSBs at ori5. PMID:23598996

  7. Din7 and Mhr1 expression levels regulate double-strand-break-induced replication and recombination of mtDNA at ori5 in yeast.

    PubMed

    Ling, Feng; Hori, Akiko; Yoshitani, Ayako; Niu, Rong; Yoshida, Minoru; Shibata, Takehiko

    2013-06-01

    The Ntg1 and Mhr1 proteins initiate rolling-circle mitochondrial (mt) DNA replication to achieve homoplasmy, and they also induce homologous recombination to maintain mitochondrial genome integrity. Although replication and recombination profoundly influence mitochondrial inheritance, the regulatory mechanisms that determine the choice between these pathways remain unknown. In Saccharomyces cerevisiae, double-strand breaks (DSBs) introduced by Ntg1 at the mitochondrial replication origin ori5 induce homologous DNA pairing by Mhr1, and reactive oxygen species (ROS) enhance production of DSBs. Here, we show that a mitochondrial nuclease encoded by the nuclear gene DIN7 (DNA damage inducible gene) has 5'-exodeoxyribonuclease activity. Using a small ρ(-) mtDNA bearing ori5 (hypersuppressive; HS) as a model mtDNA, we revealed that DIN7 is required for ROS-enhanced mtDNA replication and recombination that are both induced at ori5. Din7 overproduction enhanced Mhr1-dependent mtDNA replication and increased the number of residual DSBs at ori5 in HS-ρ(-) cells and increased deletion mutagenesis at the ori5 region in ρ(+) cells. However, simultaneous overproduction of Mhr1 suppressed all of these phenotypes and enhanced homologous recombination. Our results suggest that after homologous pairing, the relative activity levels of Din7 and Mhr1 modulate the preference for replication versus homologous recombination to repair DSBs at ori5.

  8. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise Paul

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants

  9. GT-MHR Start-up Reactivity Insertion Transient Analysis Using Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fard, Mehdi Reisi; Blue, Thomas E.; Miller, Don W.

    2006-07-01

    As a part of a Department of Energy-Nuclear Engineering Research Initiative (NERI) Project, we at OSU are investigating SiC semiconductor detectors as neutron power monitors for Generation IV power reactors. As a part of this project, we are investigating the power monitoring requirements for a specific type of Generation IV reactor, namely the GT-MHR. To evaluate the power monitoring requirements for the GT-MHR that are most demanding for a SiC diode power monitor, we have developed a Simulink model to study the transient behavior of the GT-MHR. In this paper, we describe the application of the Simulink code to the analysis of a series of Start-up Reactivity Insertion Transients (SURITs). The SURIT is considered to be a limiting protectable accident in terms of establishing the dynamic range of a SiC power monitor because of the low count rate of the detector during the start-up and absence of the reactivity feedback mechanism at the beginning of the transient. The SURIT is studied with the ultimate goal of identifying combinations of 1) reactor power scram setpoints and 2) scram initiation times (the time in which a scram must be initiated once the setpoint is exceeded) for which the GT-MHR core is protected in the event of a continuous withdrawal of a control rod bank from the core from low powers. The SURIT is initiated by withdrawing a rod bank when the reactor is cold (300 K) and sub-critical at the BOEC (Beginning of Equilibrium Cycle) condition. Various initial power levels have been considered corresponding to various degrees of sub-criticality and various source strengths. An envelope of response is determined to establish which initial powers correspond to the worst case SURIT. (authors)
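    The record does not give the model equations, but a start-up reactivity insertion study of this kind is commonly built on the point reactor kinetics equations with a scram trip. The sketch below is a minimal, self-contained illustration of that idea, starting from a critical low-power state for simplicity (the actual study starts subcritical with an external source); all parameter values (one effective delayed-neutron group, generation time, ramp rate, setpoint, scram delay and worth) are illustrative placeholders, not GT-MHR or Simulink-model data.

```python
import numpy as np

# Point kinetics with one effective delayed-neutron group and a scram trip.
# All numbers below are illustrative placeholders, not GT-MHR data.
BETA = 0.0065          # effective delayed-neutron fraction (-)
LAMBDA_D = 0.08        # effective precursor decay constant (1/s)
GEN_TIME = 1.0e-3      # prompt-neutron generation time (s)
RAMP = 3.0e-4          # reactivity insertion rate during rod withdrawal (dk/k per s)
SCRAM_SETPOINT = 10.0  # power trip level, relative to initial power
SCRAM_DELAY = 0.5      # time between exceeding the setpoint and scram insertion (s)
SCRAM_WORTH = -0.05    # reactivity inserted by the scram (dk/k)

def simulate(t_end=30.0, dt=1.0e-4):
    """Integrate the point kinetics equations with explicit Euler steps."""
    n = 1.0                               # relative power
    c = BETA * n / (GEN_TIME * LAMBDA_D)  # precursors at initial equilibrium
    trip_time = None
    t_hist, n_hist = [], []
    t = 0.0
    while t < t_end:
        if trip_time is None and n >= SCRAM_SETPOINT:
            trip_time = t                 # setpoint exceeded; scram follows after delay
        if trip_time is not None and t >= trip_time + SCRAM_DELAY:
            rho = SCRAM_WORTH             # rods dropped
        else:
            rho = RAMP * t                # continuous rod-bank withdrawal
        dn = ((rho - BETA) / GEN_TIME) * n + LAMBDA_D * c
        dc = (BETA / GEN_TIME) * n - LAMBDA_D * c
        n += dn * dt
        c += dc * dt
        t += dt
        t_hist.append(t)
        n_hist.append(n)
    return np.array(t_hist), np.array(n_hist), trip_time

if __name__ == "__main__":
    t, n, trip = simulate()
    if trip is None:
        print("setpoint never exceeded within the simulated window")
    else:
        print(f"setpoint exceeded at t = {trip:.2f} s; "
              f"peak relative power = {n.max():.1f}")
```

    Scanning such a model over setpoints and initiation delays (and over initial subcriticality and source strength) is one way to build the kind of response envelope the abstract describes.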

  10. An IAEA multi-technique X-ray spectrometry endstation at Elettra Sincrotrone Trieste: benchmarking results and interdisciplinary applications.

    PubMed

    Karydas, Andreas Germanos; Czyzycki, Mateusz; Leani, Juan José; Migliori, Alessandro; Osan, Janos; Bogovac, Mladen; Wrobel, Pawel; Vakula, Nikita; Padilla-Alvarez, Roman; Menk, Ralf Hendrik; Gol, Maryam Ghahremani; Antonelli, Matias; Tiwari, Manoj K; Caliri, Claudia; Vogel-Mikuš, Katarina; Darby, Iain; Kaiser, Ralf Bernd

    2018-01-01

    The International Atomic Energy Agency (IAEA) jointly with the Elettra Sincrotrone Trieste (EST) operates a multipurpose X-ray spectrometry endstation at the X-ray Fluorescence beamline (10.1L). The facility has been available to external users since the beginning of 2015 through the peer-review process of EST. Using this collaboration framework, the IAEA supports and promotes synchrotron-radiation-based research and training activities for various research groups from the IAEA Member States, especially those who have limited previous experience and resources to access a synchrotron radiation facility. This paper aims to provide a broad overview about various analytical capabilities, intrinsic features and performance figures of the IAEA X-ray spectrometry endstation through the measured results. The IAEA-EST endstation works with monochromatic X-rays in the energy range 3.7-14 keV for the Elettra storage ring operating at 2.0 or 2.4 GeV electron energy. It offers a combination of different advanced analytical probes, e.g. X-ray reflectivity, X-ray absorption fine-structure measurements, grazing-incidence X-ray fluorescence measurements, using different excitation and detection geometries, and thereby supports a comprehensive characterization for different kinds of nanostructured and bulk materials.

  11. The Poplar GT8E and GT8F Glycosyltransferases are Functional Orthologs of Arabidopsis PARVUS Involved in Glucuronoxylan Biosynthesis

    EPA Science Inventory

    The poplar GT8E and GT8F glycosyltransferases have previously been shown to be associated with wood formation, but their roles in the biosynthesis of wood components are not known. Here, we show that PoGT8E and PoGT8F are expressed in vessels and fibers during wood formation and ...

  12. Hydrothermal Alteration of the Lower Oceanic Crust: Insight from OmanDP Holes GT1A and GT2A.

    NASA Astrophysics Data System (ADS)

    Harris, M.; Zihlmann, B.; Mock, D.; Akitou, T.; Teagle, D. A. H.; Kondo, K.; Deans, J. R.; Crispini, L.; Takazawa, E.; Coggon, J. A.; Kelemen, P. B.

    2017-12-01

    Hydrothermal circulation is a fundamental Earth process that is responsible for the cooling of newly formed ocean crust at mid ocean ridges and imparts a chemical signature on both the crust and the oceans. Despite decades of study, the critical samples necessary to resolve the role of hydrothermal circulation during the formation of the lower ocean crust have remained poorly sampled in the ocean basins. The Oman Drilling Project successfully cored 3 boreholes into the lower crust of the Semail ophiolite (Holes GT1A layered gabbros, GT2A foliated gabbros and GT3A dike/gabbro transition). These boreholes have exceptionally high recovery (~100%) compared to rotary coring in the oceans and provide an unrivalled opportunity to quantitatively characterise the hydrothermal system in the lower oceanic crust. Hydrothermal alteration in Holes GT1A and GT2A is ubiquitous and manifests as secondary minerals replacing primary igneous phases and secondary minerals precipitated in hydrothermal veins and hydrothermal fault zones. Hole GT1A is characterised by total alteration intensities between 10-100%, with a mean alteration intensity of 60%, and shows no overall trend downhole. However, there are discrete depth intervals (on the scale of 30-100 m) where the total alteration intensity increases with depth. Alteration assemblages are dominated by chlorite + albite + amphibole, with variable abundances of epidote, clinozoisite and quartz. Hole GT1A intersected several hydrothermal fault zones; these range from 2-3 cm up to >1 m in size and are associated with more complex secondary mineral assemblages. Hydrothermal veins are abundant throughout Hole GT1A, with a mean density of 37 veins/m. Hole GT2A is characterised by total alteration intensities between 6-100%, with a mean alteration intensity of 45%, and is highly variable downhole. Alteration halos and patches are slightly more abundant than in Hole GT1A. The secondary mineral assemblage is similar to Hole GT1A, but Hole GT2A

  13. Initial report of the physical property measurement, ChikyuOman core description Phase I: sheeted dike and gabbro boundary from ICDP Holes GT1A, GT2A and GT3A

    NASA Astrophysics Data System (ADS)

    Abe, N.; Okazaki, K.; Hatakeyama, K.; Ildefonse, B.; Leong, J. A. M.; Tateishi, Y.; Teagle, D. A. H.; Takazawa, E.; Kelemen, P. B.; Michibayashi, K.; Coggon, J. A.; Harris, M.; de Obeso, J. C.

    2017-12-01

    We report results on the physical property measurements of the core samples from ICDP Holes GT1A, GT2A and GT3A drilled at Samail Ophiolite, Sultanate of Oman. Cores from Holes GT1A and GT2A in the lower crust section are mainly composed of gabbros (gabbro and olivine gabbro), and small amounts of ultramafic rocks (wehrlite and dunite), while cores from Hole GT3A at the boundary between sheeted dikes and gabbro are mainly composed of basalt and diabase, followed by gabbros (gabbro, olivine gabbro and oxide gabbro), and less common felsic dikes, trondhjemite and tonalite, intrude the mafic rocks. Measurements of physical properties were undertaken to characterize recovered core material. Onboard the Drilling Vessel Chikyu, whole-round measurements included X-ray CT image, natural gamma radiation, and magnetic susceptibility for Leg 1, and additional P-wave velocity, gamma ray attenuation density, and electrical resistivity during Leg 2. Split-core point magnetic susceptibility and color spectroscopy were measured for all core sections. P-wave velocity, bulk/grain density and porosity of more than 500 discrete cube samples, and thermal conductivity on more than 240 pieces from the working half of the split core sections were also measured. Physical Properties of gabbroic rocks from Holes GT1A and GT2A are similar to typical oceanic gabbros from ODP and IODP expeditions at Atlantis Bank, Southwestern Indian Ridge (ODP Legs 118, 176 and 179; IODP Exp 360) and at Hess Deep, Eastern Pacific (ODP Leg 147 and IODP Exp. 345). Average P-wave velocity, bulk density, grain density, porosity and thermal conductivity are 6.7 km/s, 2.92 g/cm^3, 2.93 g/cm^3, 0.98% and 2.46 W/m/K, respectively. P-wave velocity of samples from all three holes is inversely correlated with porosity. No clear correlation between the original lithology and physical properties is observed. GT3A cores show a wider range (e.g., Vp from 2.2 to 7.1 km/s) of values for the measured physical properties

  14. Genetic variation of hepatitis B surface antigen among acute and chronic hepatitis B virus infections in The Netherlands.

    PubMed

    Cremer, Jeroen; Hofstraat, Sanne H I; van Heiningen, Francoise; Veldhuijzen, Irene K; van Benthem, Birgit H B; Benschop, Kimberley S M

    2018-05-24

    Genetic variation within hepatitis B surface antigen (HBsAg), in particular within the major hydrophobic region (MHR), is related to immune/vaccine and test failures and can have a significant impact on the vaccination and diagnosis of acute infection. This study shows, for the first time, variation among acute cases and compares the amino acid variation within the HBsAg between acute and chronic infections. We analyzed the virus isolated from 1231 acute and 585 chronic cases reported to an anonymized public health surveillance database between 2004 and 2014 in The Netherlands. HBsAg analysis revealed the circulation of 6 genotypes (Gt); GtA was the dominant genotype followed by GtD among both acute (68.2% and 17.4%, respectively) and chronic (34.9% and 34.2%, respectively) cases. Variation was the highest among chronic strains compared to that among acute strains. Both acute and chronic GtD showed the highest variation compared to that of other genotypes (P < .01). Substitutions within the MHR were found in 8.5% of the acute strains and 18.6% of the chronic strains. Specific MHR substitutions described to have an impact on vaccine/immune escape and/or HBsAg test failure were found among 4.1% of the acute strains and 7.0% of the chronic strains. In conclusion, we show a high variation of HBsAg among acute and chronic hepatitis B virus-infected cases in The Netherlands, in particular among those infected with GtD, and compare, for the first time, variation in frequencies between acute and chronic cases. Additional studies on the impact of these variations on vaccination and test failure need to be conducted, as well as whether HBsAg false-negative variants have been missed. © 2018 The Authors. Journal of Medical Virology Published by Wiley Periodicals, Inc.

  15. Benchmark map of forest carbon stocks in tropical regions across three continents.

    PubMed

    Saatchi, Sassan S; Harris, Nancy L; Brown, Sandra; Lefsky, Michael; Mitchard, Edward T A; Salas, William; Zutta, Brian R; Buermann, Wolfgang; Lewis, Simon L; Hagen, Stephen; Petrova, Silvia; White, Lee; Silman, Miles; Morel, Alexandra

    2011-06-14

    Developing countries are required to produce robust estimates of forest carbon stocks for successful implementation of climate change mitigation policies related to reducing emissions from deforestation and degradation (REDD). Here we present a "benchmark" map of biomass carbon stocks over 2.5 billion ha of forests on three continents, encompassing all tropical forests, for the early 2000s, which will be invaluable for REDD assessments at both project and national scales. We mapped the total carbon stock in live biomass (above- and belowground), using a combination of data from 4,079 in situ inventory plots and satellite light detection and ranging (Lidar) samples of forest structure to estimate carbon storage, plus optical and microwave imagery (1-km resolution) to extrapolate over the landscape. The total biomass carbon stock of forests in the study region is estimated to be 247 Gt C, with 193 Gt C stored aboveground and 54 Gt C stored belowground in roots. Forests in Latin America, sub-Saharan Africa, and Southeast Asia accounted for 49%, 25%, and 26% of the total stock, respectively. By analyzing the errors propagated through the estimation process, uncertainty at the pixel level (100 ha) ranged from ± 6% to ± 53%, but was constrained at the typical project (10,000 ha) and national (>1,000,000 ha) scales at ca. ± 5% and ca. ± 1%, respectively. The benchmark map illustrates regional patterns and provides methodologically comparable estimates of carbon stocks for 75 developing countries where previous assessments were either poor or incomplete.
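    The scale dependence of the reported uncertainties (±6-53% per 100 ha pixel, but only ca. ±5% at project scale and ca. ±1% nationally) follows the usual averaging behaviour of errors that are at least partly independent between pixels. As a rough, assumption-laden illustration (perfect independence and equal pixel weights, neither of which holds exactly for this map):

```latex
% Relative error of a mean over n independent pixels (illustrative only)
\sigma_{\text{region}} \approx \frac{\sigma_{\text{pixel}}}{\sqrt{n}},
\qquad n = \frac{A_{\text{region}}}{100\ \text{ha}}
```

    For example, a 10,000 ha project aggregates n = 100 pixels, so a ±30% pixel error would shrink to roughly ±3% under independence; the paper's propagated ±5% is somewhat larger, presumably because pixel errors are not fully independent.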

  16. Registration of maize inbred line GT603

    USDA-ARS?s Scientific Manuscript database

    GT603 (Reg. No. xxxx, PI xxxxxx) is a yellow dent maize (Zea mays L.) inbred line developed and released by the USDA-ARS Crop Protection and Management Research Unit in cooperation with the University of Georgia Coastal Plain Experiment Station in 2010. GT603 was developed through seven generations ...

  17. Characterization of recombinant amylopullulanase (gt-apu) and truncated amylopullulanase (gt-apuT) of the extreme thermophile Geobacillus thermoleovorans NP33 and their action in starch saccharification.

    PubMed

    Nisha, M; Satyanarayana, T

    2013-07-01

    A gene encoding amylopullulanase (gt-apu) of the extremely thermophilic Geobacillus thermoleovorans NP33 was cloned and expressed in Escherichia coli. The gene has an open reading frame of 4,965 bp that encodes a protein of 1,655 amino acids with molecular mass of 182 kDa. The six conserved regions, characteristic of GH13 family, have been detected in gt-apu. The recombinant enzyme has only one active site for α-amylase and pullulanase activities based on the enzyme kinetic analyses in a system that contains starch as well as pullulan as competing substrates and response to inhibitors. The end-product analysis confirmed that this is an endoacting enzyme. The specific enzyme activities for α-amylase and pullulanase of the truncated amylopullulanase (gt-apuT) are higher than gt-apu. Both enzymes exhibited similar temperature (60 °C) and pH (7.0) optima, although gt-apuT possessed a higher thermostability than gt-apu. The overall catalytic efficiency (K(cat)/K(m)) of gt-apuT is greater than that of gt-apu, with almost similar substrate specificities. The C-terminal region of gt-apu appeared to be non-essential, and furthermore, it negatively affects the substrate binding and stability of the enzyme.
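    For readers unfamiliar with the quantity being compared here, kcat/KM is the standard measure of catalytic efficiency from Michaelis-Menten kinetics; the relations below are the textbook forms, not data from this study:

```latex
% Michaelis-Menten rate law; k_cat/K_M governs the low-substrate limit
v = \frac{k_{\mathrm{cat}}\,[\mathrm{E}]_0\,[\mathrm{S}]}{K_M + [\mathrm{S}]},
\qquad
v \approx \frac{k_{\mathrm{cat}}}{K_M}\,[\mathrm{E}]_0\,[\mathrm{S}]
\quad \text{for } [\mathrm{S}] \ll K_M
```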

  18. Alleviation of heavy metal toxicity and phytostimulation of Brassica campestris L. by endophytic Mucor sp. MHR-7.

    PubMed

    Zahoor, Mahwish; Irshad, Muhammad; Rahman, Hazir; Qasim, Muhammad; Afridi, Sahib Gul; Qadir, Muhammad; Hussain, Anwar

    2017-08-01

    Heavy metal (HM) pollution is of great concern in countries like Pakistan where a huge proportion of the human population is exposed to it. These toxic metals are making their way from water bodies to soil, where they not only interfere with plant growth and development but also cause serious health issues in humans consuming the produce of such soils. Bioremediation is one of the most viable and efficient solutions to the problem. The purpose of the current study was to isolate endophytic fungi from plants grown on HM-contaminated soil and screen them for their ability to tolerate multiple HM including chromium (Cr6+), manganese (Mn2+), cobalt (Co2+), copper (Cu2+) and zinc (Zn2+). Out of 27 isolated endophytes, only one strain (MHR-7) was selected for multiple-heavy-metal tolerance. The strain was identified as Mucor sp. by 18S and 28S ribosomal RNA internal transcribed spacer (ITS) 1 and 4 sequence homology. The strain effectively tolerated up to 900 µg mL-1 of these heavy metals, showing no remarkable effect on its growth. The adverse effect of the heavy metals, measured as reduction of fungal growth, increased with increasing concentration of the metals. The strain was able to remove 60-87% of heavy metals from broth culture when supplied with 300 µg mL-1 of these metals. A trend of decline in the bioremediation potential of the strain was observed with increasing amounts of metals. The strain removed metals by biotransformation and/or accumulation of heavy metal in its hyphae. Application of Mucor sp. MHR-7 locked down HM in its mycelium, thereby making them less available to plant roots and reducing HM uptake and toxicity in mustard. Besides its bioremediation potential, the strain was also able to produce IAA, ACC deaminase and solubilize phosphate, making it an excellent phytostimulant fungus. It is concluded that MHR-7 is an excellent candidate for use as a biofertilizer in fields affected by heavy metals. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Multi-color lightcurve observation of the asteroid (163249) 2002 GT

    NASA Astrophysics Data System (ADS)

    Oshima, M.; Abe, S.

    2014-07-01

    NASA's Deep Impact/EPOXI spacecraft plans to encounter the asteroid (163249) 2002 GT, classified as a PHA (Potentially Hazardous Asteroid), on January 4, 2020. However, the taxonomic type and spin state of 2002 GT remain to be determined. We have carried out ground-based multi-color (B-V-R-I) lightcurve observations taking advantage of the 2002 GT Characterization Campaign by NASA. Multi-color lightcurve measurements allow us to estimate the rotation period and obtain strong constraints on the shape and pole orientation. Here we found that the rotation period of 2002 GT is estimated to be 3.7248 ± 0.1664 h. In mid-2013, 2002 GT passed at 0.015 au from the Earth, resulting in an exceptional opportunity for ground-based characterization. Using the 0.81-m telescope of the Tenagra Observatory (110°52'44.8''W, +31°27'44.4''N, 1312 m) in Arizona, USA, and the Johnson-Cousins BVRI filters, we have obtained lightcurves of 2002 GT. The Tenagra II 0.81-m telescope is used for research of the Hayabusa2 target Asteroid (162173) 1999 JU3. The lightcurves (relative magnitude) show that the rotation period of 2002 GT, the target of NASA's Deep Impact/EPOXI spacecraft, is estimated to be 3.7248 ± 0.1664 hr. On June 9, 2013, we had 7 hours of ground-based observations on 2002 GT from 4:00 to 11:00 UTC. The number of comparison stars for differential photometry was 34. Because of tracking the fast-moving asteroid, it was necessary to have the same comparison star among the fields of vision. We have also obtained absolute photometry of 2002 GT on June 13, 2013.
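    The record does not say how the period was extracted from the B-V-R-I lightcurves; a common generic approach (not necessarily the authors' method) is a periodogram search, as sketched below with astropy. The input file name and column layout are assumptions, and doubling the dominant photometric period reflects the usual two-maxima-per-rotation shape of asteroid lightcurves rather than anything stated in the abstract.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical input: time (JD, days), relative magnitude, magnitude error.
t, mag, mag_err = np.loadtxt("lightcurve_2002GT.txt", unpack=True)

# Search for periodic brightness variations; frequencies are in cycles per day.
frequency, power = LombScargle(t, mag, mag_err).autopower(
    minimum_frequency=1.0,    # periods no longer than ~24 h
    maximum_frequency=12.0,   # periods no shorter than ~2 h
)
best_freq = frequency[np.argmax(power)]
photometric_period_h = 24.0 / best_freq

# An asteroid lightcurve usually shows two maxima and two minima per rotation,
# so the rotation period is typically twice the strongest photometric period.
print(f"best photometric period: {photometric_period_h:.4f} h")
print(f"implied rotation period: {2.0 * photometric_period_h:.4f} h")
```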

  20. Malignant pericytes expressing GT198 give rise to tumor cells through angiogenesis.

    PubMed

    Zhang, Liyong; Wang, Yan; Rashid, Mohammad H; Liu, Min; Angara, Kartik; Mivechi, Nahid F; Maihle, Nita J; Arbab, Ali S; Ko, Lan

    2017-08-01

    Angiogenesis promotes tumor development. Understanding the crucial factors regulating tumor angiogenesis may reveal new therapeutic targets. Human GT198 (PSMC3IP or Hop2) is an oncoprotein encoded by a DNA repair gene that is overexpressed in tumor stromal vasculature to stimulate the expression of angiogenic factors. Here we show that pericytes expressing GT198 give rise to tumor cells through angiogenesis. GT198+ pericytes and perivascular cells are commonly present in the stromal compartment of various human solid tumors and rodent xenograft tumor models. In human oral cancer, GT198+ pericytes proliferate into GT198+ tumor cells, which migrate into lymph nodes. Increased GT198 expression is associated with increased lymph node metastasis and decreased progression-free survival in oral cancer patients. In rat brain U-251 glioblastoma xenografts, GT198+ pericytes of human tumor origin encase endothelial cells of rat origin to form mosaic angiogenic blood vessels, and differentiate into pericyte-derived tumor cells. The net effect is continued production of glioblastoma tumor cells from malignant pericytes via angiogenesis. In addition, activation of GT198 induces the expression of VEGF and promotes tube formation in cultured U251 cells. Furthermore, vaccination using GT198 protein as an antigen in mouse xenograft of GL261 glioma delayed tumor growth and prolonged mouse survival. Together, these findings suggest that GT198-expressing malignant pericytes can give rise to tumor cells through angiogenesis, and serve as a potential source of cells for distant metastasis. Hence, the oncoprotein GT198 has the potential to be a new target in anti-angiogenic therapies in human cancer.

  1. SIGACE Code for Generating High-Temperature ACE Files; Validation and Benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Amit R.; Ganesan, S.; Trkov, A.

    2005-05-24

    A code named SIGACE has been developed as a tool for MCNP users within the scope of a research contract awarded by the Nuclear Data Section of the International Atomic Energy Agency (IAEA) (Ref: 302-F4-IND-11566 B5-IND-29641). A new recipe has been evolved for generating high-temperature ACE files for use with the MCNP code. Under this scheme the low-temperature ACE file is first converted to an ENDF formatted file using the ACELST code and then Doppler broadened, essentially limited to the data in the resolved resonance region, to any desired higher temperature using SIGMA1. The SIGACE code then generates a high-temperature ACE file for use with the MCNP code. A thinning routine has also been introduced in the SIGACE code for reducing the size of the ACE files. The SIGACE code and the recipe for generating ACE files at higher temperatures has been applied to the SEFOR fast reactor benchmark problem (sodium-cooled fast reactor benchmark described in ENDF-202/BNL-19302, 1974 document). The calculated Doppler coefficient is in good agreement with the experimental value. A similar calculation using ACE files generated directly with the NJOY system also agrees with our SIGACE computed results. The SIGACE code and the recipe is further applied to study the numerical benchmark configuration of selected idealized PWR pin cell configurations with five different fuel enrichments as reported by Mosteller and Eisenhart. The SIGACE code that has been tested with several FENDL/MC files will be available, free of cost, upon request, from the Nuclear Data Section of the IAEA.
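    For context on what the SIGMA1-style broadening step does, the free-gas Doppler broadening that underlies this kind of processing can be written in its standard kernel form (a textbook expression, not one quoted from the record); broadening a file that is already at temperature T1 up to T2 uses the same kernel with the temperature difference T2 - T1:

```latex
% Free-gas Doppler-broadened cross section (standard kernel; illustrative)
\bar{\sigma}(v,T) \;=\; \frac{\beta}{\sqrt{\pi}\,v^{2}}
  \int_{0}^{\infty} v_r^{2}\,\sigma(v_r)
  \left[\, e^{-\beta^{2}(v-v_r)^{2}} - e^{-\beta^{2}(v+v_r)^{2}} \right] \mathrm{d}v_r,
\qquad \beta = \sqrt{\frac{M}{2kT}}
```

    Here v is the neutron speed, v_r the neutron-target relative speed, M the target nuclide mass, and sigma(v_r) the unbroadened cross section.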

  2. Energy benchmarking of commercial buildings: a low-cost pathway toward urban sustainability

    NASA Astrophysics Data System (ADS)

    Cox, Matt; Brown, Marilyn A.; Sun, Xiaojing

    2013-09-01

    US cities are beginning to experiment with a regulatory approach to address information failures in the real estate market by mandating the energy benchmarking of commercial buildings. Understanding how a commercial building uses energy has many benefits; for example, it helps building owners and tenants identify poor-performing buildings and subsystems and it enables high-performing buildings to achieve greater occupancy rates, rents, and property values. This paper estimates the possible impacts of a national energy benchmarking mandate through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS). Correcting input discount rates results in a 4.0% reduction in projected energy consumption for seven major classes of equipment relative to the reference case forecast in 2020, rising to 8.7% in 2035. Thus, the official US energy forecasts appear to overestimate future energy consumption by underestimating investments in energy-efficient equipment. Further discount rate reductions spurred by benchmarking policies yield another 1.3-1.4% in energy savings in 2020, increasing to 2.2-2.4% in 2035. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions, and conventional air pollution. Achieving comparable CO2 savings would require more than tripling existing US solar capacity. Our analysis suggests that nearly 90% of the energy saved by a national benchmarking policy would benefit metropolitan areas, and the policy’s benefits would outweigh its costs, both to the private sector and society broadly.

  3. EXTRAVEHICULAR ACTIVITY (EVA) - GEMINI-TITAN (GT)-4

    NASA Image and Video Library

    1965-06-03

    S65-29766 (3 June 1965) --- Astronaut Edward H. White II, pilot for the Gemini-Titan 4 (GT-4) spaceflight, floats in the zero-gravity of space during the third revolution of the GT-4 spacecraft. White wears a specially designed spacesuit. His face is shaded by a gold-plated visor to protect him from unfiltered rays of the sun. In his right hand he carries a Hand-Held Self-Maneuvering Unit (HHSMU) that gives him control over his movements in space. White also wears an emergency oxygen chest pack; and he carries a camera mounted on the HHSMU for taking pictures of the sky, Earth and the GT-4 spacecraft. He is secured to the spacecraft by a 25-feet umbilical line and a 23-feet tether line. Both lines are wrapped together in gold tape to form one cord. Astronaut James A. McDivitt, command pilot, remained inside the spacecraft during the extravehicular activity (EVA). Photo credit: NASA EDITOR'S NOTE: Astronaut Edward H. White II died in the Apollo/Saturn 204 fire at Cape Kennedy on Jan. 27, 1967.

  4. GT-CATS: Tracking Operator Activities in Complex Systems

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Mitchell, Christine M.; Palmer, Everett A.

    1999-01-01

    Human operators of complex dynamic systems can experience difficulties supervising advanced control automation. One remedy is to develop intelligent aiding systems that can provide operators with context-sensitive advice and reminders. The research reported herein proposes, implements, and evaluates a methodology for activity tracking, a form of intent inferencing that can supply the knowledge required for an intelligent aid by constructing and maintaining a representation of operator activities in real time. The methodology was implemented in the Georgia Tech Crew Activity Tracking System (GT-CATS), which predicts and interprets the actions performed by Boeing 757/767 pilots navigating using autopilot flight modes. This report first describes research on intent inferencing and complex modes of automation. It then provides a detailed description of the GT-CATS methodology, knowledge structures, and processing scheme. The results of an experimental evaluation using airline pilots are given. The results show that GT-CATS was effective in predicting and interpreting pilot actions in real time.

  5. Comparison of Homogeneous and Heterogeneous CFD Fuel Models for Phase I of the IAEA CRP on HTR Uncertainties Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Su-Jong Yoon

    2014-04-01

    Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study was focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem, before propagation of these uncertainties is performed in subsequent coupled neutronics/thermal-fluids phases of the benchmark. In many of the previous studies for high temperature gas cooled reactors, the volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact were not modeled directly and an effective thermal conductivity was employed for the thermo-physical properties of the fuel compact. On the contrary, in the heterogeneous model, the uranium carbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled. The fuel compact is modeled as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both homogeneous and heterogeneous models to compare the thermal characteristics. The nominal values of the input parameters are used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
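    The homogeneous model described above replaces the dispersed TRISO particles with an effective compact conductivity. The record does not say which correlation was used; as a hedged illustration only, the sketch below evaluates the classical Maxwell-Eucken (Maxwell-Garnett) estimate for spherical inclusions in a matrix, with made-up property values rather than benchmark inputs.

```python
def maxwell_eucken(k_matrix: float, k_particle: float, phi: float) -> float:
    """Effective conductivity of spherical particles (volume fraction phi)
    dispersed in a continuous matrix, per the Maxwell-Eucken relation.
    Strictly valid for dilute, non-interacting spheres; used here only
    to illustrate the 'homogenized compact' idea."""
    num = k_particle + 2.0 * k_matrix + 2.0 * phi * (k_particle - k_matrix)
    den = k_particle + 2.0 * k_matrix - phi * (k_particle - k_matrix)
    return k_matrix * num / den

# Illustrative placeholder values (W/m/K and packing fraction), not benchmark data.
k_graphite_matrix = 30.0   # H-451-like matrix graphite
k_triso_particle = 4.0     # lumped TRISO particle (kernel + coatings)
packing_fraction = 0.35    # particle volume fraction in the compact

print(f"effective compact conductivity ~ "
      f"{maxwell_eucken(k_graphite_matrix, k_triso_particle, packing_fraction):.1f} W/m/K")
```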

  6. Participation in proficiency test for tritium, strontium and caesium isotopes in seawater 2015 (IAEA-RML-2015-02)

    NASA Astrophysics Data System (ADS)

    Visetpotjanakit, S.; Kaewpaluek, S.

    2017-06-01

    A proficiency test (PT) exercise has been proposed by the International Atomic Energy Agency (IAEA) in the frame of the IAEA Technical Cooperation project RAS/7/021 “Marine benchmark study on the possible impact of the Fukushima radioactive releases in the Asia-Pacific Region for Caesium Determination in Sea Water” since 2012. In 2015 the exercise was referred to as the Proficiency Test for Tritium, Strontium and Caesium Isotopes in Seawater 2015 (IAEA-RML-2015-02), to analyse 3H, 134Cs, 137Cs and 90Sr in a seawater sample. OAP was one of the 17 laboratories from 15 countries in the Asia-Pacific Region who joined the PT exercise. The aim of our participation was to validate our analytical performance for the accurate determination of radionuclides in seawater by developed methods of radiochemical analysis. OAP submitted results determining the concentrations of the three radionuclides, i.e. 134Cs, 137Cs and 90Sr, in seawater to the IAEA. A critical review was made to check the suitability of our methodology and the criteria for the accuracy, precision and trueness of our data. The results for both 134Cs and 137Cs passed all criteria and were assigned “Accepted” statuses, whereas the 90Sr analysis did not pass the accuracy test and was therefore considered “Not accepted”. Our results and all other participant results with critical comments were published in the IAEA proficiency test report.

  7. IAEA coordinated research project on thermal-hydraulics of Supercritical Water-Cooled Reactors (SCWRs)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, K.; Aksan, S. N.

    The Supercritical Water-Cooled Reactor (SCWR) is an innovative water-cooled reactor concept, which uses supercritical pressure water as reactor coolant. It has been attracting interest of many researchers in various countries mainly due to its benefits of high thermal efficiency and simple primary systems, resulting in low capital cost. The IAEA started in 2008 a Coordinated Research Project (CRP) on Thermal-Hydraulics of SCWRs as a forum to foster the exchange of technical information and international collaboration in research and development. This paper summarizes the activities and current status of the CRP, as well as major progress achieved to date. At present, 15 institutions closely collaborate in several tasks. Some organizations have been conducting thermal-hydraulics experiments and analysing the data, and others have been participating in code-to-test and/or code-to-code benchmark exercises. The expected outputs of the CRP are also discussed. Finally, the paper introduces several IAEA activities relating to or arising from the CRP. (authors)

  8. Benchmark map of forest carbon stocks in tropical regions across three continents

    PubMed Central

    Saatchi, Sassan S.; Harris, Nancy L.; Brown, Sandra; Lefsky, Michael; Mitchard, Edward T. A.; Salas, William; Zutta, Brian R.; Buermann, Wolfgang; Lewis, Simon L.; Hagen, Stephen; Petrova, Silvia; White, Lee; Silman, Miles; Morel, Alexandra

    2011-01-01

    Developing countries are required to produce robust estimates of forest carbon stocks for successful implementation of climate change mitigation policies related to reducing emissions from deforestation and degradation (REDD). Here we present a “benchmark” map of biomass carbon stocks over 2.5 billion ha of forests on three continents, encompassing all tropical forests, for the early 2000s, which will be invaluable for REDD assessments at both project and national scales. We mapped the total carbon stock in live biomass (above- and belowground), using a combination of data from 4,079 in situ inventory plots and satellite light detection and ranging (Lidar) samples of forest structure to estimate carbon storage, plus optical and microwave imagery (1-km resolution) to extrapolate over the landscape. The total biomass carbon stock of forests in the study region is estimated to be 247 Gt C, with 193 Gt C stored aboveground and 54 Gt C stored belowground in roots. Forests in Latin America, sub-Saharan Africa, and Southeast Asia accounted for 49%, 25%, and 26% of the total stock, respectively. By analyzing the errors propagated through the estimation process, uncertainty at the pixel level (100 ha) ranged from ±6% to ±53%, but was constrained at the typical project (10,000 ha) and national (>1,000,000 ha) scales at ca. ±5% and ca. ±1%, respectively. The benchmark map illustrates regional patterns and provides methodologically comparable estimates of carbon stocks for 75 developing countries where previous assessments were either poor or incomplete. PMID:21628575

  9. The Gediz River fluvial archive: A benchmark for Quaternary research in Western Anatolia

    NASA Astrophysics Data System (ADS)

    Maddy, D.; Veldkamp, A.; Demir, T.; van Gorp, W.; Wijbrans, J. R.; van Hinsbergen, D. J. J.; Dekkers, M. J.; Schreve, D.; Schoorl, J. M.; Scaife, R.; Stemerdink, C.; van der Schriek, T.; Bridgland, D. R.; Aytaç, A. S.

    2017-06-01

    The Gediz River, one of the principal rivers of Western Anatolia, has an extensive Pleistocene fluvial archive that potentially offers a unique window into fluvial system behaviour on the western margins of Asia during the Quaternary. In this paper we review our work on the Quaternary Gediz River Project (2001-2010) and present new data which leads to a revised stratigraphical model for the Early Pleistocene development of this fluvial system. In previous work we confirmed the preservation of eleven buried Early Pleistocene fluvial terraces of the Gediz River (designated GT11, the oldest and highest, to GT1, the youngest and lowest) which lie beneath the basalt-covered plateaux of the Kula Volcanic Province. Deciphering the information locked in this fluvial archive requires the construction of a robust geochronology. Fortunately, the Gediz archive provides ample opportunity for age-constraint based upon age estimates derived from basaltic lava flows that repeatedly entered the palaeo-Gediz valley floors. In this paper we present, for the first time, our complete dataset of 40Ar/39Ar age estimates and associated palaeomagnetic measurements. These data, which can be directly related to the underlying fluvial deposits, provide age constraints critical to our understanding of this sequence. The new chronology establishes the onset of Quaternary volcanism at ∼1320 ka (MIS42). This volcanism, which is associated with GT6, confirms a pre-MIS42 age for terraces GT11-GT7. Evidence from the colluvial sequences directly overlying these early terraces suggests that they formed in response to hydrological and sediment budget changes forced by climate-driven vegetation change. The cyclic formation of terraces and their timing suggests they represent the obliquity-driven climate changes of the Early Pleistocene. By way of contrast the GT5-GT1 terrace sequence, constrained by a lava flow with an age estimate of ∼1247 ka, spans the time-interval MIS42 - MIS38 and therefore do not

  10. Mirror asymmetry for B(GT) of {sup 24}Si induced by Thomas-Ehrman shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichikawa, Y.; Kubo, T.; Aoi, N.

    We carried out beta-decay spectroscopy on 24Si in order to investigate a change in configuration in the wave function induced by the Thomas-Ehrman shift from the perspective of mirror asymmetry of B(GT). We observed two beta transitions to low-lying bound states in 24Al for the first time. In this proceeding, the B(GT) of 24Si is compared with that of the mirror nucleus 24Ne, and the mirror asymmetry of B(GT) is determined. Then the origin of the B(GT) asymmetry is discussed through comparison with theoretical calculations.
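    As background for the quantities compared here (standard allowed beta-decay relations, not taken from this record): B(GT) is extracted from the measured comparative half-life ft of each transition, and the mirror asymmetry is usually quoted as the deviation from unity of the ratio of the mirror ft values:

```latex
% Allowed beta decay: comparative half-life and mirror asymmetry (standard definitions)
ft \;=\; \frac{K}{B(\mathrm{F}) + \left(g_A/g_V\right)^{2} B(\mathrm{GT})},
\qquad
\delta \;=\; \frac{ft^{+}}{ft^{-}} - 1
```

    where K is a known constant and ft+ and ft- denote the values for the beta-plus and beta-minus members of the mirror pair.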

  11. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  12. GT-9 TEST - ASTRONAUT EDWARD H. WHITE -- MISCELLANEOUS

    NASA Image and Video Library

    1965-06-03

    S65-19600 (3 June 1965) --- The prime crew for the Gemini-Titan 4 mission have an early morning breakfast prior to their historic flight which was launched at 10:16 a.m. (EST) on June 3, 1965. Shown here seated around the table (clockwise starting front center) are Dr. D. Owens Coons, chief, MSC Center Medical Office; astronaut James A. McDivitt, GT-4 command pilot; Dr. Eugene F. Tubbs, Kennedy Space Center; Rt. Rev. James Heiliky, McDivitt's priest at Cocoa Beach, Florida; Msgr. Irvine J. Nugent and astronaut Edward H. White II, GT-4 pilot. The group had a breakfast of tomato juice, broiled sirloin steak, poached eggs, toast, strawberry gelatin and coffee.

  13. Genetically Modified Flax Expressing NAP-SsGT1 Transgene: Examination of Anti-Inflammatory Action

    PubMed Central

    Matusiewicz, Magdalena; Kosieradzka, Iwona; Zuk, Magdalena; Szopa, Jan

    2014-01-01

    The aim of the work was to define the influence of dietary supplementation with GM (genetically modified) GT#4 flaxseed cake enriched in polyphenols on inflammation development in mouse liver. Mice were given ad libitum isoprotein diets for 96 days: (1) a standard diet; (2) a high-fat diet rich in lard; or the high-fat diet enriched with 30% of (3) isogenic flax Linola seed cake or (4) GM GT#4 flaxseed cake. Administration of transgenic and isogenic seed cakes lowered body weight gain, in the transgenic group down to the standard-diet level. Serum total antioxidant status was statistically significantly improved in the GT#4 flaxseed cake group and did not differ from Linola. Serum thiobarbituric acid reactive substances, the lipid profile and the liver concentration of the pro-inflammatory cytokine tumor necrosis factor-α were ameliorated by GM and isogenic flaxseed cake consumption. The level of the pro-inflammatory cytokine interferon-γ did not differ between mice obtaining GM GT#4 and non-GM flaxseed cakes. The C-reactive protein concentration was reduced in animals fed GT#4 flaxseed cake and did not differ from those fed the non-GM flaxseed cake-based diet. Similarly, the liver structure of mice consuming diets enriched in flaxseed cake was improved. Dietary enrichment with GM GT#4 and non-GM flaxseed cakes may be a promising solution for health problems resulting from an improper diet. PMID:25247574

  14. GT-57633 catalogue of Martian impact craters developed for evaluation of crater detection algorithms

    NASA Astrophysics Data System (ADS)

    Salamunićcar, Goran; Lončarić, Sven

    2008-12-01

    Crater detection algorithms (CDAs) are an important subject of the recent scientific research. A ground truth (GT) catalogue, which contains the locations and sizes of known craters, is important for the evaluation of CDAs in a wide range of CDA applications. Unfortunately, previous catalogues of craters by other authors cannot be easily used as GT. In this paper, we propose a method for integration of several existing catalogues to obtain a new craters catalogue. The methods developed and used during this work on the GT catalogue are: (1) initial screening of used catalogues; (2) evaluation of self-consistency of used catalogues; (3) initial registration from three different catalogues; (4) cross-evaluation of used catalogues; (5) additional registrations and registrations from additional catalogues; and (6) fine-tuning and registration with additional data-sets. During this process, all craters from all major currently available manually assembled catalogues were processed, including catalogues by Barlow, Rodionova, Boyce, Kuzmin, and our previous work. Each crater from the GT catalogue contains references to crater(s) that are used for its registration. This provides direct access to all properties assigned to craters from the used catalogues, which can be of interest even to those scientists that are not directly interested in CDAs. Having all these craters in a single catalogue also provides a good starting point for searching for craters still not catalogued manually, which is also expected to be one of the challenges of CDAs. The resulting new GT catalogue contains 57,633 craters, significantly more than any previous catalogue. From this point of view, GT-57633 catalogue is currently the most complete catalogue of large Martian impact craters. Additionally, each crater from the resulting GT-57633 catalogue is aligned with MOLA topography and, during the final review phase, additionally registered/aligned with 1/256° THEMIS-DIR, 1/256° MDIM and 1/256° MOC
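    The registration steps above amount to deciding when a crater in one catalogue and a crater in another are the same feature. A minimal sketch of such a matching rule is given below; the tolerance values, field layout and the simple pairwise search are illustrative assumptions, not the procedure actually used to build GT-57633.

```python
import math

def angular_distance_deg(lon1, lat1, lon2, lat2):
    """Great-circle separation (degrees) between two points on a sphere."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_d = (math.sin(p1) * math.sin(p2) +
             math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

def match_craters(cat_a, cat_b, max_offset_frac=0.5, max_diam_ratio=1.25):
    """Pair craters (lon_deg, lat_deg, diameter_km) from two catalogues when
    their centres lie closer than max_offset_frac of the larger diameter and
    their diameters agree within max_diam_ratio. Returns index pairs."""
    mars_km_per_deg = 59.2  # ~km per degree of arc on Mars (2*pi*3389.5/360)
    matches = []
    for i, (lon_a, lat_a, d_a) in enumerate(cat_a):
        for j, (lon_b, lat_b, d_b) in enumerate(cat_b):
            sep_km = angular_distance_deg(lon_a, lat_a, lon_b, lat_b) * mars_km_per_deg
            if (sep_km <= max_offset_frac * max(d_a, d_b) and
                    max(d_a, d_b) / min(d_a, d_b) <= max_diam_ratio):
                matches.append((i, j))
    return matches

# Tiny illustrative catalogues: (longitude_deg, latitude_deg, diameter_km).
catalogue_a = [(175.4, -14.6, 120.0), (30.1, 22.5, 45.0)]
catalogue_b = [(175.5, -14.5, 118.0), (300.0, -60.0, 80.0)]
print(match_craters(catalogue_a, catalogue_b))   # -> [(0, 0)]
```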

  15. NRC/GT: Six Year One Research Studies.

    ERIC Educational Resources Information Center

    Gubbins, E. Jean, Ed.

    1992-01-01

    This newsletter focuses on six Year 1 research projects associated with the National Research Center on the Gifted and Talented (NRC/GT). The updates address: "Regular Classroom Practices With Gifted Students: Findings from the Classroom Practices Survey" (Francis X. Archambault, Jr. and others); "The Classroom Practices Study:…

  16. The National Research Center on the Gifted and Talented (NRC/GT) Newsletter, 1998.

    ERIC Educational Resources Information Center

    Gubbins, E. Jean, Ed.; Siegle, Del, Ed.

    1998-01-01

    These two newsletters of The National Research Center on the Gifted and Talented (NRC/GT) present articles concerned with research on the education of gifted and talented students. The articles are: "NRC/GT's Suggestions: Evaluating Your Programs and Services" (E. Jean Gubbins); "Professional Development Practices in Gifted Education: Results of a…

  17. Productive infection of Epstein-Barr virus (EBV) in EBV-genome-positive epithelial cell lines (GT38 and GT39) derived from gastric tissues.

    PubMed

    Takasaka, N; Tajima, M; Okinaga, K; Satoh, Y; Hoshikawa, Y; Katsumoto, T; Kurata, T; Sairenji, T

    1998-08-01

    We characterized the expression of Epstein-Barr virus (EBV) in two epithelial cell lines, GT38 and GT39, derived from human gastric tissues. The EBV nuclear antigen (EBNA) was detected in all cells of both cell lines. The EBV immediate-early BZLF1 protein (ZEBRA), the early antigen diffuse component (EA-D), and one of the EBV envelope proteins (gp350/220) were expressed spontaneously in a small proportion of the cells. EBNA1, EBNA2, latent membrane protein 1, ZEBRA, and EA-D molecules were then observed in the cells by Western blotting. The lytic cycle was enhanced by treatment with 12-O-tetradecanoylphorbol-13-acetate (TPA) or n-butyrate. Virus particles were observed in the TPA-treated GT38 cells by electron microscopy. Infectious EBV was detected by the transformation of cord blood lymphocytes and also by the induction of early antigen in Raji cells by the supernatants of both cell lines. A major single and minor multiple fused terminal fragments, together with a ladder of smaller fragments of the EBV genome, were detected with a XhoI probe in both cell lines. These epithelial cell lines and the viruses they carry will be useful for studying the association of EBV with gastric epithelial cells.

  18. GEMINI-TITAN (GT)-4 - EARTH-SKY - OUTER SPACE

    NASA Image and Video Library

    1965-06-03

    S65-34776 (3-7 June 1965) --- This photograph shows the Nile Delta, Egypt, the Suez Canal, Israel, Jordan, Syria, Saudi Arabia, and Iraq as seen from the Gemini-Titan 4 (GT-4) spacecraft during its 12th revolution of Earth.

  19. Recommended observational skills training for IAEA safeguards inspections. Final report: Recommended observational skills training for IAEA safeguards inspections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toquam, J.L.; Morris, F.A.

    This is the second of two reports prepared to assist the International Atomic Energy Agency (IAEA or Agency) in enhancing the effectiveness of its international safeguards inspections through inspector training in "Observational Skills". The first (Phase 1) report was essentially exploratory. It defined Observational Skills broadly to include all appropriate cognitive, communications, and interpersonal techniques that have the potential to help IAEA safeguards inspectors function more effectively. It identified 10 specific Observational Skills components, analyzed their relevance to IAEA safeguards inspections, and reviewed a variety of inspection programs in the public and private sectors that provide training in one or more of these components. The report concluded that while it should be possible to draw upon these other programs in developing Observational Skills training for IAEA inspectors, the approaches utilized in these programs will likely require significant adaptation to support the specific job requirements, policies, and practices that define the IAEA inspector's job. The overall objective of this second (Phase 2) report is to provide a basis for the actual design and delivery of Observational Skills training to IAEA inspectors. The more specific purposes of this report are to convey a fuller understanding of the potential application of Observational Skills to the inspector's job, describe inspector perspectives on the relevance and importance of particular Observational Skills, identify the specific Observational Skills components that are most important and relevant to enhancing safeguards inspections, and make recommendations as to Observational Skills training for the IAEA's consideration in further developing its Safeguards training program.

  20. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, and the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, is not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was obtained through an ever-narrowing search strategy, commencing from benchmarking within the quality improvement literature, through benchmarking activity in health services, and including access not only to published examples of benchmarking approaches and models used but also to web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative and specifically performance benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also in the main descriptive in its support of the effectiveness of

  1. LU60645GT and MA132843GT Catalogues of Lunar and Martian Impact Craters Developed Using a Crater Shape-based Interpolation Crater Detection Algorithm for Topography Data

    NASA Technical Reports Server (NTRS)

    Salamuniccar, Goran; Loncaric, Sven; Mazarico, Erwan Matias

    2012-01-01

    For Mars, 57,633 craters from the manually assembled catalogues and 72,668 additional craters identified using several crater detection algorithms (CDAs) have been merged into the MA130301GT catalogue. By contrast, for the Moon the most complete previous catalogue contains only 14,923 craters. Two recent missions provided higher-quality digital elevation maps (DEMs): SELENE (in 1/16° resolution) and Lunar Reconnaissance Orbiter (we used up to 1/512°). This was the main motivation for work on the new Crater Shape-based interpolation module, which improves previous CDA as follows: (1) it decreases the number of false-detections for the required number of true detections; (2) it improves detection capabilities for very small craters; and (3) it provides more accurate automated measurements of craters' properties. The results are: (1) LU60645GT, which is currently the most complete (up to D>=8 km) catalogue of Lunar craters; and (2) MA132843GT catalogue of Martian craters complete up to D>=2 km, which is the extension of the previous MA130301GT catalogue. As previously achieved for Mars, LU60645GT provides all properties that were provided by the previous Lunar catalogues, plus: (1) correlation between morphological descriptors from used catalogues; (2) correlation between manually assigned attributes and automated measurements; (3) average errors and their standard deviations for manually and automatically assigned attributes such as position coordinates, diameter, depth/diameter ratio, etc; and (4) a review of positional accuracy of used datasets. Additionally, surface dating could potentially be improved with the exhaustiveness of this new catalogue. The accompanying results are: (1) the possibility of comparing a large number of Lunar and Martian craters, of e.g. depth/diameter ratio and 2D profiles; (2) utilisation of a method for re-projection of datasets and catalogues, which is very useful for craters that are very close to poles; and (3) the extension of the

  2. A technique for measurement of earth station antenna G/T by radio stars and Applications Technology Satellites.

    NASA Technical Reports Server (NTRS)

    Kochevar, H. J.

    1972-01-01

    A new technique has been developed to accurately measure the G/T of a small aperture antenna using geostationary satellites and the well-established radio star method. A large aperture antenna, having the capability of accurately measuring its G/T by using a radio star of known power density, is used to obtain an accurate G/T as a reference. The C/N ratios of both the large and small aperture antennas are then measured using an Applications Technology Satellite (ATS). After normalizing the two C/N ratios to the large antenna system noise temperature, the G/T or the gain G of the small aperture antenna can then be determined.
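
    A minimal sketch of the comparison idea in the usual decibel bookkeeping (the function name and all numbers below are hypothetical, not values from the paper): once the large antenna's G/T has been calibrated against a radio star, the small antenna's G/T follows from the difference of the two carrier-to-noise ratios measured on the same satellite, after both have been normalized to a common system noise temperature.

    def small_antenna_gt_db(gt_large_db, cn_large_db, cn_small_db):
        """Transfer a reference G/T (dB/K) to a second antenna via the C/N difference (dB)."""
        return gt_large_db + (cn_small_db - cn_large_db)

    # Hypothetical illustration values only.
    gt_reference = 40.7   # dB/K, large aperture calibrated against a radio star
    cn_reference = 22.3   # dB, large aperture measuring the ATS carrier
    cn_small = 8.8        # dB, small aperture, same carrier, same noise normalization
    print(f"{small_antenna_gt_db(gt_reference, cn_reference, cn_small):.1f} dB/K")  # -> 27.2 dB/K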

  3. 78 FR 29810 - Receipt of Petition for Decision That Nonconforming 2003 BMW K 1200 GT Motorcycles Are Eligible...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-21

    ...-0061; Notice 1] Receipt of Petition for Decision That Nonconforming 2003 BMW K 1200 GT Motorcycles Are... (NHTSA) of a petition for a decision that 2003 BMW K 1200 GT Motorcycles that were not originally...-005) has petitioned NHTSA to decide whether non-U.S. certified 2003 BMW K 1200 GT motorcycles are...

  4. Influence of adiponectin gene polymorphism SNP276 (G/T) on adiponectin in response to exercise training.

    PubMed

    Huang, Hu; Tada Iida, Kaoruko; Murakami, Haruka; Saito, Yoko; Otsuki, Takeshi; Iemitsu, Motoyuki; Maeda, Seiji; Sone, Hirohito; Kuno, Shinya; Ajisaka, Ryuichi

    2007-12-01

    Adiponectin is an adipocytokine that is involved in insulin sensitivity. The adiponectin gene contains a single nucleotide polymorphism (SNP) at position 276 (G/T). The GG genotype of SNP276 (G/T) is associated with lower plasma adiponectin levels and a higher insulin resistance index. Therefore, we examined the influence of SNP276 (G/T) on the plasma level of adiponectin in response to exercise training. Thirty healthy Japanese (M12/F18; 56 to 79 years old) performed both resistance and endurance training, 5 times a week for 6 months. The work rate per kg of weight at double-product break-point (DPBP) was measured. Blood samples were obtained before and after the experiment. Plasma concentrations of adiponectin, HbA1c, insulin, glucose, total, high-density lipoprotein (HDL), and low-density lipoprotein (LDL) cholesterol, and triglyceride were measured. Genotypes of SNP276 were specified. Student's t-test for paired values and unpaired values was used. After the 6-month training period, the work rate per kg of weight at DPBP and the plasma HDL-cholesterol level were significantly improved (P<0.05), while no change was observed in the total plasma adiponectin level. However, the plasma adiponectin level in those with the GT + TT genotype had significantly increased (P<0.05). Additionally, the degree of the decrease in the HOMA-R level was significantly greater in the subjects with the GT + TT genotype than those with the GG genotype (p<0.05). Our results suggest that subjects with the genotype GT + TT at SNP276 (G/T) have a greater adiponectin-related response to exercise training than those with the GG genotype.

  5. 10 CFR 75.7 - Notification of IAEA safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Notification of IAEA safeguards. 75.7 Section 75.7 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) SAFEGUARDS ON NUCLEAR MATERIAL-IMPLEMENTATION OF US/IAEA AGREEMENT General Provisions § 75.7 Notification of IAEA safeguards. (a) The licensee must inform the NRC...

  6. 10 CFR 75.7 - Notification of IAEA safeguards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Notification of IAEA safeguards. 75.7 Section 75.7 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) SAFEGUARDS ON NUCLEAR MATERIAL-IMPLEMENTATION OF US/IAEA AGREEMENT General Provisions § 75.7 Notification of IAEA safeguards. (a) The licensee must inform the NRC...

  7. Improving the Transparency of IAEA Safeguards Reporting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toomey, Christopher; Hayman, Aaron M.; Wyse, Evan T.

    2011-07-17

    In 2008, the Standing Advisory Group on Safeguards Implementation (SAGSI) indicated that the International Atomic Energy Agency's (IAEA) Safeguards Implementation Report (SIR) has not kept pace with the evolution of safeguards and provided the IAEA with a set of recommendations for improvement. The SIR is the primary mechanism for providing an overview of safeguards implementation in a given year and reporting on the annual safeguards findings and conclusions drawn by the Secretariat. As the IAEA transitions to State-level safeguards approaches, SIR reporting must adapt to reflect these evolutionary changes. This evolved report will better reflect the IAEA's transition to a more qualitative and information-driven approach, based upon State-as-a-whole considerations. This paper applies SAGSI's recommendations to the development of multiple models for an evolved SIR and finds that an SIR repurposed as a 'safeguards portal' could significantly enhance information delivery, clarity, and transparency. In addition, this paper finds that the 'portal concept' also appears to have value as a standardized information presentation and analysis platform for use by Country Officers, for continuity of knowledge purposes, and the IAEA Secretariat in the safeguards conclusion process. Accompanying this paper is a fully functional prototype of the 'portal' concept, built using commercial software and IAEA Annual Report data.

  8. Certified reference materials for radionuclides in Bikini Atoll sediment (IAEA-410) and Pacific Ocean sediment (IAEA-412).

    PubMed

    Pham, M K; van Beek, P; Carvalho, F P; Chamizo, E; Degering, D; Engeler, C; Gascó, C; Gurriaran, R; Hanley, O; Harms, A V; Herrmann, J; Hult, M; Ikeuchi, Y; Ilchmann, C; Kanisch, G; Kis-Benedek, G; Kloster, M; Laubenstein, M; Llaurado, M; Mas, J L; Nakano, M; Nielsen, S P; Osvath, I; Povinec, P P; Rieth, U; Schikowski, J; Smedley, P A; Suplinska, M; Sýkora, I; Tarjan, S; Varga, B; Vasileva, E; Zalewska, T; Zhou, W

    2016-03-01

    The preparation and characterization of certified reference materials (CRMs) for radionuclide content in sediments collected offshore of Bikini Atoll (IAEA-410) and in the open northwest Pacific Ocean (IAEA-412) are described and the results of the certification process are presented. The certified radionuclides include: (40)K, (210)Pb ((210)Po), (226)Ra, (228)Ra, (228)Th, (232)Th, (234)U, (238)U, (239)Pu, (239+240)Pu and (241)Am for IAEA-410 and (40)K, (137)Cs, (210)Pb ((210)Po), (226)Ra, (228)Ra, (228)Th, (232)Th, (235)U, (238)U, (239)Pu, (240)Pu and (239+240)Pu for IAEA-412. The CRMs can be used for quality assurance and quality control purposes in the analysis of radionuclides in sediments, for development and validation of analytical methods and for staff training.

  9. Ganglioside GT1b protects human spermatozoa from hydrogen peroxide-induced DNA and membrane damage.

    PubMed

    Gavella, Mirjana; Garaj-Vrhovac, Verica; Lipovac, Vaskresenija; Antica, Mariastefania; Gajski, Goran; Car, Nikica

    2010-06-01

    We have reported previously that various gangliosides, the sialic acid-containing glycosphingolipids, provide protection against sperm injury caused by reactive oxygen species (ROS). In this study, we investigated the effect of treatment of human spermatozoa with ganglioside GT1b on hydrogen peroxide (H(2)O(2))-induced DNA fragmentation and plasma membrane damage. Single-cell gel electrophoresis (Comet assay) used in the assessment of sperm DNA integrity showed that in vitro supplemented GT1b (100 µM) significantly reduced DNA damage induced by H(2)O(2) (200 µM) (p < 0.05). Measurements of Annexin V binding in combination with propidium iodide vital dye labelling demonstrated that spermatozoa pre-treated with GT1b exhibited a significant increase (p < 0.05) in the percentage of live cells with intact membranes and decreased phosphatidylserine translocation after exposure to H(2)O(2). Flow cytometry using the intracellular ROS-sensitive fluorescent dichlorodihydrofluorescein diacetate dye, employed to investigate the transport of extracellularly supplied H(2)O(2) into the cell interior, revealed that ganglioside GT1b completely inhibited the passage of H(2)O(2) through the sperm membrane. These results suggest that ganglioside GT1b may protect human spermatozoa from H(2)O(2)-induced damage by rendering the sperm membrane more hydrophobic, thus inhibiting the diffusion of H(2)O(2) across the membrane.

  10. Turbomachinery design considerations for the nuclear HTGR-GT power plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, C.F.; Smith, M.J.

    1979-11-01

    For several years, design studies have been under way in the USA on a nuclear closed-cycle gas turbine plant (HTGR-GT). Design aspects of the helium turbomachine portion of these studies are presented. Gas dynamic and mechanical design considerations are presented for helium turbomachines in the 400-MW(e) (non-intercooled) and 600-MW(e) (intercooled) power range. Design of the turbomachine is a key element in the overall power plant program effort, which is currently directed toward the selection of a reference HTGR-GT commercial plant configuration for the US utility market. A conservative design approach has been emphasized to provide maximum safety and durability. The studies presented for the integrated plant concept outline the necessary close working relationship between the reactor primary system and turbomachine designers.

  11. Observing Campaign for Potential Deep Impact Flyby Target 163249 (2002 GT)

    NASA Technical Reports Server (NTRS)

    Pittichova, Jana; Chesley, S. R.; Abell, P. A.; Benner, L. A. M.

    2012-01-01

    The Deep Impact spacecraft is currently on course for a Jan. 4, 2020 flyby of the sub-kilometer near-Earth asteroid 163249 (2002 GT). The re-targeting will be complete with a final small maneuver scheduled for Oct. 4, 2012. 2002 GT, which is also designated as a Potentially Hazardous Asteroid (PHA), has a well-determined orbit and is approx 800 m in diameter (H=18.3). Little more is known about the nature of this object, but in mid-2013 it will pass near the Earth, affording an exceptional opportunity for ground-based characterization. At this apparition 2002 GT will be in range of Arecibo. In addition to Doppler measurements, radar delay observations with precisions of a few microseconds are expected and have a good chance of revealing whether the system is binary or not. The asteroid will be brighter than 16th mag., which will facilitate a host of observations at a variety of wavelengths. Light curve measurements across a wide range of viewing perspectives will reveal the rotation rate and ultimately lead to strong constraints on the shape and pole orientation. Visible and infrared spectra will constrain the mineralogy, taxonomy, albedo and size. Along with the radar observations, optical astrometry will further constrain the orbit, both to facilitate terminal guidance operations and to potentially reveal nongravitational forces acting on the asteroid. Coordinating all of these observations will be a significant task and we encourage interested observers to collaborate in this effort. The 2013 apparition of 2002 GT represents a unique opportunity to characterize a potential flyby target, which will aid interpretation of the high-resolution flyby imagery and aid planning and development of the flyby imaging sequence. The knowledge gained from this flyby will be highly relevant to the human exploration program at NASA, which desires more information on the physical characteristics of sub-kilometer near-Earth asteroids.

  12. Asymmetric GT of social networks

    NASA Astrophysics Data System (ADS)

    Szu, Harold

    2010-04-01

    Web citation indexes are computed according to a data vector X collected from the frequency of user accesses and from citations weighted by other sites' popularities, and modified by financial sponsorship in a proprietary manner. The indexing that determines the information retrieved by the public should be made transparently accountable in at least two ways. One is to balance the inbound linkages pointing at a specific i-th site, called its popularity (see paper for equation), against the outbound linkages (see paper for equation), called its risk factor, before the release of new information, in the manner of an environmental impact analysis. The relationship between these two factors cannot be assumed to be equivalent (undirected), as it is in many mainstream Graph Theory (GT) models.

  13. IAEA support to medical physics in nuclear medicine.

    PubMed

    Meghzifene, Ahmed; Sgouros, George

    2013-05-01

    Through its programmatic efforts and its publications, the International Atomic Energy Agency (IAEA) has helped define the role and responsibilities of the nuclear medicine physicist in the practice of nuclear medicine. This paper describes the initiatives that the IAEA has undertaken to support medical physics in nuclear medicine. In 1984, the IAEA provided guidance on how to ensure that the equipment used for detecting, imaging, and quantifying radioactivity is functioning properly (Technical Document [TECDOC]-137, "Quality Control of Nuclear Medicine Instruments"). An updated version of IAEA-TECDOC-137 was issued in 1991 as IAEA-TECDOC-602, and this included new chapters on scanner-computer systems and single-photon emission computed tomography systems. Nuclear medicine physics was introduced as a part of a project on radiation imaging and radioactivity measurements in the 2002-2003 IAEA biennium program in Dosimetry and Medical Radiation Physics. Ten years later, IAEA activities in this field have expanded to cover quality assurance (QA) and quality control (QC) of nuclear medicine equipment, education and clinical training, professional recognition of the role of medical physicists in nuclear medicine physics, and finally, the coordination of research and development activities in internal dosimetry. As a result of these activities, the IAEA has received numerous requests to support the development and implementation of QA or QC programs for radioactivity measurements in nuclear medicine in many Member States. During the last 5 years, support was provided to 20 Member States through the IAEA's technical cooperation programme. The IAEA has also supported education and clinical training of medical physicists. This type of support has been essential for the development and expansion of the Medical Physics profession, especially in low- and middle-income countries. The need for basic as well as specialized clinical training in medical physics was identified as a

  14. Gemini Program Mission Report for Gemini-Titan 1 (GT-1)

    NASA Technical Reports Server (NTRS)

    1964-01-01

    The Gemini-Titan 1 (GT-1) space vehicle consisted of the Gemini spacecraft and the Gemini launch vehicle. The Gemini launch vehicle is a two-stage modified Titan II ICBM. The major modifications are the addition of a malfunction detection system and a secondary flight controls system. The Gemini spacecraft, designed to carry a crew of two men on earth orbital and rendezvous missions, was unmanned for the flight reported herein (GT-1). There were no complete Gemini flight systems on board; however, the C-band transponder and telemetry transmitters were Gemini flight subsystems. Dummy equipment, having a mass and moment of inertia equal to flight system equipment, was installed in the spacecraft. The spacecraft was instrumented to obtain data on spacecraft heating, structural loading, vibration, sound pressure levels, and temperature and pressure during the launch phase.

  15. RECRUITMENT OF U.S. CITIZENS FOR VACANCIES IN IAEA SAFEGUARDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PEPPER,S.E.; DECARO,D.; WILLIAMS,G.

    The International Atomic Energy Agency (IAEA) relies on its member states to assist with recruiting qualified individuals for positions within the IAEA's secretariat. It is important that persons within and outside the US nuclear and safeguards industries become aware of career opportunities available at the IAEA, and informed about important vacancies. The IAEA has established an impressive web page to advertise opportunities for employment. However, additional effort is necessary to ensure that there is sufficient awareness in the US of these opportunities, and assistance for persons interested in taking positions at the IAEA. In 1998, the Subgroup on Safeguards Technical Support (SSTS) approved a special task under the US Support Program to IAEA Safeguards (USSP) for improving US efforts to identify qualified candidates for vacancies in IAEA's Department of Safeguards. The International Safeguards Project Office (ISPO) developed a plan that includes increased advertising, development of a web page to support US recruitment efforts, feedback from the US Mission in Vienna, and interaction with other recruitment services provided by US professional organizations. The main purpose of this effort is to educate US citizens about opportunities at the IAEA so that qualified candidates can be identified for the IAEA's consideration.

  16. The National Research Center on the Gifted and Talented (NRC/GT) Newsletter, June 1991-Winter 1997.

    ERIC Educational Resources Information Center

    Gubbins, E. Jean, Ed.; Siegle, Del L., Ed.

    1997-01-01

    These 15 newsletters from the National Research Center on the Gifted and Talented (NRC/GT) contain the following articles: (1) "National Research Needs Assessment Process" (Brian D. Reid); (2) "NRC/GT: Update of Year 2 Activities" (E. Jean Gubbins); (3) "Parents: Their Impact on Gifted Adolescents" (Julie L. Sherman);…

  17. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  18. 10 CFR 75.12 - Communication of information to IAEA.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Communication of information to IAEA. 75.12 Section 75.12 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) SAFEGUARDS ON NUCLEAR MATERIAL-IMPLEMENTATION OF US/IAEA AGREEMENT Facility and Location Information § 75.12 Communication of information to IAEA. (a) Except as...

  19. 10 CFR 75.12 - Communication of information to IAEA.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Communication of information to IAEA. 75.12 Section 75.12 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) SAFEGUARDS ON NUCLEAR MATERIAL-IMPLEMENTATION OF US/IAEA AGREEMENT Facility and Location Information § 75.12 Communication of information to IAEA. (a) Except as...

  20. Optimizing the G/T ratio of the DSS-13 34-meter beam-waveguide antenna

    NASA Technical Reports Server (NTRS)

    Esquivel, M. S.

    1992-01-01

    Calculations using Physical Optics computer software were done to optimize the gain-to-noise temperature (G/T) ratio of DSS-13, the DSN's 34-m beam-waveguide antenna, at X-band for operation with the ultra-low-noise amplifier maser system. A better G/T value was obtained by using a 24.2-dB far-field-gain smooth-wall dual-mode horn than by using the standard X-band 22.5-dB-gain corrugated horn.

  1. ASTRONAUT EDWARD H. WHITE II - GEMINI-TITAN (GT)-IV - ZERO GRAVITY - OUTER SPACE

    NASA Image and Video Library

    2015-03-20

    S65-30427 (3 June 1965) --- Astronaut Edward H. White II, pilot for the Gemini-Titan 4 (GT-4) spaceflight, floats in the zero-gravity of space during the third revolution of the GT-4 spacecraft. White wears a specially designed spacesuit. His face is shaded by a gold-plated visor to protect him from the unfiltered rays of the sun. In his right hand he carries a Hand-Held Self-Maneuvering Unit (HHSMU) that gives him control over his movements in space. White also wears an emergency oxygen chest pack, and he carries a camera mounted on the HHSMU for taking pictures of the sky, Earth and the GT-4 spacecraft. He is secured to the spacecraft by a 25-foot umbilical line and a 23-foot tether line. Both lines are wrapped together in gold tape to form one cord. Astronaut James A. McDivitt, command pilot, remained inside the spacecraft during the extravehicular activity (EVA). Photo credit: NASA EDITOR'S NOTE: Astronaut Edward H. White II died in the Apollo/Saturn 204 fire at Cape Kennedy on Jan. 27, 1967.

  2. USSP-IAEA WORKSHOP ON ADVANCED SENSORS FOR SAFEGUARDS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PEPPER,S.; QUEIROLO, A.; ZENDEL, M.

    2007-11-13

    The IAEA Medium Term Strategy (2006-2011) defines a number of specific goals with respect to the IAEA's ability to provide assurances to the international community regarding the peaceful use of nuclear energy through States' adherence to their respective non-proliferation treaty commitments. The IAEA has long used and still needs the best possible sensors to detect and measure nuclear material. The Department of Safeguards, recognizing the importance of safeguards-oriented R&D, especially targeting improved detection capabilities for undeclared facilities, materials and activities, initiated a number of activities in early 2005. The initiatives included letters to Member State Support Programs (MSSPs), personal contacts with known technology holders, topical meetings, consultant reviews of safeguards technology, and special workshops to identify new and novel technologies and methodologies. In support of this objective, the United States Support Program to IAEA Safeguards hosted a workshop on "Advanced Sensors for Safeguards" in Santa Fe, New Mexico, from April 23-27, 2007. The Organizational Analysis Corporation, a U.S.-based management consulting firm, organized and facilitated the workshop. The workshop's goal was to help the IAEA identify and plan for new sensors for safeguards implementation. The workshop, which was attended by representatives of seven member states and international organizations, included presentations by technology holders and developers on new technologies thought to have relevance to international safeguards, but not yet in use by the IAEA. The presentations were followed by facilitated breakout sessions where the participants considered two scenarios typical of what IAEA inspectors might face in the field. One scenario focused on an enrichment plant; the other scenario focused on a research reactor. The participants brainstormed using the technologies presented by the participants and other technologies known to them to propose

  3. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  4. Optimizing the G/T ratio of the DSS-13 34-meter beam-waveguide antenna

    NASA Technical Reports Server (NTRS)

    Esquivel, M. S.

    1992-01-01

    Calculations using Physical Optics computer software were done to optimize the gain-to-noise-temperature (G/T) ratio of Deep Space Station (DSS)-13, the Deep Space Network's (DSN's) 34-m beam-waveguide antenna, at X-band for operation with the ultra-low-noise amplifier maser system. A better G/T value was obtained by using a 24.2-dB far-field-gain smooth-wall dual-mode horn than by using the standard X-band 22.5-dB-gain corrugated horn.

  5. Improving Quality and Access to Radiation Therapy-An IAEA Perspective.

    PubMed

    Abdel-Wahab, May; Zubizarreta, Eduardo; Polo, Alfredo; Meghzifene, Ahmed

    2017-04-01

    The International Atomic Energy Agency (IAEA) has been involved in radiation therapy since soon after its creation in 1957. In response to the demands of Member States, the IAEA's activities relating to radiation therapy have focused on supporting low- and middle-income countries to set up radiation therapy facilities, expand the scope of treatments, or gradually transition to new technologies. In addition, the IAEA has been very active in providing internationally harmonized guidelines on clinical, dosimetry, medical physics, and safety aspects of radiation therapy. IAEA clinical research has provided evidence for treatment improvement as well as highly effective resource-sparing interventions. In the process, training of researchers occurs through this program. To provide this support, the IAEA works with its Member States and multiple partners worldwide through several mechanisms. In this article, we review the main activities conducted by the IAEA in support of radiation therapy. IAEA support has been crucial for achieving tangible results in many low- and middle-income countries. However, long-term sustainability of projects can present a challenge, especially when considering health budget constraints and the brain drain of skilled professionals. The need for support remains, with more than 90% of patients in low-income countries lacking access to radiotherapy. Thus, the IAEA is expected to continue its support and strengthen quality radiation therapy treatment of patients with cancer.

  6. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  7. Comparison of older adults' steps per day using NL-1000 pedometer and two GT3X+ accelerometer filters.

    PubMed

    Barreira, Tiago V; Brouillette, Robert M; Foil, Heather C; Keller, Jeffrey N; Tudor-Locke, Catrine

    2013-10-01

    The purpose of this study was to compare the steps/d derived from the ActiGraph GT3X+ using the manufacturer's default filter (DF) and low-frequency-extension filter (LFX) with those from the NL-1000 pedometer in an older adult sample. Fifteen older adults (61-82 yr) wore a GT3X+ (24 hr/day) and an NL-1000 (waking hours) for 7 d. Day was the unit of analysis (n = 86 valid days) comparing (a) GT3X+ DF and NL-1000 steps/d and (b) GT3X+ LFX and NL-1000 steps/d. DF was highly correlated with NL-1000 (r = .80), but there was a significant mean difference (-769 steps/d). LFX and NL-1000 were highly correlated (r = .90), but there also was a significant mean difference (8,140 steps/d). Percent difference and absolute percent difference between DF and NL-1000 were -7.4% and 16.0%, respectively, and for LFX and NL-1000 both were 121.9%. Regardless of filter used, GT3X+ did not provide comparable pedometer estimates of steps/d in this older adult sample.
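
    The day-level agreement statistics reported above (mean difference, percent difference, absolute percent difference and correlation) are straightforward to compute once the two devices' daily step counts are paired. The sketch below uses hypothetical step counts rather than the study data and is intended only to show how such comparison metrics are typically derived:

    from statistics import mean, stdev

    def compare_steps(device, criterion):
        """Day-level agreement statistics between two equal-length step-count series."""
        diffs = [d - c for d, c in zip(device, criterion)]
        pct = [100.0 * (d - c) / c for d, c in zip(device, criterion)]
        # Pearson correlation between the two daily series
        mx, my = mean(device), mean(criterion)
        cov = sum((x - mx) * (y - my) for x, y in zip(device, criterion))
        pearson_r = cov / ((len(device) - 1) * stdev(device) * stdev(criterion))
        return {
            "mean_difference": mean(diffs),
            "percent_difference": mean(pct),
            "absolute_percent_difference": mean([abs(p) for p in pct]),
            "pearson_r": pearson_r,
        }

    # Hypothetical daily steps: accelerometer (default filter) vs. pedometer, 5 valid days.
    gt3x_df = [5200, 6100, 4800, 7300, 5600]
    nl1000 = [5900, 6800, 5400, 8100, 6200]
    print(compare_steps(gt3x_df, nl1000))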

  8. Conformational analysis of GT1B ganglioside and its interaction with botulinum neurotoxin type B: a study by molecular modeling and molecular dynamics.

    PubMed

    Venkateshwari, Sureshkumar; Veluraja, Kasinadar

    2012-01-01

    The conformational properties of the oligosaccharide GT1B in an aqueous environment were studied by molecular dynamics (MD) simulation using an all-atom model. Based on the trajectory analysis, three prominent conformational models were proposed for GT1B. Direct and water-mediated hydrogen bonding interactions stabilize these structures. Molecular modeling and a 15 ns MD simulation of the botulinum neurotoxin type B (BoNT/B)-GT1B complex revealed that BoNT/B can accommodate GT1B in a single binding mode. Least mobility was seen for oligo-GT1B in the binding pocket. The bound conformation of GT1B obtained from the MD simulation of the BoNT/B-GT1B complex bears a close conformational similarity to the crystal structure of the BoNT/A-GT1B complex. The mobility noticed for Arg 1268 in the dynamics was accounted for by its favorable interaction with the terminal NeuNAc. The internal NeuNAc1 tends to form 10 hydrogen bonds with BoNT/B, marking this particular site as a crucial target for therapeutic design aimed at restricting the pathogenic activity of BoNT/B.

  9. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  10. Designing Ground Antennas for Maximum G/T: Cassegrain or Gregorian?

    NASA Technical Reports Server (NTRS)

    Imbriale, William A.

    2005-01-01

    For optimum performance, a ground antenna system must maximize the ratio of received signal to the receiving system noise power, defined as the ratio of antenna gain to system noise temperature (G/T). The total system noise temperature is the linear combination of the receiver noise temperature (including the feed system losses) and the antenna noise contribution. Hence, for very low noise cryogenic receiver systems, antenna noise-temperature properties are very significant contributors to G/T. It is well known that, for dual reflector systems designed for maximum gain, the gain performance of the antenna system is the same for both Cassegrain and Gregorian configurations. For a 12-meter antenna designed to be part of the large-array-based Deep Space Network, a Cassegrain configuration designed for maximum G/T at X-band was 0.7 dB higher than the equivalent Gregorian configuration. This study demonstrates that, for maximum G/T, the dual-shaped Cassegrain design is always better than the Gregorian.
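
    As a reminder of how the figure of merit is assembled (a minimal sketch with hypothetical numbers, not the study's Physical Optics results): G/T in dB/K is the antenna gain in dBi minus 10*log10 of the total system noise temperature, where the system temperature is the linear sum of the receiver noise (including feed losses) and the antenna noise contribution.

    import math

    def g_over_t_db(gain_dbi, t_receiver_k, t_antenna_k):
        """Figure of merit in dB/K from gain (dBi) and noise temperatures (kelvin)."""
        t_system = t_receiver_k + t_antenna_k   # linear combination of noise contributions
        return gain_dbi - 10.0 * math.log10(t_system)

    # Hypothetical values, roughly in the range of a 12-m X-band station.
    gain = 59.5  # dBi
    print(f"{g_over_t_db(gain, 15.0, 10.0):.1f} dB/K")  # cooler antenna: ~45.5 dB/K
    print(f"{g_over_t_db(gain, 15.0, 15.0):.1f} dB/K")  # a few K more antenna noise: ~44.7 dB/K

    With a cryogenic receiver of only a few tens of kelvin, even a small change in antenna noise temperature shifts G/T by several tenths of a dB, which is why the antenna noise term dominates the Cassegrain-versus-Gregorian comparison.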

  11. GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.

    PubMed

    Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun

    2018-01-19

    Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the large data size, WGS data analysis is usually compute-intensive and IO-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost by elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing the Amazon Web Services (AWS). Our system won the first prize in the Wind and Cloud challenge held by the Genomics and Cloud Technology Alliance conference (GCTA) committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluated the performance of GT-WGS with a 55× WGS dataset (400 GB fastq) provided by the GCTA 2017 competition. In the best case, it took only 18.4 min to finish the analysis and the AWS cost of the whole process was only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated the performance of GT-WGS on a real-world dataset provided by the XiangYa hospital, which consists of 5× whole-genome data for 500 samples; on average, GT-WGS managed to finish one 5× WGS analysis task in 2.4 min at a cost of $3.6. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by the time cost and computing cost. GT-WGS excels as an efficient and affordable WGS analysis tool to address this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo .

  12. Benchmarking in emergency health systems.

    PubMed

    Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg

    2002-12-01

    This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.

  13. IAEA Sampling Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William H.

    2017-09-15

    The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.

  14. Structural Characterization of the Foliated-Layered Gabbro Transition in Wadi Tayin of the Samail Ophiolite, Oman; Oman Drilling Project Holes GT1A and GT2A

    NASA Astrophysics Data System (ADS)

    Deans, J. R.; Crispini, L.; Cheadle, M. J.; Harris, M.; Kelemen, P. B.; Teagle, D. A. H.; Matter, J. M.; Takazawa, E.; Coggon, J. A.

    2017-12-01

    Oman Drilling Project Holes GT1A and GT2A were drilled into the Wadi Tayin massif, Samail ophiolite, and both recovered ca. 400 m of continuous core through a section of the layered gabbros and the foliated-layered gabbro transition. Hole GT1A is cut by a discrete fault system including localized thin ultracataclastic fault zones. Hole GT2A is cut by a wider zone of brittle deformation and incipient brecciation. Here we report the structural history of the gabbros, from formation at the ridge to later obduction. Magmatic and high-temperature history: 1) Both cores exhibit a pervasive, commonly well-defined magmatic foliation delineated by plagioclase, olivine and in places clinopyroxene. Minor magmatic deformation is present. 2) The dip of the magmatic foliation varies cyclically, gradually changing dip by 30° from gentle to moderate over a 50 m wavelength. 3) Layering is present throughout both cores, is defined by changes in mode and grain size ranging in thickness from 2 cm to 3 m, and is commonly sub-parallel to the foliation. 4) There are no high-temperature crystal-plastic shear zones in the core. Key observations include: no simple, systematic shallowing of dip with depth across the foliated-layered gabbro transition, and layering is continuous across this transition. Cyclic variation of the magmatic foliation dip most likely reflects the process of plate separation at the ridge axis. Near-axis faulting: i) On- or near-axis structures consist of epidote-amphibole-bearing hydraulic breccias and some zones of intense cataclasis with intensely deformed epidote and seams of clay and chlorite, accompanied by syntectonic alteration of the wall rock. Early veins are filled with amphibole, chlorite, epidote, and anhydrite. ii) The deformation ranges from brittle-ductile, causing local deflection of the magmatic foliation, to brittle offset of the foliation and core-and-mantle structures in anhydrite veins. iii) The prevalent sense of shear is normal and slickenfibers

  15. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  16. Selective propagation of mouse-passaged scrapie prions with long incubation period from a mixed prion population using GT1-7 cells.

    PubMed

    Miyazawa, Kohtaro; Masujin, Kentaro; Okada, Hiroyuki; Ushiki-Kaku, Yuko; Matsuura, Yuichi; Yokoyama, Takashi

    2017-01-01

    In our previous study, we demonstrated the propagation of mouse-passaged scrapie isolates with long incubation periods (L-type) derived from natural Japanese sheep scrapie cases in murine hypothalamic GT1-7 cells, along with disease-associated prion protein (PrPSc) accumulation. We here analyzed the susceptibility of GT1-7 cells to scrapie prions by exposure to infected mouse brains at different passages, following interspecies transmission. Wild-type mice challenged with a natural sheep scrapie case (Kanagawa) exhibited heterogeneity of transmitted scrapie prions in early passages, and this mixed population converged upon one with a short incubation period (S-type) following subsequent passages. However, when GT1-7 cells were challenged with these heterologous samples, L-type prions became dominant. This study demonstrated that the susceptibility of GT1-7 cells to L-type prions was at least 10⁵ times higher than that to S-type prions and that L-type prion-specific biological characteristics remained unchanged after serial passages in GT1-7 cells. This suggests that a GT1-7 cell culture model would be more useful for the economical and stable amplification of L-type prions at the laboratory level. Furthermore, this cell culture model might be used to selectively propagate L-type scrapie prions from a mixed prion population.

  17. Overview of Hole GT2A: Drilling middle gabbro in Wadi Tayin massif, Oman ophiolite

    NASA Astrophysics Data System (ADS)

    Takazawa, E.; Kelemen, P. B.; Teagle, D. A. H.; Coggon, J. A.; Harris, M.; Matter, J. M.; Michibayashi, K.

    2017-12-01

    Hole GT2A (UTM: 40Q 655960.7E / 2529193.5N) was drilled by the Oman Drilling Project (OmDP) into Wadi Gideah of the Wadi Tayin massif in the Samail ophiolite, Oman. OmDP is an international collaboration supported by the International Continental Scientific Drilling Program, the Deep Carbon Observatory, NSF, IODP, JAMSTEC, and the European, Japanese, German and Swiss Science Foundations, with in-kind support in Oman from the Ministry of Regional Municipalities and Water Resources, Public Authority of Mining, Sultan Qaboos University, and the German University of Technology. Hole GT2A was diamond cored from 25 Dec 2016 to 18 Jan 2017 to a total depth of 406.77 m. The outer surfaces of the cores were imaged and described on site before being curated, boxed and shipped to the IODP drill ship Chikyu, where they underwent comprehensive visual and instrumental analysis. 33 shipboard scientists were divided into six teams (Igneous, Alteration, Structural, Geochem, Physical Properties, Paleomag) to describe and analyze the cores. Hole GT2A drilled through the transition between foliated and layered gabbro. The transition zone occurs between 50 and 150 m curation-corrected depth (CCD). The top 50 m of Hole GT2A is foliated gabbro, whereas the bottom 250 m consists of layered gabbro. Brittle fracture is observed throughout the core. The intensity of alteration veining decreases from the top to the bottom of the hole. On the basis of changes in grain size and/or modal abundance and/or the appearance/disappearance of primary igneous mineral(s), five lithological units are defined in Hole GT2A (Units I to V). The uppermost part of Hole GT2A (Unit I) is dominated by fine-grained granular olivine gabbro intercalated with less dominant medium-grained granular olivine gabbro and rare coarse-grained varitextured gabbro. The lower part of the Hole (Units II, III and V) is dominated by medium-grained olivine gabbro, olivine melagabbro and olivine-bearing gabbro. Modally-graded rhythmic layering with

  18. Structural characterization of O- and C-glycosylating variants of the landomycin glycosyltransferase LanGT2.

    PubMed

    Tam, Heng Keat; Härle, Johannes; Gerhardt, Stefan; Rohr, Jürgen; Wang, Guojun; Thorson, Jon S; Bigot, Aurélien; Lutterbeck, Monika; Seiche, Wolfgang; Breit, Bernhard; Bechthold, Andreas; Einsle, Oliver

    2015-02-23

    The structures of the O-glycosyltransferase LanGT2 and the engineered, C-C bond-forming variant LanGT2S8Ac show how the replacement of a single loop can change the functionality of the enzyme. Crystal structures of the enzymes in complex with a nonhydrolyzable nucleotide-sugar analogue revealed that there is a conformational transition to create the binding sites for the aglycon substrate. This induced-fit transition was explored by molecular docking experiments with various aglycon substrates.

  19. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  20. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  1. A reconstructed computerized tomographic comparison of Ni-Ti rotary GT files versus traditional instruments in canals shaped by novice operators.

    PubMed

    Gluskin, A H; Brown, D C; Buchanan, L S

    2001-09-01

    The aim of this study was to compare the effects of preparation with conventional stainless steel Flexofiles and Gates Glidden burs versus nickel-titanium GT rotary files in the shaping of mesial root canals of extracted mandibular molars. A total of 54 canals from 27 mesial roots of mandibular molar teeth were prepared using one of two methods by novice dental students. One canal in each root was prepared by a crown-down approach, utilizing stainless steel Flexofiles and Gates Glidden burs. The other canal was prepared using nickel-titanium GT rotary files in a crown-down fashion as recommended by the manufacturer. Preoperative CT scans of each root were recorded and 50 canal specimens were available for postoperative comparisons. Following canal shaping, postoperative scans were superimposed on the original images. Changes in canal area, canal transportation and thickness of remaining root structure at strategic levels of the root were analyzed. The time taken for each method was also noted. At the coronal and mid-root coronal one-third sections, the rotary GT files produced a significantly smaller postoperative canal area (P < 0.05). In the mid-root sections there was significantly less transportation of the root canal toward the furcation, and less thinning of the root structure, with GT files compared to the stainless steel files (P < 0.05). Overall, there was greater conservation of structure coronally and more adequate shape at the mid-root level. The GT rotary technique was significantly faster than the stainless steel hand-held file technique (P < 0.0001). Two GT instruments fractured during the study. Under the conditions of this study, novice dental students were able to prepare curved root canals with Ni-Ti GT rotary files with less transportation and greater conservation of tooth structure compared with canals prepared with hand instruments. The rotary technique was significantly faster.

  2. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.
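
    A generic harness of the kind described, sketched here under the assumption of a toy one-dimensional constant-velocity scenario (none of the names or numbers below correspond to any specific government benchmark): the algorithm under test is run many times with different random seeds, and a summary error metric is aggregated across runs so the comparison reflects the spread of possible inputs rather than a single draw.

    import math
    import random

    def run_once(tracker, seed, n_steps=50, meas_sigma=2.0):
        """One Monte Carlo trial: noisy measurements of a constant-velocity truth."""
        rng = random.Random(seed)
        truth, estimate, sq_err = 0.0, 0.0, 0.0
        for _ in range(n_steps):
            truth += 1.0                              # truth advances one unit per step
            z = truth + rng.gauss(0.0, meas_sigma)    # noisy position measurement
            estimate = tracker(estimate, z)
            sq_err += (estimate - truth) ** 2
        return math.sqrt(sq_err / n_steps)            # per-run RMSE

    def benchmark(tracker, n_runs=200):
        """Aggregate the error metric over many seeds, as benchmark suites typically do."""
        return sum(run_once(tracker, seed) for seed in range(n_runs)) / n_runs

    # Toy algorithm under test: a fixed-gain filter that predicts the unit step each cycle.
    alpha_filter = lambda est, z: (est + 1.0) + 0.5 * (z - (est + 1.0))
    print(f"mean RMSE over Monte Carlo runs: {benchmark(alpha_filter):.2f}")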

  3. GT200 getting better than 34% efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, R.

    1980-01-01

    Design features are described for the GT200, a 50-Hz machine blending high-temperature advanced aircraft rotating components and a heavy-frame industrial gas turbine structure. It includes a twin-spool gas generator with a two-stage power turbine giving nominal performance of 85,000 kW ISO peak output with a 10,120 Btu per kW-h heat rate on LHV distillate. It is designed for base, intermediate, or peak load operation in simple or combined cycle. Stal-Laval in Sweden developed it and sold the first unit to the Swedish State Power Board in July 1977. The unit was installed at the Stallbocka Station.

  4. Selective propagation of mouse-passaged scrapie prions with long incubation period from a mixed prion population using GT1-7 cells

    PubMed Central

    Masujin, Kentaro; Okada, Hiroyuki; Ushiki-Kaku, Yuko; Matsuura, Yuichi; Yokoyama, Takashi

    2017-01-01

    In our previous study, we demonstrated the propagation of mouse-passaged scrapie isolates with long incubation periods (L-type) derived from natural Japanese sheep scrapie cases in murine hypothalamic GT1-7 cells, along with disease-associated prion protein (PrPSc) accumulation. We here analyzed the susceptibility of GT1-7 cells to scrapie prions by exposure to infected mouse brains at different passages, following interspecies transmission. Wild-type mice challenged with a natural sheep scrapie case (Kanagawa) exhibited heterogeneity of transmitted scrapie prions in early passages, and this mixed population converged upon one with a short incubation period (S-type) following subsequent passages. However, when GT1-7 cells were challenged with these heterologous samples, L-type prions became dominant. This study demonstrated that the susceptibility of GT1-7 cells to L-type prions was at least 10^5 times higher than that to S-type prions and that L-type prion-specific biological characteristics remained unchanged after serial passages in GT1-7 cells. This suggests that a GT1-7 cell culture model would be more useful for the economical and stable amplification of L-type prions at the laboratory level. Furthermore, this cell culture model might be used to selectively propagate L-type scrapie prions from a mixed prion population. PMID:28636656

  5. Benchmarking and the laboratory

    PubMed Central

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  6. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  7. The evaluation of MCI, MI, PMI and GT on both genders with different age and dental status.

    PubMed

    Bozdag, G; Sener, S

    2015-01-01

    The aim of this study was to measure the mandibular cortical index (MCI), mental index (MI), panoramic mandibular index (PMI) and cortical bone thickness in the zone of the gonial angle (GT) in panoramic radiographies from a large sample of males and females and to determine how they relate to patients' age, gender and dental status. 910 panoramic radiographs were obtained and grouped into age, dental status and gender. The MCI, MI, PMI and GT were analysed. Remarkable differences were observed for MCI and GT regarding gender, age groups and dental status on both sides (p < 0.05). While age and dental status had an effect on the MI and PMI in females, dental status had an effect on the MI and PMI in males (p < 0.05). Also, gender had an effect on the MI and PMI (p < 0.05). The effects of age and tooth loss are different in females and males. In females, the harmful effects of tooth loss and age are more prominent according to the PMI and MI measurements. The effects of age and tooth loss in the GT and MCI measurements are similar, and these indices can be accepted as more reliable in studies including both genders.

  8. The evaluation of MCI, MI, PMI and GT on both genders with different age and dental status

    PubMed Central

    Sener, S

    2015-01-01

    Objectives: The aim of this study was to measure the mandibular cortical index (MCI), mental index (MI), panoramic mandibular index (PMI) and cortical bone thickness in the zone of the gonial angle (GT) in panoramic radiographies from a large sample of males and females and to determine how they relate to patients' age, gender and dental status. Methods: 910 panoramic radiographs were obtained and grouped into age, dental status and gender. The MCI, MI, PMI and GT were analysed. Results: Remarkable differences were observed for MCI and GT regarding gender, age groups and dental status on both sides (p < 0.05). While age and dental status had an effect on the MI and PMI in females, dental status had an effect on the MI and PMI in males (p < 0.05). Also, gender had an effect on the MI and PMI (p < 0.05). Conclusions: The effects of age and tooth loss are different in females and males. In females, the harmful effects of tooth loss and age are more prominent according to the PMI and MI measurements. The effects of age and tooth loss in the GT and MCI measurements are similar, and these indices can be accepted as more reliable in studies including both genders. PMID:26133366

  9. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  10. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast

  11. Correlation between the NPPB gene promoter c.-1298 G/T polymorphism site and pulse pressure in the Chinese Han population.

    PubMed

    Zeng, K; Wu, X D; Cai, H D; Gao, Y G; Li, G; Liu, Q C; Gao, F; Chen, J H; Lin, C Z

    2014-04-29

    The aim of this study was to investigate the correlation between the natriuretic peptide precursor B (NPPB) gene single nucleotide polymorphism (SNP) c.-1298 G/T and pulse pressure (PP) of the Chinese Han population and the association between genotype and clinical indicators of hypertension. Peripheral blood was collected from 180 unrelated patients with hypertension and 540 healthy volunteers (control group), and DNA was extracted to amplify the 5'-flanking region and 2 exons of the NPPB gene by polymerase chain reaction; the fragment was sequenced after purification. The clinical data of all subjects were recorded, the distribution of the NPPB gene c.-1298 G/T polymorphism was determined, and differences in clinical indicators between the two groups were evaluated. The mean arterial pressure, PP, and creatinine levels were significantly higher in the hypertension group than in the control group (P<0.05), but no other clinical indicators differed between the groups. There were no significant differences in genotype frequency and distribution of the NPPB gene c.-1298 G/T polymorphism between the hypertension group and the control group (P>0.05); in the control group, the mean PP of individuals with the SNP c.-1298 GG genotype was greater than that of individuals with the GT+TT genotype (P<0.05). In conclusion, there was no significant correlation between the NPPB gene c.-1298 G/T polymorphism and the incidence of essential hypertension in the Han population; however, the PP of the SNP c.-1298 GG genotype was greater than that of the GT+TT genotype in the control group.

  12. Benchmarking reference services: an introduction.

    PubMed

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  13. Benchmarking in Academic Pharmacy Departments

    PubMed Central

    Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O.; Ross, Leigh Ann

    2010-01-01

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation. PMID:21179251

  14. Benchmarking in academic pharmacy departments.

    PubMed

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  15. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking, and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  17. FireHose Streaming Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
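
    The generator/analytic split described above can be illustrated with a minimal in-process sketch. The real FireHose benchmarks stream formatted datums at high rate (e.g., over sockets), so the key/value scheme, the bias threshold, and the in-memory hand-off below are simplifying assumptions for illustration only.

        import random
        from collections import Counter

        def generator(n_datums=100_000, n_keys=5_000, seed=0):
            """Emit (key, value) datums; a few keys are 'anomalous' (always value 1)."""
            rng = random.Random(seed)
            anomalous = set(rng.sample(range(n_keys), 10))
            for _ in range(n_datums):
                key = rng.randrange(n_keys)
                value = 1 if key in anomalous else rng.choice([0, 1])
                yield key, value

        def analytic(stream, min_seen=16):
            """Flag keys whose values look heavily biased once enough datums were seen."""
            ones, total = Counter(), Counter()
            flagged = set()
            for key, value in stream:
                ones[key] += value
                total[key] += 1
                if total[key] >= min_seen and ones[key] / total[key] > 0.9:
                    flagged.add(key)
            return flagged

        print(len(analytic(generator())), "keys flagged as anomalous")

    In the actual suite the analytic must keep up with the generator's output rate, which is the quantitative dimension of the comparison.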

  18. International Scavenging for First Responder Guidance and Tools: IAEA Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern, W.; Berthelot, L.; Bachner, K.

    In fiscal years (FY) 2016 and 2017, with support from the U.S. Department of Homeland Security (DHS), Brookhaven National Laboratory (BNL) examined the International Atomic Energy Agency (IAEA) radiological emergency response and preparedness products (guidance and tools) to determine which of these products could be useful to U.S. first responders. The IAEA Incident and Emergency Centre (IEC), which is responsible for emergency preparedness and response, offers a range of tools and guidance documents for responders in recognizing, responding to, and recovering from radiation emergencies and incidents. In order to implement this project, BNL obtained all potentially relevant tools and products produced by the IAEA IEC and analyzed these materials to determine their relevance to first responders in the U.S. Subsequently, BNL organized and hosted a workshop at DHS National Urban Security Technology Laboratory (NUSTL) for U.S. first responders to examine and evaluate IAEA products to consider their applicability to the United States. This report documents and describes the First Responder Product Evaluation Workshop, and provides recommendations on potential steps the U.S. federal government could take to make IAEA guidance and tools useful to U.S. responders.

  19. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  20. MA130301GT catalogue of Martian impact craters and advanced evaluation of crater detection algorithms using diverse topography and image datasets

    NASA Astrophysics Data System (ADS)

    Salamunićcar, Goran; Lončarić, Sven; Pina, Pedro; Bandeira, Lourenço; Saraiva, José

    2011-01-01

    Recently, all the craters from the major currently available manually assembled catalogues have been merged into the catalogue with 57 633 known Martian impact craters (MA57633GT). In addition, the work on crater detection algorithm (CDA), developed to search for still uncatalogued impact craters using 1/128° MOLA data, resulted in MA115225GT. In parallel with this work another CDA has been developed which resulted in the Stepinski catalogue containing 75 919 craters (MA75919T). The new MA130301GT catalogue presented in this paper is the result of: (1) overall merger of MA115225GT and MA75919T; (2) 2042 additional craters found using Shen-Castan based CDA from the previous work and 1/128° MOLA data; and (3) 3129 additional craters found using CDA for optical images from the previous work and selected regions of 1/256° MDIM, 1/256° THEMIS-DIR, and 1/256° MOC datasets. All craters from MA130301GT are manually aligned with all used datasets. For all the craters that originate from the used catalogues (Barlow, Rodionova, Boyce, Kuzmin, Stepinski) we integrated all the attributes available in these catalogues. With such an approach MA130301GT provides everything that was included in these catalogues, plus: (1) the correlation between various morphological descriptors from used catalogues; (2) the correlation between manually assigned attributes and automated depth/diameter measurements from MA75919T and our CDA; (3) surface dating which has been improved in resolution globally; (4) average errors and their standard deviations for manually and automatically assigned attributes such as position coordinates, diameter, depth/diameter ratio, etc.; and (5) positional accuracy of features in the used datasets according to the defined coordinate system referred to as MDIM 2.1, which incorporates 1232 globally distributed ground control points, while our catalogue contains 130 301 cross-references between each of the used datasets. Global completeness of MA130301GT is up to
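
    A hedged sketch of the kind of positional cross-referencing involved when crater catalogues are merged and aligned: two entries are treated as the same crater when their centres and diameters agree within tolerances scaled by crater size. The tolerances, the toy catalogues, and the brute-force pairing below are illustrative only and are not the MA130301GT procedure.

        import math

        def great_circle_km(lon1, lat1, lon2, lat2, radius_km=3389.5):
            """Great-circle distance on Mars (mean radius ~3389.5 km)."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dlon = math.radians(lon2 - lon1)
            cos_angle = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dlon)
            return radius_km * math.acos(min(1.0, max(-1.0, cos_angle)))

        def match_craters(cat_a, cat_b, pos_tol=0.25, diam_tol=0.25):
            """Pair craters from two catalogues when centre separation and diameter
            difference are both small relative to the mean diameter."""
            matches = []
            for i, (lon_a, lat_a, d_a) in enumerate(cat_a):
                for j, (lon_b, lat_b, d_b) in enumerate(cat_b):
                    mean_d = 0.5 * (d_a + d_b)
                    if (great_circle_km(lon_a, lat_a, lon_b, lat_b) < pos_tol * mean_d
                            and abs(d_a - d_b) < diam_tol * mean_d):
                        matches.append((i, j))
            return matches

        cat_a = [(10.00, -5.00, 12.0), (40.00, 20.00, 3.5)]   # (lon_deg, lat_deg, diameter_km)
        cat_b = [(10.02, -5.01, 11.6), (75.00, -30.00, 8.0)]
        print(match_craters(cat_a, cat_b))   # expected: [(0, 0)]

    A production merge would use a spatial index rather than the quadratic loop shown here, but the matching criterion is the conceptually important part.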

  1. Clinical Utilisation and Usefulness of the Rating Scale of Mixed States ("GT-MSRS"): a Multicenter Study.

    PubMed

    Tavormina, Giuseppe; Franza, Francesco; Stranieri, Giuseppe; Juli, Luigi; Juli, Maria Rosaria

    2017-09-01

    The rating scale "G.T. MSRS" has been designed to improve the clinical effectiveness of the clinician psychiatrists, by enabling them to make an early "general" diagnosis of mixed states. The knowledge of the clinical features of the mixed states and of the symptoms of the "mixity" of mood disorders is crucial: to mis-diagnose or mis-treat patients with these symptoms may increase the suicide risk and make worse the evolution of mood disorders going to the dysphoric state. This study is the second validation study of the "G.T. MSRS" rating scale, in order to demonstrate its usefullness.

  2. Clinical utilisation of the "G.T. MSRS", the rating scale for mixed states: 35 cases report.

    PubMed

    Tavormina, Giuseppe

    2015-09-01

    The knowledge of the clinical features of the mixed states and of the symptoms of the "mixity" of mood disorders is crucial: to mis-diagnose or mis-treat patients with these symptoms may increase the suicide risk and worsen the evolution of mood disorders. The rating scale "G.T. MSRS" has been designed to improve the clinical effectiveness of both psychiatrists and GPs by enabling them to make an early "general" diagnosis of mixed states. This study presents some cases in which the "G.T. MSRS" scale has been used, in order to demonstrate its usefulness.

  3. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
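
    To make the idea of timing basic operations concrete, the sketch below measures scan, aggregation, join, and index-access queries on an in-memory SQLite database. XMarq itself targets full DBMS/hardware stacks and the TPC-H data model, so the schema, data sizes, and queries here are stand-ins chosen purely for illustration.

        import random
        import sqlite3
        import time

        def timed(label, cursor, sql):
            """Run one query and report its wall-clock time."""
            t0 = time.perf_counter()
            cursor.execute(sql).fetchall()
            print(f"{label:13s} {time.perf_counter() - t0:.4f} s")

        con = sqlite3.connect(":memory:")
        cur = con.cursor()
        cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region INTEGER)")
        cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust INTEGER, amount REAL)")
        cur.executemany("INSERT INTO customers VALUES (?, ?)",
                        [(i, i % 10) for i in range(10_000)])
        cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                        [(i, random.randrange(10_000), random.random() * 100)
                         for i in range(200_000)])
        cur.execute("CREATE INDEX idx_orders_cust ON orders(cust)")
        con.commit()

        timed("scan",         cur, "SELECT count(*) FROM orders WHERE amount > 50")
        timed("aggregation",  cur, "SELECT cust, sum(amount) FROM orders GROUP BY cust")
        timed("join",         cur, "SELECT c.region, sum(o.amount) FROM orders o "
                                   "JOIN customers c ON o.cust = c.id GROUP BY c.region")
        timed("index access", cur, "SELECT * FROM orders WHERE cust = 1234")

    Comparing two systems then amounts to running the same operation mix on both and comparing the per-operation timings.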

  4. Benchmarking Tool Kit.

    ERIC Educational Resources Information Center

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  5. A novel Tetra-primer ARMS-PCR based assay for genotyping SNP rs12303764(G/T) of human Unc-51 like kinase 1 gene.

    PubMed

    Randhawa, Rohit; Duseja, Ajay; Changotra, Harish

    2017-02-01

    Various case-control studies have shown association of the single nucleotide polymorphism rs12303764(G/T) in ULK1 with Crohn's disease. The techniques used in these studies were time-consuming, complicated and required sophisticated/expensive instruments. Therefore, in order to overcome these problems, we have developed a new, rapid and cost-effective Tetra-primer ARMS-PCR assay to genotype single nucleotide polymorphism rs12303764(G/T) of the ULK1 gene. We manually designed allele-specific primers. A DNA fragment amplified using the outer primers was sequenced to obtain samples with known genotypes (GG, GT and TT) for further use in the development of the T-ARMS-PCR assay. Amplification conditions were optimized for the following parameters: annealing temperature, Taq DNA polymerase and primers. The developed T-ARMS-PCR assay was applied to genotype one hundred samples from healthy individuals. Genotyping results of 10 DNA samples from healthy individuals for rs12303764(G/T) by the T-ARMS-PCR assay and sequencing were concordant. The newly developed assay was further applied to genotype samples from 100 healthy individuals of North Indian origin. Genotype frequencies were 9, 34 and 57% for GG, GT and TT, respectively. Allele frequencies were 0.26 and 0.74 for G and T, respectively. The allele frequencies were in Hardy-Weinberg equilibrium (p = 0.2443). The T-ARMS-PCR assay developed in our laboratory for genotyping rs12303764 (G/T) of the ULK1 gene is time-saving and cost-effective as compared to the available methods. Furthermore, this is the first study reporting allelic and genotype frequencies of ULK1 rs12303764 (G/T) variants in a North Indian population.
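
    For reference, the Hardy-Weinberg check reported above can be reproduced approximately from the published genotype counts (GG = 9, GT = 34, TT = 57) with a standard Pearson chi-square test. The exact test the authors used is not stated in the abstract, so agreement with their p = 0.2443 is not guaranteed, although this computation happens to give a value close to it.

        import math

        def hwe_chi_square(n_gg, n_gt, n_tt):
            """Pearson chi-square test (1 d.f.) of Hardy-Weinberg proportions."""
            n = n_gg + n_gt + n_tt
            p = (2 * n_gg + n_gt) / (2 * n)          # frequency of the G allele
            q = 1 - p
            expected = (n * p * p, 2 * n * p * q, n * q * q)
            chi2 = sum((obs - exp) ** 2 / exp
                       for obs, exp in zip((n_gg, n_gt, n_tt), expected))
            p_value = math.erfc(math.sqrt(chi2 / 2))  # survival function, chi-square with 1 d.f.
            return p, chi2, p_value

        g_freq, chi2, p_val = hwe_chi_square(9, 34, 57)
        print(f"G allele = {g_freq:.2f}, chi2 = {chi2:.3f}, p = {p_val:.4f}")

    The computed G allele frequency of 0.26 matches the value quoted in the abstract.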

  6. GEMINI-TITAN (GT)-10 - EARTH SKY - RENDEZVOUS - OUTER SPACE

    NASA Image and Video Library

    1966-07-18

    S66-46122 (18 July 1966) --- Agena Target Docking Vehicle 5005 is photographed from the Gemini-Titan 10 (GT-10) spacecraft during rendezvous in space. The two spacecraft are about 38 feet apart. After docking with the Agena, astronauts John W. Young, command pilot, and Michael Collins, pilot, fired the 16,000 pound thrust engine of Agena X's primary propulsion system to boost the combined vehicles into an orbit with an apogee of 413 nautical miles to set a new altitude record for manned spaceflight. Photo credit: NASA

  7. RELAP5 posttest calculation of IAEA-SPE-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petelin, S.; Mavko, B.; Parzer, I.

    The International Atomic Energy Agency's Fourth Standard Problem Exercise (IAEA-SPE-4) was performed at the PMK-2 facility. The PMK-2 facility is designed to study processes following small- and medium-size breaks in the primary system and natural circulation in VVER-440 plants. The IAEA-SPE-4 experiment represents a cold-leg side small break, similar to the IAEA-SPE-2, with the exception of the high-pressure safety injection being unavailable, and the secondary side bleed and feed initiation. The break valve was located at the dead end of a vertical downcomer, which in fact simulates a break in the reactor vessel itself, and should be unlikely to happen in a real nuclear power plant (NPP). Three different RELAP5 code versions were used for the transient simulation in order to assess the calculations with test results.

  8. Cleaning effectiveness and shaping ability of rotary ProTaper compared with rotary GT and manual K-Flexofile.

    PubMed

    Liu, Sheng-Bo; Fan, Bin; Cheung, Gary S P; Peng, Bing; Fan, Ming-Wen; Gutmann, James L; Song, Ya-Ling; Fu, Qiang; Bian, Zhuan

    2006-12-01

    To compare the cleaning efficacy and shaping ability of engine-driven ProTaper and GT files, and manual preparation using K-Flexofile instruments in curved root canals of extracted human teeth. 45 canals of maxillary and mandibular molars with curvatures between 25 degrees and 40 degrees were divided into three groups. The groups were balanced with regard to the angle and the radius of canal curvature. Canals in each group were prepared to an apical size of 25 with either the rotary ProTaper or GT system, or manually with K-Flexofile using the modified double-flared technique. Irrigation was done with 2 mL 2.5% NaOCl after each instrument and, as the final rinse, 10 mL 2.5% NaOCl then 10 mL 17% EDTA and finally 5 mL distilled water. The double-exposure radiographic technique was used to examine for the presence of apical transportation. The time required to complete the preparation, as well as any change in working length after preparation were recorded. The roots were then grooved and split longitudinally. The amounts of debris and smear layer were evaluated at the apical, middle and coronal regions under the scanning electron microscope. Data were analyzed either parametrically with the F-test or non-parametrically using the Kruskal-Wallis test, where appropriate. Two GT files but none of the K-Flexofile and ProTaper instruments separated. For debris removal, the ProTaper group achieved a better result than GT (P < 0.05) but not the K-Flexofile group at all three regions (apical, middle and coronal). K-Flexofiles produced significantly less smear layer than ProTaper and GT files only in the middle third of the canal (P < 0.01). Both NiTi rotary instruments maintained the original canal shape better than the K-Flexofiles (P < 0.05) and required significantly less time to complete the preparation.

  9. 10 CFR 150.17a - Compliance with requirements of US/IAEA Safeguards Agreement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Compliance with requirements of US/IAEA Safeguards... Authority in Agreement States § 150.17a Compliance with requirements of US/IAEA Safeguards Agreement. (a... shall take other action as may be necessary to implement the US/IAEA Safeguards Agreement, as described...

  10. 10 CFR 150.17a - Compliance with requirements of US/IAEA Safeguards Agreement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Compliance with requirements of US/IAEA Safeguards... Authority in Agreement States § 150.17a Compliance with requirements of US/IAEA Safeguards Agreement. (a... shall take other action as may be necessary to implement the US/IAEA Safeguards Agreement, as described...

  11. The Isprs Benchmark on Indoor Modelling

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  12. Developing Benchmarks for Solar Radio Bursts

    NASA Astrophysics Data System (ADS)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and Microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima, where it is even possible to do so, requires additional work in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, or phase 2, benchmarks.

  13. Comparison of GT3X accelerometer and Yamax pedometer steps/day in a free-living sample of overweight and obese adults

    USDA-ARS?s Scientific Manuscript database

    The purpose of this study was to compare steps/day detected by the YAMAX SW-200 pedometer versus the Actigraph GT3X accelerometer in free-living adults. Daily YAMAX and GT3X steps were collected from a sample of 23 overweight and obese participants (78% female; age = 52.6 +/- 8.4 yr.; BMI = 31.0 +/-...

  14. WILLIAMS, CLIFTON C. ASTRONAUT - MISSION CONTROL CENTER (MCC) - GEMINI-TITAN (GT)-3 - MSC

    NASA Image and Video Library

    1965-03-23

    S65-18063 (23 March 1965) --- Astronaut Clifton C. Williams is shown at console in the Mission Control Center (MCC) in Houston, Texas during the Gemini-Titan 3 flight. The GT-3 flight was monitored by the MCC in Houston, but was controlled by the MCC at Cape Kennedy.

  15. Modeling of displacement damage in silicon carbide detectors resulting from neutron irradiation

    NASA Astrophysics Data System (ADS)

    Khorsandi, Behrooz

    There is considerable interest in developing a power monitor system for Generation IV reactors (for instance the GT-MHR). A new type of semiconductor radiation detector is under development based on silicon carbide (SiC) technology for these reactors. SiC has been selected as the semiconductor material due to its superior thermal-electrical-neutronic properties. Compared to Si, SiC is a radiation-hard material; however, like Si, the properties of SiC are changed by irradiation by a large fluence of energetic neutrons, as a consequence of displacement damage, and that irradiation decreases the lifetime of detectors. Predictions of displacement damage and the concomitant radiation effects are important for deciding where the SiC detectors should be placed. The purpose of this dissertation is to develop computer simulation methods to estimate the number of various defects created in SiC detectors because of neutron irradiation, and to predict at what positions in a reactor SiC detectors could monitor the neutron flux with high reliability. The simulation modeling includes several well-known (and commercial) codes (MCNP5, TRIM, MARLOWE and VASP), and two kinetic Monte Carlo codes written by the author (MCASIC and DCRSIC). My dissertation will highlight the displacement damage that may happen in SiC detectors located in available positions in the OSURR, GT-MHR and IRIS. As additional modeling output, the count rates of SiC detectors at the specified locations are calculated. A conclusion of this thesis is that SiC detectors placed in the thermal neutron region of a graphite moderator-reflector reactor have a chance to survive at least one reactor refueling cycle, while their count rates are acceptably high.
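
    As a rough orientation to the quantities involved (and not a substitute for the MCNP/TRIM/kinetic Monte Carlo modelling described above), a first-order displacement estimate multiplies the neutron flux by an effective displacement cross-section. The flux and cross-section values below are placeholders, not results from the dissertation.

        BARN_CM2 = 1.0e-24

        def dpa_per_year(flux_n_cm2_s, displacement_xs_barn):
            """Displacements per atom per year ~ flux * displacement cross-section * time."""
            seconds_per_year = 3.156e7
            return flux_n_cm2_s * displacement_xs_barn * BARN_CM2 * seconds_per_year

        # Placeholder values: a fast-neutron flux of 1e12 n/cm^2/s and an effective
        # displacement cross-section of 500 barns (illustrative assumptions only).
        print(f"{dpa_per_year(1e12, 500):.3e} dpa/year")

    Detector placement then becomes a trade-off between a flux high enough for good counting statistics and a displacement rate low enough for acceptable detector lifetime.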

  16. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  17. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
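
    A minimal closed-loop sketch in the spirit of the motor-control benchmarks described above: a one-joint plant with an unknown constant force is driven by a PD controller plus an error-driven adaptive bias. The plant, gains, and learning rule are illustrative assumptions and are not the authors' benchmark code.

        def run_trial(adaptive=True, steps=2000, dt=0.01):
            """1-DOF unit-mass plant with an unknown constant force, PD control plus an
            error-driven adaptive bias that can learn to cancel the disturbance."""
            pos, vel, target = 0.0, 0.0, 1.0
            unknown_force = -3.0              # external force the controller does not model
            kp, kd, lr = 20.0, 5.0, 5.0
            bias = 0.0                        # adaptive feedforward term
            total_abs_error = 0.0
            for _ in range(steps):
                error = target - pos
                if adaptive:
                    bias += lr * error * dt   # error-driven learning rule
                acc = kp * error - kd * vel + bias + unknown_force
                vel += acc * dt
                pos += vel * dt
                total_abs_error += abs(error) * dt
            return total_abs_error

        print("fixed PD controller :", round(run_trial(adaptive=False), 3))
        print("PD + error-driven   :", round(run_trial(adaptive=True), 3))

    The accumulated error of the adaptive and non-adaptive runs plays the role of the benchmark score; in the paper's setting the controller would run on neuromorphic hardware against the minimal simulation rather than in the same process.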

  18. Identification of Small Molecules against Botulinum Neurotoxin B Binding to Neuronal Cells at Ganglioside GT1b Binding Site with Low to Moderate Affinity

    DTIC Science & Technology

    2014-10-01

    BoNT serotype B (BoNT/B) for the trisaccharide GT1b were identified from the x-ray crystal structure of the BoNT/B/trisaccharide (GT1b) complex ( PDB ...trisaccharide and all the water from the structure and identified four potential binding pockets (Pocket-1, Pocket-2, and Pocket-4) as shown in...four potential binding sites or pockets on BoNT serotype B (BoNT/B) for the trisaccharide GT1b were identified from the x-ray crystal structure of the

  19. Four Years of Practical Arrangements between IAEA and Moscow SIA 'Radon': Preliminary Results - 13061

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batyukhnova, O.G.; Karlina, O.K.; Neveikin, P.P.

    The International Education Training Centre (IETC) at Moscow State Unitary Enterprise Scientific and Industrial Association 'Radon' (SIA 'Radon'), in co-operation with the International Atomic Energy Agency (IAEA), has developed expertise and provided training to waste management personnel for the last 15 years. Since 1997, the educational system of the enterprise, with the support of the IAEA, has acquired an international character: more than 470 experts from 35 countries (IAEA Member States) completed professional development. Training is conducted at various thematic courses or fellowships for individual programs and seminars on IAEA technical projects. In June 2008 a direct agreement (Practical Arrangements) was signed between SIA 'Radon' and the IAEA on cooperation in the field of development of new technologies, expert advice to IAEA Member States, and, in particular, the training of personnel in the field of radioactive waste management (RWM), which opens up new perspectives for fruitful cooperation of industry professionals. The paper summarizes the current experience of SIA 'Radon' in the organization and implementation of the IAEA-sponsored training and other events and outlines some of the strategic educational elements which IETC will continue to pursue in the coming years. (authors)

  20. DNMT3B -579 G>T Promoter Polymorphism and the Risk of Gastric Cancer in the West of Iran.

    PubMed

    Ahmadi, Kulsom; Soleimani, Azam; Irani, Shiva; Kiani, Aliasghar; Ghanadi, Kourosh; Noormohamadi, Zahra; Sakinejad, Foroozan

    2018-06-01

    Many studies have suggested that modulation of DNMT3B function caused by single nucleotide polymorphisms of the DNMT3B promoter region may underlie the susceptibility to various cancers such as tumors of the digestive system. The aim of this study was to investigate the effect of the -579 G>T polymorphism in the promoter of the DNMT3B gene on the risk of gastric cancer in a population from West Iran. We conducted a case-control study in 100 gastric cancer patients and 112 cancer-free controls to assess the correlation between the DNMT3B -579 G>T (rs1569686) polymorphism and the risk of gastric cancer. Genotypes of the DNMT3B G39179T polymorphism were determined by PCR-RFLP. There was no significant difference in the distribution of DNMT3B -579 G>T genotypes between the cases and controls. However, in the analysis stratified by clinicopathological characteristics, we found that the risk of gastric cancer was significantly associated with the GT/TT genotype among patients with grade II tumors, compared to patients with the GG genotype (OR = 5.4737, 95% CI = 1.4746-20.3184, P = 0.01). Our study suggested that the -579 T allele may increase the relative risk associated with tumor grade in gastric cancer patients.
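
    For readers unfamiliar with the statistic quoted above, an odds ratio and its 95% confidence interval are commonly computed from a 2x2 table with the Woolf (log) method, as sketched below. The counts are hypothetical placeholders, since the underlying contingency table is not given in the abstract.

        import math

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """Odds ratio and 95% CI for a 2x2 table:
               a = exposed cases, b = exposed controls,
               c = unexposed cases, d = unexposed controls."""
            or_ = (a * d) / (b * c)
            se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            lo = math.exp(math.log(or_) - z * se_log)
            hi = math.exp(math.log(or_) + z * se_log)
            return or_, lo, hi

        # Hypothetical counts for illustration: GT/TT vs GG carriers among
        # grade II tumour cases and controls (not the study's actual data).
        print(odds_ratio_ci(a=14, b=30, c=4, d=47))

    A confidence interval whose lower bound exceeds 1, as in the reported result, is what supports the claim of a statistically significant association.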

  1. Strengthening radiopharmacy practice in IAEA Member States.

    PubMed

    Duatti, Adriano; Bhonsle, Uday

    2013-05-01

    Radiopharmaceuticals are essential components of nuclear medicine procedures. Without radiopharmaceuticals nuclear medicine procedures cannot be performed. Therefore it could be said that 'No radiopharmaceutical-no nuclear medicine.' A good radiopharmacy practice supports nuclear medicine activities by producing radiopharmaceuticals that are safe and are of the required quality in a consistent way. As with any medicinal product, radiopharmaceuticals are required to be produced under carefully controlled conditions and are tested for their quality, prior to the administration to patients, using validated standard operating procedures. These procedures are based on the principles of Good Manufacturing Practice (GMP). The GMP principles are based on scientific knowledge and applicable regulatory requirements and guidance related to radiopharmaceutical productions and use. The International Atomic Energy Agency (IAEA) is committed to promote, in the Member States (MS), a rational and practical approach for the implementation of GMP for compounding or manufacturing of diagnostic or therapeutic radiopharmaceuticals. To pursue this goal the IAEA has developed various mechanisms and collaborations with individual experts in the field and with relevant national and international institutions or organizations. IAEA's activities in promoting radiopharmaceutical science include commissioning expert advice in the form of publications on radiopharmaceutical production, quality control and usage, producing technical guidance on production and regulatory aspects related to new radiopharmaceuticals, creating guidance documentation for self or internal audits of radiopharmaceutical production facilities, producing guidance on implementation of Quality Management System and GMP in radiopharmacy, assisting in creation of specific radiopharmaceutical monographs for the International Pharmacopoeia, and developing radiopharmacy-related human resource capabilities in MS through individual

  2. GT-094, a NO-NSAID, inhibits colon cancer cell growth by activation of a reactive oxygen species-microRNA-27a: ZBTB10-specificity protein pathway.

    PubMed

    Pathi, Satya S; Jutooru, Indira; Chadalapaka, Gayathri; Sreevalsan, Sandeep; Anand, S; Thatcher, Gregory Rj; Safe, Stephen

    2011-02-01

    Ethyl 2-((2,3-bis(nitrooxy)propyl)disulfanyl)benzoate (GT-094) is a novel nitric oxide (NO) chimera containing a nonsteroidal anti-inflammatory drug (NSAID) and NO moieties and also a disulfide pharmacophore that in itself exhibits cancer chemopreventive activity. In this study, the effects and mechanism of action of GT-094 were investigated in RKO and SW480 colon cancer cells. GT-094 inhibited cell proliferation and induced apoptosis in both cell lines and this was accompanied by decreased mitochondrial membrane potential (MMP) and induction of reactive oxygen species (ROS), and these responses were reversed after cotreatment with the antioxidant glutathione. GT-094 also downregulated genes associated with cell growth [cyclin D1, hepatocyte growth factor receptor (c-Met), epidermal growth factor receptor (EGFR)], survival (bcl-2, survivin), and angiogenesis [VEGF and its receptors (VEGFR1 and VEGFR2)]. Results of previous RNA interference studies in this laboratory have shown that these genes are regulated, in part, by specificity protein (Sp) transcription factors Sp1, Sp3, and Sp4 that are overexpressed in colon and other cancer cell lines; not surprisingly, GT-094 also decreased Sp1, Sp3, and Sp4 in colon cancer cells. GT-094-mediated repression of Sp and Sp-regulated gene products was due to downregulation of microRNA-27a (miR-27a) and induction of ZBTB10, an Sp repressor that is regulated by miR-27a in colon cancer cells. Moreover, the effects of GT-094 on Sp1, Sp3, Sp4, miR-27a, and ZBTB10 were also inhibited by glutathione suggesting that the anticancer activity of GT-094 in colon cancer cells is due, in part, to activation of an ROS-miR-27a:ZBTB10-Sp transcription factor pathway.

  3. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
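
    The machine/program-characterization idea summarized above amounts to combining per-operation costs measured on a machine with operation counts measured for a program, so that execution time can be estimated for arbitrary machine/program pairs. The sketch below shows that combination; the operation categories and all numbers are invented for illustration and are not the report's actual abstract-machine parameters.

        # Machine characterization: seconds per abstract operation (measured once per machine)
        machine_a = {"int_op": 1.0e-9, "fp_op": 2.0e-9, "mem_ref": 4.0e-9, "branch": 1.5e-9}
        machine_b = {"int_op": 0.6e-9, "fp_op": 0.8e-9, "mem_ref": 5.0e-9, "branch": 1.0e-9}

        # Program characterization: abstract-operation counts (measured once per program)
        program = {"int_op": 4.0e9, "fp_op": 9.0e9, "mem_ref": 6.0e9, "branch": 1.5e9}

        def predicted_time(machine, program):
            """Estimated execution time = sum over operation types of count * unit cost."""
            return sum(program[op] * machine[op] for op in program)

        for name, machine in [("machine A", machine_a), ("machine B", machine_b)]:
            print(f"{name}: {predicted_time(machine, program):.2f} s predicted")

    The appeal of the approach is that each machine and each program is characterized only once, after which any pairing can be estimated without rerunning the program.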

  4. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  5. Fault displacement hazard assessment for nuclear installations based on IAEA safety standards

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2016-12-01

    In the IAEA Safety NS-R-3, surface fault displacement hazard assessment (FDHA) is required for the siting of nuclear installations. If any capable faults exist in the candidate site, IAEA recommends the consideration of alternative sites. However, due to the progress in palaeoseismological investigations, capable faults may be found at an existing site. In such a case, IAEA recommends evaluating the safety using probabilistic FDHA (PFDHA), which is an empirical approach based on a still quite limited database. Therefore a basic and crucial improvement is to increase the database. In 2015, IAEA produced TecDoc-1767 on Palaeoseismology as a reference for the identification of capable faults. Another IAEA Safety Report 85 on ground motion simulation based on fault rupture modelling provides an annex introducing recent PFDHAs and fault displacement simulation methodologies. The IAEA expanded the FDHA project to cover both the probabilistic approach and physics-based fault rupture modelling. The first approach needs a refinement of the empirical methods by building a worldwide database, and the second approach needs to shift from a kinematic to a dynamic scheme. Both approaches can complement each other, since simulated displacement can fill the gaps of a sparse database and geological observations can be useful to calibrate the simulations. The IAEA already supported a workshop in October 2015 to discuss the existing databases with the aim of creating a common worldwide database. A consensus on a unified database was reached. The next milestone is to fill the database with as many fault rupture data sets as possible. Another IAEA work group had a WS in November 2015 to discuss the state-of-the-art PFDHA as well as simulation methodologies. The two groups joined a consultancy meeting in February 2016, shared information, identified issues, discussed goals and outputs, and scheduled future meetings. Now we may aim at coordinating activities for the whole set of FDHA tasks jointly.

  6. Reduction of Hexavalent Chromium by Green Tea Polyphenols and Green Tea Nano Zero-Valent Iron (GT-nZVI).

    PubMed

    Chrysochoou, M; Reeves, K

    2017-03-01

    This study reports on the direct reduction of hexavalent chromium [Cr(VI)] by green tea polyphenols, including a green tea solution and pure epigallocatechin gallate (EGCG) solution. A linear trend was observed between the amount of reduced Cr(VI) and the amount of added polyphenols. The green tea solution showed a continued decrease in the observed stoichiometry with increasing pH, from a maximum of 1.4 mol per gallic acid equivalent (GAE) of green tea at pH 2.5, to 0.2 mol/GAE at pH 8.8. The EGCG solution exhibited different behavior, with a maximum stoichiometry of 2 at pH 7 and minimum of 1.6 at pH 4.4 and 8.9. When green tea was used to first react with Fe(3+) and form GT-nZVI, the amount of Cr(VI) reduced by a certain volume of GT-nZVI was double compared to green tea, and 6 times as high considering that GT-nZVI only contains 33% green tea.
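
    As a worked illustration of what the reported stoichiometry implies, the calculation below converts a hypothetical polyphenol dose into the mass of Cr(VI) it could reduce at pH 2.5, assuming the 1.4 mol/GAE figure refers to moles of Cr(VI) per mole of gallic acid equivalents. Both the dose and that molar interpretation are assumptions for illustration, not data from the study.

        # Illustrative calculation under the assumptions stated above.
        gae_mass_g = 0.050                     # hypothetical dose: 50 mg GAE of polyphenols
        gae_mol = gae_mass_g / 170.12          # molar mass of gallic acid ~170.12 g/mol
        cr6_mol = 1.4 * gae_mol                # stoichiometry at pH 2.5 from the abstract
        cr6_mass_mg = cr6_mol * 51.996 * 1000  # molar mass of Cr ~52.0 g/mol
        print(f"{gae_mass_g * 1000:.0f} mg GAE could reduce about {cr6_mass_mg:.1f} mg Cr(VI) at pH 2.5")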

  7. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  8. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system ran Ubuntu 12.04 as the operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find a significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  9. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
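
    The data-flow-graph idea can be illustrated with a few lines of Python: each node runs a stand-in task on the initialization data received from its predecessors and forwards its result to its successors. The node names and toy tasks below are hypothetical illustrations, not part of the NGB specification.

```python
# Minimal sketch of a data-flow-graph benchmark runner in the spirit of NGB.
# Each node applies a (toy) task to the data it receives from its predecessors.
from collections import defaultdict

def run_graph(nodes, edges, inputs):
    """nodes: {name: task_fn}, edges: list of (src, dst), inputs: {name: seed}."""
    preds = defaultdict(list)
    for src, dst in edges:
        preds[dst].append(src)
    results = {}
    remaining = set(nodes)
    while remaining:
        ready = [n for n in remaining if all(p in results for p in preds[n])]
        for name in ready:
            init_data = [results[p] for p in preds[name]] or [inputs.get(name, 0)]
            results[name] = nodes[name](init_data)  # node computes on its initialization data
            remaining.remove(name)
    return results

# Toy "tasks" standing in for slightly modified NPB kernels.
tasks = {"BT": lambda xs: sum(xs) + 1, "SP": lambda xs: sum(xs) * 2, "LU": lambda xs: max(xs) + 3}
print(run_graph(tasks, [("BT", "SP"), ("SP", "LU")], {"BT": 1}))
```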

  10. Overview of Hole GT3A: The sheeted dike/gabbro transition

    NASA Astrophysics Data System (ADS)

    Abe, N.; Harris, M.; Michibayashi, K.; de Obeso, J. C.; Kelemen, P. B.; Takazawa, E.; Teagle, D. A. H.; Coggon, J. A.; Matter, J. M.; Phase I Science Party, T. O. D. P.

    2017-12-01

    Hole GT3A (23.11409 N, 58.21172 E) was drilled by the Oman Drilling Project (OmDP) into Wadi Abdah of the Samail ophiolite, Oman. OmDP is an international collaboration supported by the International Continental Scientific Drilling Program, the Deep Carbon Observatory, NSF, IODP, JAMSTEC, and the European, Japanese, German and Swiss Science Foundations, with in-kind support in Oman from the Ministry of Regional Municipalities and Water Resources, Public Authority of Mining, Sultan Qaboos University, and the German University of Technology. Hole GT3A was diamond cored in February to March 2017 to a total depth of 400 m. The outer surfaces of the cores were imaged and described on site before being curated, boxed and shipped to the IODP drill ship Chikyu, where they underwent comprehensive visual and instrumental analysis. Hole GT3A recovered predominantly sheeted dikes and gabbros and has been sub-divided into four igneous groups based on the abundance of gabbro downhole. Group I (Upper Sheeted Dike Sequence) occurs from 0 to 111.02 m, Group II (Upper Gabbro Sequence) from 111.02 to 127.89 m, Group III (Lower Sheeted Dike Sequence) from 127.89 to 233.84 m, and Group IV (Lower Gabbro Sequence) from 233.84 to 400 m. Groups II and IV both contain almost equal proportions of dikes and gabbroic lithologies, whereas Groups I and III have >95% dikes. The sheeted dikes were logged as either basalt (46.9%) or diabase (26.2%) depending on the predominant grain size of the dike. Gabbroic lithologies include (most to least abundant) gabbro, oxide gabbro and olivine gabbro. Other lithologies present include diorite (7.5%) and tonalite and trondhjemite (1%). Tonalite and trondhjemite are present as cm-sized dikelets and are found within Groups II and IV. Gabbroic lithologies generally display a varitextured appearance and are characterised by the co-existence of poikilitic and granular domains. Detailed observations of chilled margins and igneous contacts reveal

  11. Development of teaching material to integrate GT-POWER into combustion courses for IC engine simulations.

    DOT National Transportation Integrated Search

    2009-02-01

    The main objective of this project was to develop instructional engineering projects that utilize the newly-offered PACE software GT-POWER for engine simulations in combustion-related courses at the Missouri University of Science and Technology. Stud...

  12. Turbulent transport measurements in a model of GT-combustor

    NASA Astrophysics Data System (ADS)

    Chikishev, L. M.; Gobyzov, O. A.; Sharaborin, D. K.; Lobasov, A. S.; Dulin, V. M.; Markovich, D. M.; Tsatiashvili, V. V.

    2016-10-01

    To reduce NOx formation, modern industrial power gas turbines utilize lean premixed combustion of natural gas. A uniform distribution of the local fuel/air ratio in the combustion chamber plays a key role in lean combustion by preventing thermo-acoustic pulsations. The present paper reports simultaneous Particle Image Velocimetry and acetone Planar Laser Induced Fluorescence measurements in a cold model of a GT combustor to investigate the mixing processes relevant to the organization of lean premixed combustion. Correlations between velocity and passive admixture pulsations were measured to verify the gradient closure model, which is often used in Reynolds-Averaged Navier-Stokes (RANS) simulations of turbulent mixing.

  13. Strengthening IAEA Safeguards for Research Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reid, Bruce D.; Anzelon, George A.; Budlong-Sylvester, Kory

    During their December 10-11, 2013, workshop in Grenoble, France, which focused on the history and future of safeguarding research reactors, the United States, France and the United Kingdom (UK) agreed to conduct a joint study exploring ways to strengthen the IAEA’s safeguards approach for declared research reactors. This decision was prompted by concerns about: 1) historical cases of non-compliance involving misuse (including the use of non-nuclear materials for production of neutron generators for weapons) and diversion that were discovered, in many cases, long after the violations took place and as part of a broader pattern of undeclared activities in half a dozen countries; 2) the fact that, under the Safeguards Criteria, the IAEA inspects some reactors (e.g., those with power levels under 25 MWt) less than once per year; 3) the long-standing precedent of States using heavy water research reactors (HWRR) to produce plutonium for weapons programs; 4) the use of HEU fuel in some research reactors; and 5) various technical characteristics common to some types of research reactors that could provide an opportunity for potential proliferators to misuse the facility or divert material with a low probability of detection by the IAEA. In some research reactors it is difficult to detect diversion or undeclared irradiation. In addition, infrastructure associated with research reactors could pose a safeguards challenge. To strengthen the effectiveness of safeguards at the State level, this paper advocates that the IAEA consider ways to focus additional attention and broaden its safeguards toolbox for research reactors. This increased focus on research reactors could begin with the recognition that a research reactor (of any size) could be a common path element on a large number of technically plausible pathways that must be considered when performing acquisition pathway analysis (APA) for developing a State Level Approach (SLA) and Annual Implementation Plan

  14. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  15. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  16. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    ERIC Educational Resources Information Center

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  17. GT-SUPREEM: the Georgia Tech summer undergraduate packaging research and engineering experience for minorities

    NASA Astrophysics Data System (ADS)

    May, Gary S.

    1996-07-01

    The Georgia Tech Summer Undergraduate Packaging Research and Engineering Experience for Minorities (GT-SUPREEM) is an eight-week summer program designed to attract qualified minority students to pursue graduate degrees in packaging-related disciplines. The program is conducted under the auspices of the Georgia Tech Engineering Research Center in Low-Cost Electronic Packaging, which is sponsored by the National Science Foundation. In this program, nine junior- and senior-level undergraduate students are selected on a nationwide basis and paired with a faculty advisor to undertake research projects in the Packaging Research Center. The students are housed on campus and provided with a $3,000 stipend and a travel allowance. At the conclusion of the program, the students present both oral and written project summaries. It is anticipated that this experience will motivate these students to apply for graduate study in ensuing years. This paper provides an overview of the GT-SUPREEM program, including student research activities, success stories, lessons learned, and overall program outlook.

  18. Translational benchmark risk analysis

    PubMed Central

    Piegorsch, Walter W.

    2010-01-01

    Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283

  19. The effect of S-substitution at the O6-guanine site on the structure and dynamics of a DNA oligomer containing a G:T mismatch

    PubMed Central

    2017-01-01

    The effect of S-substitution at the O6 guanine site of a 13-mer DNA duplex containing a G:T mismatch is studied using molecular dynamics. The structure, dynamic evolution and hydration of the S-substituted duplex are compared with those of a normal duplex, a duplex with S-substitution on guanine but no mismatch, and a duplex with just a G:T mismatch. The S-substituted mismatch leads to cell death rather than repair. One suggestion is that the G:T mismatch recognition protein recognises the S-substituted mismatch (GS:T) as G:T. This leads to a cycle of futile repair ending in DNA breakage and cell death. We find that some structural features of the helix, notably the helical twist, are similar for the duplex with the G:T mismatch and for that with the S-substituted mismatch, but differ from the normal duplex. These differences arise from the change in the hydrogen-bonding pattern of the base pair. However, a marked feature of the S-substituted G:T mismatch duplex is a very large opening, which showed considerable variability. It is suggested that this enlarged opening would lend support to an alternative model of cell death in which the mismatch protein attaches to thioguanine and activates downstream damage-response pathways. Attack on the sulphur by reactive oxygen species, also leading to cell death, would also be aided by the large, variable opening. PMID:28910418

  20. Training activities at FSUE 'RADON' and Lomonosov's Moscow state university under practical arrangements with IAEA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batyukhnova, O.G.; Karlina, O.K.; Neveykin, P.P.

    The International Education Training Centre (IETC) at Moscow Federal State Unitary Enterprise (FSUE) 'Radon', in co-operation with the International Atomic Energy Agency (IAEA), has developed expertise and provided training to waste management personnel for the last 15 years. Since 1997, the educational system of the enterprise, with the support of the IAEA, has acquired an international character: more than 470 experts from 35 IAEA Member States have completed professional development there. Training is conducted through various thematic courses, fellowships under individual programmes, and seminars on IAEA technical projects. In June 2008 a direct agreement (Practical Arrangements) was signed between FSUE 'Radon' and the IAEA on cooperation in the field of development of new technologies, expert advice to IAEA Member States, and, in particular, the training of personnel in the field of radioactive waste management (RWM), which opens up new perspectives for fruitful cooperation of industry professionals. A similar agreement (Practical Arrangements) was signed between Lomonosov's MSU and the IAEA in 2012. In October 2012 a new IAEA two-week training course started at Lomonosov's MSU and FSUE 'Radon' in the framework of the Practical Arrangements signed. Pre-disposal management of waste was the main topic of the courses. The paper summarizes the current experience of FSUE 'Radon' in the organization and implementation of IAEA-sponsored training and other events and outlines some of the strategic educational elements which IETC will continue to pursue in the coming years. (authors)

  1. Medical school benchmarking - from tools to programmes.

    PubMed

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  2. Thermo-economic comparative analysis of gas turbine GT10 integrated with air and steam bottoming cycle

    NASA Astrophysics Data System (ADS)

    Czaja, Daniel; Chmielnak, Tadeusz; Lepszy, Sebastian

    2014-12-01

    A thermodynamic and economic analysis of a GT10 gas turbine integrated with an air bottoming cycle is presented. The results are compared to commercially available combined cycle power plants based on the same gas turbine. The systems under analysis have a better chance of competing with steam bottoming cycle configurations in a small range of power output capacity. The aim of the calculations is to determine the final cost of electricity generated by the gas turbine air bottoming cycle based on a 25 MW GT10 gas turbine with an exhaust gas mass flow rate of about 80 kg/s. The article presents the results of a thermodynamic optimization of the technological structure of the gas turbine air bottoming cycle and of a comparative economic analysis. The quantities that have a decisive impact on the profitability and competitiveness of the considered units, compared to the popular technology based on the steam bottoming cycle, are determined. The ultimate quantity compared in the calculations is the cost of 1 MWh of electricity. It should be noted that the systems analyzed herein are power plants where electricity is the only generated product. The performed calculations do not take account of any other (potential) revenues from the sale of energy origin certificates. Keywords: Gas turbine air bottoming cycle, Air bottoming cycle, Gas turbine, GT10

  3. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  4. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.

  5. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
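
    As a concrete illustration of the metric point made above, the short Python sketch below compares the arithmetic and geometric means on two made-up sets of per-query times; the numbers are invented and are not taken from TPC-D.

```python
# Illustration: on skewed per-query times, the geometric mean and the
# arithmetic mean can reward opposite optimization strategies.
import math

def arithmetic_mean(times):
    return sum(times) / len(times)

def geometric_mean(times):
    return math.exp(sum(math.log(t) for t in times) / len(times))

baseline = [1, 1, 1, 1000]   # seconds per query: three fast queries, one very slow one
tuned    = [2, 2, 2, 500]    # the slow query is halved, but every fast query doubles

for label, times in (("baseline", baseline), ("tuned", tuned)):
    print(f"{label}: arithmetic = {arithmetic_mean(times):.1f} s, "
          f"geometric = {geometric_mean(times):.1f} s")
# The arithmetic mean improves (250.8 -> 126.5 s) while the geometric mean
# worsens (5.6 -> 8.0 s): the choice of mean changes which system "wins".
```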

  6. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  7. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
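
    A minimal Python sketch of the scheme described in the abstract: each machine is given a fixed benchmarking interval and is rated by how far it progresses through a scalable set of tasks. The pi-refinement "task" used here is a hypothetical stand-in for illustration, not the task store from the patent.

```python
# Fixed-interval, scalable benchmark sketch: the rating is the number of
# resolution-improving tasks completed before the interval expires.
import time

def benchmark(interval_seconds=1.0):
    deadline = time.perf_counter() + interval_seconds
    terms = 0
    estimate = 0.0
    # Leibniz series for pi: each extra term is one more unit of "resolution".
    while time.perf_counter() < deadline:
        estimate += (-1.0) ** terms * 4.0 / (2 * terms + 1)
        terms += 1
    return terms, estimate  # terms completed serves as the benchmark rating

if __name__ == "__main__":
    rating, pi_estimate = benchmark()
    print(f"tasks completed in interval: {rating}, pi estimate: {pi_estimate:.6f}")
```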

  8. Stress perception and (GT)n repeat polymorphism in haem oxygenase 1 promoter are both risk factors in development of eating disorders.

    PubMed

    Slachtová, L; Kaminská, D; Chvál, M; Králík, L; Martásek, P; Papežová, H

    2013-01-01

    Haem oxygenase 1 (HO-1) plays a pivotal role in metabolic stress, protecting cells in dependence on reactive oxygen species. This study investigated a potential gene-environment interaction between the (GT)n repeat HO1 polymorphism and stress perception in patients with eating disorders and in controls. Stress perception and the (GT)n polymorphism were measured in 127 patients with eating disorders and in 78 healthy controls using the Stress and Coping Inventory and genotyping. Based on the inventory, overall, specific and weighted stress scores were defined. A clinical stress score was generated from the patient's history and interviews. In line with our hypothesis, 1) all stress scores describing subjective stress perception were significantly higher in patients compared to controls (P ≤ 0.001; P ≤ 0.002; P ≤ 0.001), 2) the L/L genotype of GT promoter repeats (L ≥ 25 GT repeats, S < 25 GT repeats) in patients was associated with higher overall (P ≤ 0.001), specific (P ≤ 0.010) and weighted stress scores (P ≤ 0.005) compared to the L/S variant, and 3) Pearson's correlations of the clinical versus the subjective stress scores showed only a weak relationship (0.198, 0.287 and 0.224, respectively). We assume a potential risk of the L allele of the HO1 promoter polymorphism for the stress response, and a contribution of subjective stress perception together with the L/L genotype to the development of eating disorders. Decreased HO1 expression in the presence of the L/L genotype, plus more intensive stress perception in patients, can lead to secondary stress, with increasing severity of symptoms and aggravation of the disease.

  9. Integration of Ganglioside GT1b Receptor into DPPE and DPPC Phospholipid Monolayers: An X-Ray Reflectivity and Grazing-Incidence Diffraction Study

    PubMed Central

    Miller, C. E.; Busath, D. D.; Strongin, B.; Majewski, J.

    2008-01-01

    Using synchrotron grazing-incidence x-ray diffraction (GIXD) and reflectivity, the in-plane and out-of-plane structures of mixed-ganglioside GT1b-phospholipid monolayers were investigated at the air-liquid interface and compared with monolayers of the pure components. The receptor GT1b is involved in the binding of lectins and toxins, including botulinum neurotoxin, to cell membranes. Monolayers composed of 20 mol % ganglioside GT1b, the phospholipid dipalmitoyl phosphatidylethanolamine (DPPE), and the phospholipid dipalmitoyl phosphatidylcholine (DPPC) were studied in the gel phase at 23°C and at surface pressures of 20 and 40 mN/m, and at pH 7.4 and 5. Under these conditions, the two components did not phase-separate, and no evidence of domain formation was observed. The x-ray scattering measurements revealed that GT1b was intercalated within the host DPPE/DPPC monolayers, and slightly expanded DPPE but condensed the DPPC matrix. The oligosaccharide headgroups extended normally from the monolayer surfaces into the subphase. This study demonstrated that these monolayers can serve as platforms for investigating toxin membrane binding and penetration. PMID:18599631

  10. Benchmarking initiatives in the water industry.

    PubMed

    Parena, R; Smeets, E

    2001-01-01

    Customer satisfaction and service care are every day pushing professionals in the water industry to seek to improve their performance, lowering costs and increasing the service level provided. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. As a first step, the Task Force disseminated among the Committee members a questionnaire soliciting suggestions about the kind, degree of evolution and main concepts of Benchmarking adopted in the represented countries. A comparison between the guidelines adopted in The Netherlands and Scandinavia has recently challenged the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology, focused on the identification of possible improvement areas.

  11. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  12. Antiviral Activity and Resistance Analysis of NS3/4A Protease Inhibitor Grazoprevir and NS5A Inhibitor Elbasvir in Hepatitis C Virus GT4 Replicons.

    PubMed

    Asante-Appiah, Ernest; Curry, Stephanie; McMonagle, Patricia; Ingravallo, Paul; Chase, Robert; Nickle, David; Qiu, Ping; Howe, Anita; Lahser, Frederick C

    2017-07-01

    Although genotype 4 (GT4)-infected patients represent a minor overall percentage of the global hepatitis C virus (HCV)-infected population, the high prevalence of the genotype in specific geographic regions coupled with substantial sequence diversity makes it an important genotype to study for antiviral drug discovery and development. We evaluated two direct-acting antiviral agents, grazoprevir (an HCV NS3/4A protease inhibitor) and elbasvir (an HCV NS5A inhibitor), in GT4 replicons prior to clinical studies in this genotype. Following a bioinformatics analysis of available GT4 sequences, a set of replicons bearing representative GT4 clinical isolates was generated. For grazoprevir, the 50% effective concentration (EC50) against the replicon bearing the reference GT4a (ED43) NS3 protease and NS4A was 0.7 nM. The median EC50 for grazoprevir against chimeric replicons encoding NS3/4A sequences from GT4 clinical isolates was 0.2 nM (range, 0.11 to 0.33 nM; n = 5). The difficulty in establishing replicons bearing NS3/4A resistance-associated substitutions was substantially overcome with the identification of a G162R adaptive substitution in NS3. Single NS3 substitutions D168A/V identified from de novo resistance selection studies reduced grazoprevir antiviral activity by 137- and 47-fold, respectively, in the background of the G162R replicon. For elbasvir, the EC50 against the replicon bearing the reference full-length GT4a (ED43) NS5A gene was 0.0002 nM. The median EC50 for elbasvir against chimeric replicons bearing clinical isolates from GT4 was 0.0007 nM (range, 0.0002 to 34 nM; n = 14). De novo resistance selection studies in GT4 demonstrated a high propensity to suppress the emergence of amino acid substitutions that confer high-potency reductions to elbasvir. Phenotypic characterization of the NS5A amino acid substitutions identified (L30F, L30S, M31V, and Y93H) indicated that they conferred 15-, 4-, 2.5-, and 7.5-fold potency losses, respectively, to elbasvir

  13. EPA's Benchmark Dose Modeling Software

    EPA Science Inventory

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA’s human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  14. Crystal Structure of Botulinum Neurotoxin Type a in Complex With the Cell Surface Co-Receptor GT1b-Insight Into the Toxin-Neuron Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenmark, P.; Dupuy, J.; Inamura, A.

    2009-05-26

    Botulinum neurotoxins have a very high affinity and specificity for their target cells, requiring two different co-receptors located on the neuronal cell surface. Different toxin serotypes have different protein receptors; yet, most share a common ganglioside co-receptor, GT1b. We determined the crystal structure of the botulinum neurotoxin serotype A binding domain (residues 873-1297) alone and in complex with a GT1b analog at 1.7 Å and 1.6 Å, respectively. The ganglioside GT1b forms several key hydrogen bonds to conserved residues and binds in a shallow groove lined by Tryptophan 1266. GT1b binding does not induce any large structural changes in the toxin; therefore, it is unlikely that allosteric effects play a major role in the dual receptor recognition. Together with the previously published structures of botulinum neurotoxin serotype B in complex with its protein co-receptor, we can now generate a detailed model of botulinum neurotoxin's interaction with the neuronal cell surface. The two branches of the GT1b polysaccharide, together with the protein receptor site, impose strict geometric constraints on the mode of interaction with the membrane surface and strongly support a model where one end of the 100 Å long translocation domain helix bundle swings into contact with the membrane, initiating the membrane anchoring event.

  15. Crystal structure of botulinum neurotoxin type A in complex with the cell surface co-receptor GT1b-insight into the toxin-neuron interaction.

    PubMed

    Stenmark, Pål; Dupuy, Jérôme; Imamura, Akihiro; Kiso, Makoto; Stevens, Raymond C

    2008-08-15

    Botulinum neurotoxins have a very high affinity and specificity for their target cells, requiring two different co-receptors located on the neuronal cell surface. Different toxin serotypes have different protein receptors; yet, most share a common ganglioside co-receptor, GT1b. We determined the crystal structure of the botulinum neurotoxin serotype A binding domain (residues 873-1297) alone and in complex with a GT1b analog at 1.7 Å and 1.6 Å, respectively. The ganglioside GT1b forms several key hydrogen bonds to conserved residues and binds in a shallow groove lined by Tryptophan 1266. GT1b binding does not induce any large structural changes in the toxin; therefore, it is unlikely that allosteric effects play a major role in the dual receptor recognition. Together with the previously published structures of botulinum neurotoxin serotype B in complex with its protein co-receptor, we can now generate a detailed model of botulinum neurotoxin's interaction with the neuronal cell surface. The two branches of the GT1b polysaccharide, together with the protein receptor site, impose strict geometric constraints on the mode of interaction with the membrane surface and strongly support a model where one end of the 100 Å long translocation domain helix bundle swings into contact with the membrane, initiating the membrane anchoring event.

  16. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  17. Chloride conducting light activated channel GtACR2 can produce both cessation of firing and generation of action potentials in cortical neurons in response to light.

    PubMed

    Malyshev, A Y; Roshchin, M V; Smirnova, G R; Dolgikh, D A; Balaban, P M; Ostrovsky, M A

    2017-02-15

    Optogenetics is a powerful technique in neuroscience that has provided great success in studying brain functions during the last decade. Progress in optogenetics crucially depends on the development of new molecular tools. The light-activated cation-conducting channelrhodopsin-2 has been widely used for excitation of cells since the emergence of optogenetics. In 2015 a family of natural light-activated chloride channels, GtACRs, was identified, which appeared to be a very promising tool for use in optogenetics experiments as a cell silencer. Here we examined the properties of the GtACR2 channel expressed in rat layer 2/3 pyramidal neurons by means of in utero electroporation. We found that, despite strong inhibition, light stimulation of GtACR2-positive neurons can surprisingly lead to the generation of action potentials, presumably initiated in the axonal terminals. Thus, when using GtACR2 in optogenetics experiments, its ability to induce action potentials should be taken into account. Our results also open an interesting possibility of using GtACR2 both as a cell silencer and as a cell activator in the same experiment by varying the pattern of light stimulation. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. ICSBEP Benchmarks For Nuclear Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briggs, J. Blair

    2005-05-24

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) -- Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled ''International Handbook of Evaluated Criticality Safety Benchmark Experiments.'' The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  19. IAEA programs in empowering the nuclear medicine profession through online educational resources.

    PubMed

    Pascual, Thomas Nb; Dondi, Maurizio; Paez, Diana; Kashyap, Ravi; Nunez-Miller, Rodolfo

    2013-05-01

    The International Atomic Energy Agency's (IAEA) programme in human health aims to enhance the capabilities of Member States to address needs related to the prevention, diagnosis, and treatment of diseases through the application of nuclear techniques. It has the specific mission of fostering the application of nuclear medicine techniques as part of the clinical management of certain types of diseases. Attuned to the continuous evolution of this specialty, as well as to the advancement and diversity of methods for delivering capacity-building efforts in this digital age, the nuclear medicine section of the IAEA has enhanced its programme by incorporating online educational resources for nuclear medicine professionals into its repertoire of projects, furthering its commitment to addressing the needs of its Member States in the field of nuclear medicine. Online educational resources such as the Human Health Campus website, e-learning modules, and scheduled interactive webinars strengthen this commitment while making use of the advanced internet and communications technology that is progressively becoming available worldwide. The Human Health Campus (www.humanhealth.iaea.org) is the online educational resources initiative of the Division of Human Health of the IAEA, geared toward enhancing the professional knowledge of health professionals in radiation medicine (nuclear medicine and diagnostic imaging, radiation oncology, and medical radiation physics) and nutrition. E-learning modules provide an interactive learning environment for users while giving immediate feedback for each task accomplished. Webinars, unlike webcasts, offer the opportunity for enhanced interaction with learners, facilitated through slide shows where the presenter guides and engages the audience using video and live streaming. This paper explores the IAEA's available online

  20. Subunit profiling and functional characteristics of acetylcholine receptors in GT1-7 cells.

    PubMed

    Arai, Yuki; Ishii, Hirotaka; Kobayashi, Makito; Ozawa, Hitoshi

    2017-03-01

    GnRH neurons form a final common pathway for the central regulation of reproduction. Although the involvement of acetylcholine in GnRH secretion has been reported, the direct effects of acetylcholine and the expression profiles of acetylcholine receptors (AChRs) remain to be studied. Using immortalized GnRH neurons (GT1-7 cells), we analyzed the molecular expression and functionality of AChRs. Expression of the mRNAs was identified in the order α7 > β2 = β1 ≧ α4 ≧ α5 = β4 = δ > α3 for nicotinic acetylcholine receptor (nAChR) subunits and m4 > m2 for muscarinic acetylcholine receptor (mAChR) subtypes. Furthermore, this study revealed that α7 nAChRs contributed to Ca2+ influx and GnRH release and that m2 and m4 mAChRs inhibited forskolin-induced cAMP production and isobutylmethylxanthine-induced GnRH secretion. These findings demonstrate the molecular profiles of AChRs, which directly contribute to GnRH secretion in GT1-7 cells, and provide one possible regulatory action of acetylcholine in GnRH neurons.

  1. Professor Glyn O. Phillips' legacy within the IAEA programme on radiation and tissue banking.

    PubMed

    Morales Pedraza, Jorge

    2017-08-19

    Professor Phillips began his involvement in the implementation of this important IAEA programme, insisting that there were advantages to be gained by using the ionizing radiation technique to sterilize human and animal tissues, based on the IAEA experience gained in the sterilization of medical products. The outcome of the implementation of the IAEA programme on radiation and tissue banking demonstrated that Professor Phillips was right in his opinion.

  2. Benchmarking gate-based quantum computers

    NASA Astrophysics Data System (ADS)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
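
    The sensitivity of identity circuits to gate errors can be illustrated with a small NumPy simulation: a circuit built from gate pairs that should compose to the identity is given a small systematic over-rotation on every gate, and the survival probability of the initial state decays with circuit depth. This is an illustration of the principle only, not the circuits or error model used in the paper.

```python
# Identity-circuit benchmark sketch: repeated X.X pairs (ideally the identity)
# simulated with a small per-gate over-rotation; deeper circuits deviate more.
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def identity_benchmark(pairs, over_rotation=0.02):
    state = np.array([1.0, 0.0], dtype=complex)   # start in |0>
    for _ in range(pairs):
        gate = rx(np.pi + over_rotation)           # imperfect "X" gate
        state = gate @ (gate @ state)              # X.X should be the identity
    return abs(state[0]) ** 2                      # probability of still reading |0>

for depth in (1, 10, 50, 100):
    print(depth, "gate pairs -> P(|0>) =", round(identity_benchmark(depth), 4))
```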

  3. Reducing the risks from radon indoors: an IAEA perspective.

    PubMed

    Boal, T; Colgan, P A

    2014-07-01

    The IAEA has a mandate to develop, in collaboration with other relevant international organisations, 'standards of safety for protection of health and minimisation of danger to life and property', and to provide for the application of these standards. The most recent edition of the International Basic Safety Standards includes, for the first time, requirements to protect the public from exposure due to radon indoors. As a result, the IAEA has already developed guidance material in line with accepted best international practice and an international programme to assist its Member States in identifying and addressing high radon concentrations in buildings is being prepared. This paper overviews the current situation around the world and summarises the management approach advocated by the IAEA. A number of important scientific and policy issues are identified and discussed from the point-of-view of how they may impact on national action plans and strategies. Finally, the assistance and support available through the Agency is described. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Assessment of Alternative Funding Mechanisms for the IAEA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toomey, Christopher; Wyse, Evan T.; Kurzrok, Andrew J.

    While the International Atomic Energy Agency (IAEA) has enjoyed substantial success and prestige in the international community, there is growing concern that global demographic trends, advances in technology and the trend towards austerity in Member State budgets will stretch the Agency’s resources to a point where it may no longer be possible to execute its multifaceted mission in its entirety. As part of an ongoing effort by the Next Generation Safeguards Initiative to evaluate the IAEA’s long-term budgetary concerns, this paper proposes a series of alternative funding mechanisms that have the potential to sustain the IAEA in the long term, including endowment, charity, and fee-for-service funding models.

  5. Tumor Necrosis Factor (TNF) –308G>A, Nitric Oxide Synthase 3 (NOS3) +894G>T Polymorphisms and Migraine Risk: A Meta-Analysis

    PubMed Central

    Chen, Min; Tang, Wenjing; Hou, Lei; Liu, Ruozhuo; Dong, Zhao; Han, Xun; Zhang, Xiaofei; Wan, Dongjun; Yu, Shengyuan

    2015-01-01

    Background and Objective Conflicting data have been reported on the association between tumor necrosis factor (TNF) –308G>A and nitric oxide synthase 3 (NOS3) +894G>T polymorphisms and migraine. We performed a meta-analysis of case-control studies to evaluate whether the TNF –308G>A and NOS3 +894G>T polymorphisms confer genetic susceptibility to migraine. Method We performed an updated meta-analysis for TNF –308G>A and a meta-analysis for NOS3 +894G>T based on studies published up to July 2014. We calculated study specific odds ratios (OR) and 95% confidence intervals (95% CI) assuming allele contrast, dominant model, recessive model, and co-dominant model as pooled effect estimates. Results Eleven studies in 6682 migraineurs and 22591 controls for TNF –308G>A and six studies in 1055 migraineurs and 877 controls for NOS3 +894G>T were included in the analysis. Neither indicated overall associations between gene polymorphisms and migraine risk. Subgroup analyses suggested that the “A” allele of the TNF –308G>A variant increases the risk of migraine among non-Caucasians (dominant model: pooled OR = 1.82; 95% CI 1.15 – 2.87). The risk of migraine with aura (MA) was increased among both Caucasians and non-Caucasians. Subgroup analyses suggested that the “T” allele of the NOS3 +894G>T variant increases the risk of migraine among non-Caucasians (co-dominant model: pooled OR = 2.10; 95% CI 1.14 – 3.88). Conclusions Our findings appear to support the hypothesis that the TNF –308G>A polymorphism may act as a genetic susceptibility factor for migraine among non-Caucasians and that the NOS3 +894G>T polymorphism may modulate the risk of migraine among non-Caucasians. PMID:26098763
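
    As a reminder of the effect-size arithmetic behind the pooled estimates quoted above, the sketch below computes a single-study odds ratio with a Woolf (logit) 95% confidence interval from a 2x2 allele-count table; the counts are invented for illustration and do not come from any study in the meta-analysis.

```python
# Odds ratio with a 95% CI (Woolf's logit method) from a 2x2 allele-count table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = risk/other allele counts in cases; c, d = the same in controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts of the "A" allele vs the "G" allele in cases and controls.
print(odds_ratio_ci(a=60, b=140, c=40, d=160))
```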

  6. Comparison of GT3X accelerometer and YAMAX pedometer steps/day in a free-living sample of overweight and obese adults.

    PubMed

    Barriera, Tiago V; Tudor-Locke, Catrine; Champagne, Catherine M; Broyles, Stephanie T; Johnson, William D; Katzmarzyk, Peter T

    2013-02-01

    The purpose of this study was to compare steps/day detected by the YAMAX SW-200 pedometer versus the ActiGraph GT3X accelerometer in free-living adults. Daily YAMAX and GT3X steps were collected from a sample of 23 overweight and obese participants (78% female; age = 52.6 ± 8.4 yr; BMI = 31.0 ± 3.7 kg·m-2). Because a pedometer is more likely to be used in a community-based intervention program, it was used as the standard for comparison. Percent difference (PD) and absolute percent difference (APD) were calculated to examine between-instrument agreement. In addition, days were categorized based on PD: a) under-counting (< -10 PD), b) acceptable counting (-10 to 10 PD), and c) over-counting (> 10 PD). The YAMAX and GT3X detected 8,025 ± 3,967 and 7,131 ± 3,066 steps/day, respectively, and the outputs were highly correlated (r = .87). The average PD was -3.1% ± 30.7% and the average APD was 23.9% ± 19.4%. Relative to the YAMAX, 53% of the days detected by the GT3X were classified as under-counting, 25% as acceptable counting, and 23% as over-counting. Although the output of these two instruments is highly correlated, caution is advised when directly comparing their output or using it interchangeably.
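
    For readers who want to reproduce the agreement statistics, the sketch below computes per-day percent difference (PD) and absolute percent difference (APD) and applies the under/acceptable/over-counting cut-offs. The formula (device minus criterion, divided by criterion) is an assumption about how the authors defined PD, and the daily step counts are invented.

```python
# Per-day percent difference (PD) and absolute percent difference (APD) of the
# accelerometer relative to the pedometer criterion, with the PD-based classes.
def percent_difference(device_steps, criterion_steps):
    return 100.0 * (device_steps - criterion_steps) / criterion_steps

def classify(pd):
    if pd < -10:
        return "under-counting"
    if pd > 10:
        return "over-counting"
    return "acceptable"

days = [(7200, 8100), (9050, 8800), (4300, 6200)]   # (GT3X, YAMAX) steps per day
pds = [percent_difference(gt3x, yamax) for gt3x, yamax in days]
apds = [abs(pd) for pd in pds]
for pd in pds:
    print(round(pd, 1), classify(pd))
print("mean PD:", round(sum(pds) / len(pds), 1), "mean APD:", round(sum(apds) / len(apds), 1))
```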

  7. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    Runs of standard benchmarks or benchmark suites alone make it impossible to characterize a machine or to predict the run time of other benchmarks that have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
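
    A schematic Python illustration of the prediction step: a machine characterization (time per source-language operation) is combined with a program characterization (dynamic operation counts) to estimate run time. The operation names and numbers are hypothetical placeholders, not the parameter set used in the paper.

```python
# Run-time prediction as a weighted sum of operation counts and per-operation times.
machine_ns_per_op = {          # machine analyzer output: nanoseconds per operation
    "fp_add": 6.0, "fp_mul": 8.0, "mem_load": 12.0, "branch": 4.0,
}
program_op_counts = {          # program analyzer output: dynamic operation counts
    "fp_add": 4.0e8, "fp_mul": 3.5e8, "mem_load": 9.0e8, "branch": 1.2e8,
}

predicted_s = sum(machine_ns_per_op[op] * program_op_counts[op]
                  for op in program_op_counts) / 1e9
print(f"predicted run time: {predicted_s:.2f} s")
```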

  8. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  9. Coordinated Research Projects of the IAEA Atomic and Molecular Data Unit

    NASA Astrophysics Data System (ADS)

    Braams, B. J.; Chung, H.-K.

    2011-05-01

    The IAEA Atomic and Molecular Data Unit is dedicated to the provision of databases for atomic, molecular and plasma-material interaction (AM/PMI) data that are relevant for nuclear fusion research. IAEA Coordinated Research Projects (CRPs) are the principal mechanism by which the Unit encourages data evaluation and the production of new data. Ongoing and planned CRPs on AM/PMI data are briefly described here.

  10. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  11. Developing integrated benchmarks for DOE performance measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome data in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrently with, the development of hardware and software. A key to the success of this systems approach is the rigorous development and demonstration of performance benchmark equivalents for users of such data before system hardware and software commitments are institutionalized.

  12. Benchmark for Strategic Performance Improvement.

    ERIC Educational Resources Information Center

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  13. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  14. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  15. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  16. Benchmarks--Standards Comparisons. Math Competencies: EFF Benchmarks Comparison [and] Reading Competencies: EFF Benchmarks Comparison [and] Writing Competencies: EFF Benchmarks Comparison.

    ERIC Educational Resources Information Center

    Kent State Univ., OH. Ohio Literacy Resource Center.

    This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…

  17. GT3X+ accelerometer, Yamax pedometer and SC-StepMX pedometer step count accuracy in community-dwelling older adults.

    PubMed

    Webber, Sandra C; Magill, Sheila M; Schafer, Jenessa L; Wilson, Kaylie C S

    2014-07-01

    The purpose was to compare step count accuracy of an accelerometer (ActiGraph GT3X+), a mechanical pedometer (Yamax SW200), and a piezoelectric pedometer (SC-StepMX). Older adults (n = 13 with walking aids, n = 22 without; M = 81.5 years old, SD = 5.0) walked 100 m wearing the devices. Device-detected steps were compared with manually counted steps. We found no significant differences among monitors for those who walked without aids (p = .063). However, individuals who used walking aids exhibited slower gait speeds (M = 0.83 m/s, SD = 0.2) than non-walking aid users (M = 1.21 m/s, SD = 0.2, p < .001), and for them the SC-StepMX demonstrated a significantly lower percentage of error (Mdn = 1.0, interquartile range [IQR] = 0.5-2.0) than the other devices (Yamax SW200, Mdn = 68.9, IQR = 35.9-89.3; left GT3X+, Mdn = 52.0, IQR = 37.1-58.9; right GT3X+, Mdn = 51.0, IQR = 32.3-66.5; p < .05). These results support using a piezoelectric pedometer for measuring steps in older adults who use walking aids and who walk slowly.

  18. TPH2 -703G/T SNP may have important effect on susceptibility to suicidal behavior in major depression.

    PubMed

    Yoon, Ho-Kyoung; Kim, Yong-Ku

    2009-04-30

    Serotonergic system-related genes can be good candidate genes for both major depressive disorder (MDD) and suicidal behavior. In this study, we aimed to investigate the association of serotonin 2A receptor gene -1438A/G SNP (HTR2A -1438A/G), tryptophan hydroxylase 2 gene -703G/T SNP (TPH2 -703G/T) and serotonin 1A receptor C-1019G (HTR1A C-1019G) with suicidal behavior. One hundred and eighty one suicidal depressed patients and 143 non-suicidal depressed patients who met DSM-IV criteria for major depressive disorder were recruited from patients who were admitted to Korea University Ansan Hospital. One hundred seventy six normal controls were healthy volunteers who were recruited by local advertisement. Patients and normal controls were genotyped for HTR2A -1438A/G, TPH2 -703G/T and 5-HT1A C-1019G. The suicidal depressed patients were evaluated by the lethality of individual suicide attempts using Weisman and Worden's risk-rescue rating (RRR) and the Lethality Suicide Attempt Rating Scale-updated (LSARS-II). In order to assess the severity of depressive symptoms of patients, Hamilton's Depression Rating Scale (HDRS) was administered. Genotype and allele frequencies were compared between groups by chi(2) statistics. Association of genotype of the candidate genes with the lethality of suicidal behavior was examined with ANOVA by comparing the mean scores of LSARS and RRR according to the genotype. There were statistically significant differences in the genotype distributions and allele frequencies of TPH2 -703G/T between the suicidal depressive group and the normal control group. The homozygous allele G (G/G genotype) frequency was significantly higher in suicidal depressed patients than in controls. However, no differences in either genotype distribution or in allele frequencies of HTR2A -1438A/G and HTR1A C-1019G were observed between the suicidal depressed patients, the non-suicidal depressed patients, and the normal controls. There were no differences in the
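
    For readers unfamiliar with the genotype-frequency comparison mentioned above, the following hedged sketch runs a chi-square test on a genotype-by-group contingency table; the counts are invented and are not the study's data:

    from scipy.stats import chi2_contingency

    # rows: suicidal depressed patients vs. normal controls
    # columns: hypothetical G/G, G/T, T/T genotype counts
    table = [
        [70, 85, 26],
        [45, 90, 41],
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")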

  19. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
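
    A hedged sketch of the "metrics via SPARQL over RDF annotations" idea: system and gold-standard mutation mentions are stored as triples, and a true-positive count is computed with a query. The ex: vocabulary below is invented for illustration and is not the project's actual OWL ontology.

    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/mutation#")
    g = Graph()

    def add_mention(graph, doc, mutation, source, idx):
        node = URIRef(f"http://example.org/ann/{source}/{idx}")
        graph.add((node, EX.document, Literal(doc)))
        graph.add((node, EX.mutation, Literal(mutation)))
        graph.add((node, EX.source, Literal(source)))

    gold = [("d1", "p.V600E"), ("d1", "c.35G>T"), ("d2", "p.R273H")]
    system = [("d1", "p.V600E"), ("d2", "p.R273H"), ("d2", "p.G12D")]
    for i, (doc, mut) in enumerate(gold):
        add_mention(g, doc, mut, "gold", i)
    for i, (doc, mut) in enumerate(system):
        add_mention(g, doc, mut, "system", i)

    tp_query = """
        PREFIX ex: <http://example.org/mutation#>
        SELECT (COUNT(*) AS ?tp) WHERE {
            ?s ex:source "system" ; ex:document ?d ; ex:mutation ?m .
            ?g ex:source "gold"   ; ex:document ?d ; ex:mutation ?m .
        }
    """
    tp = int(next(iter(g.query(tp_query)))[0])
    print("precision =", tp / len(system))   # here 2/3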

  20. Benchmarking infrastructure for mutation text mining.

    PubMed

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  1. Benchmarking: A Process for Improvement.

    ERIC Educational Resources Information Center

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  2. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.

  3. Hospital benchmarking: are U.S. eye hospitals ready?

    PubMed

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  4. Benchmark Factors in Student Retention.

    ERIC Educational Resources Information Center

    Waggener, Anna T.; Smith, Constance K.

    The first purpose of this study was to identify significant factors affecting the first benchmark in retaining students in college--the decision to enroll in the first fall semester after orientation. The second purpose was to examine enrollment decisions at the second benchmark--the decision to re-enroll in the second fall semester after freshman…

  5. SP2Bench: A SPARQL Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  6. IAEA activities related to radiation biology and health effects of radiation.

    PubMed

    Wondergem, Jan; Rosenblatt, Eduardo

    2012-03-01

    The IAEA is involved in capacity building with regard to the radiobiological sciences in its member states through its technical cooperation programme. Research projects/programmes are normally carried out within the framework of coordinated research projects (CRPs). Under this programme, two CRPs have been approved which are relevant to nuclear/radiation accidents: (1) stem cell therapeutics to modify radiation-induced damage to normal tissue, and (2) strengthening biological dosimetry in IAEA member states.

  7. Benchmark problems and solutions

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1995-01-01

    The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.

  8. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  9. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  10. Comparison of Yamax pedometer and GT3X accelerometer steps in a free-living sample

    USDA-ARS?s Scientific Manuscript database

    Our objective was to compare steps detected by the Yamax pedometer (PEDO) versus the GT3X accelerometer (ACCEL) in free-living adults. Daily PEDO and ACCEL steps were collected from a sample of 23 overweight and obese participants (18 females; mean +/- sd: age = 52.6 +/- 8.4 yr.; body mass index = 3...

  11. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category; or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or -evaluation and benchmarking using a patient registry. There was a large degree of variability:(1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or if quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed including a follow up to check whether the benchmark study has led to improvements.

  12. A benchmark for subduction zone modeling

    NASA Astrophysics Data System (ADS)

    van Keken, P.; King, S.; Peacock, S.

    2003-04-01

    Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence it is essential for the subduction zone community to be able to evaluate the ability and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.

  13. Analogous Gamow-Teller and M1 Transitions in Tz = ±½ Mirror Nuclei and in Tz = ±1, 0 Triplet Nuclei relevant to Low-energy Super GT state

    NASA Astrophysics Data System (ADS)

    Fujita, Yoshitaka; Fujita, Hirohiko; Tanumura, Yusuke

    2018-05-01

    Nuclei have spin- and isospin-degrees of freedom. Therefore, Gamow-Teller (GT) transitions caused by the στ operator (spin-isospin operator) are unique tools for the studies of nuclear structure as well as nuclear interactions. They can be studied in β decays as well as charge-exchange (CE) reactions. Similarly, M1 γ decays are mainly caused by the στ operator. Combined studies of these transitions caused by Weak, Strong, and Electro-Magnetic interactions provide us with a deeper understanding of nuclear spin-isospin-type transitions. We first compare the strengths of analogous GT and M1 transitions in the A = 27, Tz = ±½ mirror nuclei 27Al and 27Si. The comparison is extended to the Tz = ±1, 0 nuclei. The strength of the GT transition from the ground state (g.s.) of 42Ca to the 0.611 MeV first Jπ = 1+ state in 42Sc is compared with that of the analogous M1 transition from the 0.611 MeV state to the T = 1, 0+ g.s. (isobaric analog state: IAS) in 42Sc. The 0.611 MeV state has the property of a Low-energy Super GT (LeSGT) state, because it carries the main part of the GT strength of all available transitions from the g.s. of 42Ca (and 42Ti) to the Jπ = 1+ GT states in 42Sc.

  14. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  15. Benchmarking: your performance measurement and improvement tool.

    PubMed

    Senn, G F

    2000-01-01

    Many respected professional healthcare organizations and societies today are seeking to establish data-driven performance measurement strategies such as benchmarking. Clinicians are, however, resistant to "benchmarking" that is based on financial data alone, concerned that it may be adverse to the patients' best interests. Benchmarking of clinical procedures that uses physician's codes such as Current Procedural Terminology (CPTs) has greater credibility with practitioners. Better Performers, organizations that can perform procedures successfully at lower cost and in less time, become the "benchmark" against which other organizations can measure themselves. The Better Performers' strategies can be adopted by other facilities to save time or money while maintaining quality patient care.

  16. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  17. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  18. The IAEA neutron coincidence counting (INCC) and the DEMING least-squares fitting programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krick, M.S.; Harker, W.C.; Rinard, P.M.

    1998-12-01

    Two computer programs are described: (1) the INCC (IAEA or International Neutron Coincidence Counting) program and (2) the DEMING curve-fitting program. The INCC program is an IAEA version of the Los Alamos NCC (Neutron Coincidence Counting) code. The DEMING program is an upgrade of earlier Windows® and DOS codes with the same name. The versions described are INCC 3.00 and DEMING 1.11. The INCC and DEMING codes provide inspectors with the software support needed to perform calibration and verification measurements with all of the neutron coincidence counting systems used in IAEA inspections for the nondestructive assay of plutonium and uranium.

  19. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  20. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  1. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  2. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...

  3. Endometrial cancer and somatic G>T KRAS transversion in patients with constitutional MUTYH biallelic mutations.

    PubMed

    Tricarico, Rossella; Bet, Paola; Ciambotti, Benedetta; Di Gregorio, Carmela; Gatteschi, Beatrice; Gismondi, Viviana; Toschi, Benedetta; Tonelli, Francesco; Varesco, Liliana; Genuardi, Maurizio

    2009-02-18

    MUTYH-associated polyposis (MAP) is an autosomal recessive condition predisposing to colorectal cancer, caused by constitutional biallelic mutations in the base excision repair (BER) gene MUTYH. Colorectal tumours from MAP patients display an excess of somatic G>T mutations in the APC and KRAS genes due to defective BER function. To date, few extracolonic manifestations have been observed in MAP patients, and the clinical spectrum of this condition is not yet fully established. Recently, one patient with a diagnosis of endometrial cancer and biallelic MUTYH mutations has been described. We here report on two additional unrelated MAP patients with biallelic MUTYH germline mutations who developed endometrioid endometrial carcinoma. The endometrial tumours were evaluated for PTEN, PIK3CA, KRAS, BRAF and CTNNB1 mutations. A G>T transversion at codon 12 of the KRAS gene was observed in one tumour. A single 1bp frameshift deletion of PTEN was observed in the same sample. Overall, these findings suggest that endometrial carcinoma is a phenotypic manifestation of MAP and that inefficient repair of oxidative damage can be involved in its pathogenesis.

  4. Training of interventional cardiologists in radiation protection--the IAEA's initiatives.

    PubMed

    Rehani, Madan M

    2007-01-08

    The International Atomic Energy Agency (IAEA) has initiated a major international initiative to train interventional cardiologists in radiation protection as a part of its International Action Plan on the radiological protection of patients. A simple programme of two days' training has been developed, covering possible and observed radiation effects among patients and staff, international standards, dose management techniques, examples of good and bad practice and examples indicating prevention of possible injuries as a result of good practice of radiation protection. The training material is freely available on CD from the IAEA. The IAEA has conducted two events in 2004 and 2005 and a number of events are planned for 2006. The survey conducted among the cardiologists participating in these programmes indicates that over 80% of them were attending such a structured programme on radiation protection for the first time. As the magnitude of X-ray usage in cardiology grows to match that in interventional radiology, the standards of training on radiation effects, radiation physics and radiation protection in interventional cardiology should also match those in interventional radiology.

  5. HTR2A A-1438G/T102C polymorphisms predict negative symptoms performance upon aripiprazole treatment in schizophrenic patients.

    PubMed

    Chen, Shih-Fen; Shen, Yu-Chih; Chen, Chia-Hsiang

    2009-08-01

    Aripiprazole acts as a partial agonist at dopamine D2 and D3 and serotonin 1A receptors and as an antagonist at serotonin 2A receptors (HTR2A). Since aripiprazole acts as an antagonist at HTR2A, genetic variants of HTR2A may be important in explaining variability in response to aripiprazole. This study investigated whether the efficacy of aripiprazole can be predicted by functional HTR2A A-1438G/T102C polymorphisms (rs6311/rs6313) as modified by clinical factors in Han Chinese hospitalized patients with acutely exacerbated schizophrenia. After hospitalization, the patients (n = 128) were given a 4-week course of aripiprazole. Patients were genotyped for HTR2A A-1438G/T102C polymorphisms via the restriction fragment length polymorphism method. Clinical factors such as gender, age, duration of illness, education level, diagnostic subtype, and medication dosage were noted as well. The researchers measured psychopathology biweekly, using the Positive and Negative Syndrome Scale (PANSS). A mixed model regression approach (SAS Proc MIXED) was used to analyze the effects of genetic and clinical factors on PANSS performance after aripiprazole treatment. We found that the GG/CC genotype group of HTR2A A-1438G/T102C polymorphisms predicts poor aripiprazole response specifically for negative symptoms. In addition, the clinical factors, including dosage of aripiprazole, age, duration of illness, and diagnostic subtype, were found to influence PANSS performance after aripiprazole treatment. The data suggest HTR2A A-1438G/T102C polymorphisms may predict negative symptoms performance upon aripiprazole treatment in schizophrenic patients as modified by clinical factors.

  6. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  7. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  8. Benchmarking forensic mental health organizations.

    PubMed

    Coombs, Tim; Taylor, Monica; Pirkis, Jane

    2011-04-01

    This paper describes the forensic mental health forums that were conducted as part of the National Mental Health Benchmarking Project (NMHBP). These forums encouraged participating organizations to compare their performance on a range of key performance indicators (KPIs) with that of their peers. Four forensic mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against previously agreed KPIs. They also undertook three special projects which explored some of the factors that might explain inter-organizational variation in performance. The inter-organizational range for many of the indicators was substantial. Observing this led participants to conduct the special projects to explore three factors which might help explain the variability - seclusion practices, delivery of community mental health services, and provision of court liaison services. The process of conducting the special projects gave participants insights into the practices and structures employed by their counterparts, and provided them with some important lessons for quality improvement. The forensic mental health benchmarking forums have demonstrated that benchmarking is feasible and likely to be useful in improving service performance and quality.

  9. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  10. A Methodology for Benchmarking Relational Database Machines,

    DTIC Science & Technology

    1984-01-01

    user benchmarks is to compare the multiple users to the best-case performance. The data for each query classification coll and the performance...called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure...formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey

  11. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  12. Expression of feeding-related peptide receptors mRNA in GT1-7 cell line and roles of leptin and orexins in control of GnRH secretion.

    PubMed

    Yang, Ying; Zhou, Li-bin; Liu, Shang-quan; Tang, Jing-feng; Li, Feng-yin; Li, Rong-ying; Song, Huai-dong; Chen, Ming-dao

    2005-08-01

    To investigate the expression of feeding-related peptide receptors mRNA in GT1-7 cell line and roles of leptin and orexins in the control of GnRH secretion. Receptors of bombesin3, cholecystokinin (CCK)-A, CCK-B, glucagon-like peptide (GLP)1, melanin-concentrating hormone (MCH)1, orexin1, orexin2, neuromedin-B, neuropeptide Y (NPY)1 and NPY5, neurotensin (NT)1, NT2, NT3, and leptin receptor long form mRNA in GT1-7 cells were detected by reverse transcriptase-polymerase chain reaction. GT1-7 cells were treated with leptin, orexin A and orexin B at a cohort of concentrations for different lengths of time, and GnRH in medium was determined by radioimmunoassay (RIA). Receptors of bombesin 3, CCK-B, GLP1, MCH1, orexin1, neuromedin-B, NPY1, NPY5, NT1, NT3, and leptin receptor long form mRNA were expressed in GT1-7 cells, of which, receptors of GLP1, neuromedin-B, NPY1, and NT3 were highly expressed. No amplified fragments of orexin2, NT2, and CCK-A receptor cDNA were generated with GT1-7 RNA, indicating that the GT1-7 cells did not express their mRNA. Leptin induced a significant stimulation of GnRH release, the results being most significant at 0.1 nmol/L for 15 min. In contrast to other studies in hypothalamic explants, neither orexin A nor orexin B affected basal GnRH secretion over a wide range of concentrations ranging from 1 nmol/L to 500 nmol/L at 15, 30, and 60 min. Feeding and reproductive function are closely linked. Many orexigenic and anorexigenic signals may control feeding behavior as well as alter GnRH secretion through their receptors on GnRH neurons.

  13. Protein Models Docking Benchmark 2

    PubMed Central

    Anishchenko, Ivan; Kundrotas, Petras J.; Tuzikov, Alexander V.; Vakser, Ilya A.

    2015-01-01

    Structural characterization of protein-protein interactions is essential for our ability to understand life processes. However, only a fraction of known proteins have experimentally determined structures. Such structures provide templates for modeling of a large part of the proteome, where individual proteins can be docked by template-free or template-based techniques. Still, the sensitivity of the docking methods to the inherent inaccuracies of protein models, as opposed to the experimentally determined high-resolution structures, remains largely untested, primarily due to the absence of appropriate benchmark set(s). Structures in such a set should have pre-defined inaccuracy levels and, at the same time, resemble actual protein models in terms of structural motifs/packing. The set should also be large enough to ensure statistical reliability of the benchmarking results. We present a major update of the previously developed benchmark set of protein models. For each interactor, six models were generated with the model-to-native Cα RMSD in the 1 to 6 Å range. The models in the set were generated by a new approach, which corresponds to the actual modeling of new protein structures in the “real case scenario,” as opposed to the previous set, where a significant number of structures were model-like only. In addition, the larger number of complexes (165 vs. 63 in the previous set) increases the statistical reliability of the benchmarking. We estimated the highest accuracy of the predicted complexes (according to CAPRI criteria), which can be attained using the benchmark structures. The set is available at http://dockground.bioinformatics.ku.edu. PMID:25712716
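
    The model-to-native accuracy measure referenced above can be illustrated with a small sketch computing Calpha RMSD on matched, already-superposed coordinate arrays (real use would first superpose the structures, e.g. with the Kabsch algorithm); the coordinates are invented:

    import numpy as np

    def ca_rmsd(model_xyz, native_xyz):
        """RMSD over matched Calpha coordinates of shape (N, 3), assumed superposed."""
        model = np.asarray(model_xyz, dtype=float)
        native = np.asarray(native_xyz, dtype=float)
        return float(np.sqrt(np.mean(np.sum((model - native) ** 2, axis=1))))

    native = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
    model = native + np.array([[0.5, 0.2, 0.0], [-0.3, 0.4, 0.1], [0.2, -0.2, 0.3]])
    print(f"Calpha RMSD = {ca_rmsd(model, native):.2f} A")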

  14. [Do you mean benchmarking?].

    PubMed

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities against quality standards. The proposed methodology is illustrated by benchmarking business cases carried out in healthcare facilities on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-scorecard figures and mappings, so that comparison between different anesthesia and intensive care services willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate when detailed tariffs of activities are available.

  15. [In vitro comparison of root canal preparation with step-back technique and GT rotary file--a nickel-titanium engine driven rotary instrument system].

    PubMed

    Krajczár, Károly; Tóth, Vilmos; Nyárády, Zoltán; Szabó, Gyula

    2005-06-01

    The aim of the authors' study was to compare the remaining root canal wall thickness and the preparation time of root canals prepared either with the step-back technique or with the GT Rotary File, an engine-driven nickel-titanium rotary instrument system. Twenty extracted molars were decoronated. Teeth were divided into two groups. In Group 1, root canals were prepared with the step-back technique. In Group 2, the GT Rotary File System was utilized. Preoperative vestibulo-oral X-ray pictures were taken of all teeth with a radiovisiograph (RVG). The final preparations at the mesiobuccal (MB) canals were performed with size #30 instruments and at the palatinal/distal canals with size #40 instruments. Postoperative RVG pictures were taken reproducing the preoperative positioning. The working time was measured in seconds during each preparation. The authors also assessed the remaining root canal wall thickness at 3, 6 and 9 mm from the radiological apex, comparing the width of the canal walls on the vestibulo-oral projections of pre- and postoperative RVG pictures both mesially and buccally. The ratios of residual to preoperative root canal wall thickness were calculated and compared. The largest difference was found at the MB canals at the coronal and middle third levels of the root, measured on the distal canal wall. The ratios of remaining dentin wall thickness at the coronal and middle levels were 0.605 and 0.754 for step-back preparation, and 0.824 and 0.895 for GT files, respectively. The preparation time needed for the GT Rotary File System was altogether 68.7% (MB) and 52.5% (D/P canals) of the corresponding step-back preparation times. Compared with the standard step-back method, the use of the GT Rotary File shortened preparation time and avoided excessive damage to the coronal part of the root canal.
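
    The two quantities compared in the study reduce to simple ratios; a tiny sketch with illustrative numbers (not the study's measurements) follows:

    def wall_thickness_ratio(postop_mm, preop_mm):
        # residual / preoperative canal wall thickness
        return postop_mm / preop_mm

    def relative_preparation_time(gt_seconds, stepback_seconds):
        # GT preparation time as a percentage of the step-back time
        return 100.0 * gt_seconds / stepback_seconds

    print(wall_thickness_ratio(0.9, 1.1))
    print(relative_preparation_time(165, 240))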

  16. Benchmarking, Total Quality Management, and Libraries.

    ERIC Educational Resources Information Center

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  17. Radiation Detection Computational Benchmark Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to

  18. Association of heme oxygenase-1 GT-repeat polymorphism with blood pressure phenotypes and its relevance to future cardiovascular mortality risk: an observation based on arsenic-exposed individuals.

    PubMed

    Wu, Meei-Maan; Chiou, Hung-Yi; Chen, Chi-Ling; Hsu, Ling-I; Lien, Li-Ming; Wang, Chih-Hao; Hsieh, Yi-Chen; Wang, Yuan-Hung; Hsueh, Yu-Mei; Lee, Te-Chang; Cheng, Wen-Fang; Chen, Chien-Jen

    2011-12-01

    Heme oxygenase (HO)-1 is up-regulated as a cellular defense responding to stressful stimuli in experimental studies. A GT-repeat length polymorphism in the HO-1 gene promoter was inversely correlated to HO-1 induction. Here, we reported the association of GT-repeat polymorphism with blood pressure (BP) phenotypes, and their interaction on cardiovascular (CV) mortality risk in arsenic-exposed cohorts. Associations of GT-repeat polymorphism with BP phenotypes were investigated at baseline in a cross-sectional design. Effect of GT-repeat polymorphism on CV mortality was investigated in a longitudinal design stratified by hypertension. GT-repeat variants were grouped by S (<27 repeats) or L (≥ 27 repeats) alleles. Multivariate analyses were used to estimate the effect size after accounting for CV covariates. Totally, 894 participants were recruited and analyzed. At baseline, carriers with HO-1 S alleles had lower diastolic BP (L/S genotypes, P = 0.014) and a lower possibility of being hypertensive (L/S genotypes, P = 0.048). After follow-up, HO-1 S allele was significantly associated with a reduced CV risk in hypertensive participants [relative mortality ratio (RMR) 0.27 (CI 0.11, 0.69), P = 0.007] but not in normotensive. Hypertensive participants without carrying the S allele had a 5.23-fold increased risk [RMR 5.23 (CI 1.99, 13.69), P = 0.0008] of CV mortality compared with normotensive carrying the S alleles. HO-1 short GT-repeat polymorphism may play a protective role in BP regulation and CV mortality risk in hypertensive individuals against environmental stressors. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
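
    A hedged sketch of the allele-grouping rule stated above (alleles with fewer than 27 GT repeats classed as S, otherwise L, and a genotype given as the pair of classes); the repeat counts are invented:

    def allele_class(gt_repeats, cutoff=27):
        return "S" if gt_repeats < cutoff else "L"

    def genotype(repeats_allele1, repeats_allele2):
        return "/".join(sorted(allele_class(r) for r in (repeats_allele1, repeats_allele2)))

    for pair in [(23, 30), (25, 26), (29, 32)]:
        print(pair, "->", genotype(*pair))   # e.g. (23, 30) -> L/S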

  19. Lower crustal section of the Oman Ophiolite drilled in Hole GT1A, ICDP Oman Drilling Project

    NASA Astrophysics Data System (ADS)

    Umino, S.; Kelemen, P. B.; Matter, J. M.; Coggon, J. A.; Takazawa, E.; Michibayashi, K.; Teagle, D. A. H.

    2017-12-01

    Hole GT1A (22° 53.535'N, 58° 30.904'E) was drilled by the Oman Drilling Project (OmDP) into GT1A of the Samail ophiolite, Oman. OmDP is an international collaboration supported by the International Continental Scientific Drilling Program, the Deep Carbon Observatory, NSF, IODP, JAMSTEC, and the European, Japanese, German and Swiss Science Foundations, with in-kind support in Oman from the Ministry of Regional Municipalities and Water Resources, Public Authority of Mining, Sultan Qaboos University, and the German University of Technology. Hole GT1A was diamond cored from 22 Jan to 08 Feb 2017 to a total depth of 403.05 m. The outer surfaces of the cores were imaged and described on site before being curated, boxed and shipped to the IODP drill ship Chikyu, where they underwent comprehensive visual and instrumental analysis. Hole GT1A drilled the lower crustal section in the southern Oman Ophiolite and recovered 401.52 m of total cores (99.6% recovery). The main lithology is dominated by olivine gabbro (65.9%), followed in abundance by olivine-bearing gabbro (21.5%) and olivine melagabbro (3.9%). Minor rock types are orthopyroxene-bearing olivine gabbro (2.4%), oxide-bearing olivine gabbro (1.5%), gabbro (1.1%), anorthositic gabbro (1%), troctolitic gabbro (0.8%), orthopyroxene-bearing gabbro (0.5%), gabbronorite (0.3%), and dunite (0.3%). These rocks are divided into Lithologic Units I to VII at 26.62 m, 88.16 m, 104.72 m, 154.04 m, 215.22 m, and 306.94 m Chikyu Curated Depth, in descending order. Units I and II consist of medium-grained olivine gabbro, with lower olivine abundance in Unit II. Unit III consists of medium-grained olivine melagabbros, marked by an increase in olivine. Unit IV consists of relatively homogeneous medium-grained olivine gabbros with granular textures. Unit V is identified by the appearance of fine-grained gabbros, but its major rock types are medium-grained olivine gabbros. Unit VI is medium-grained olivine gabbro, marked by the appearance of orthopyroxene. Unit VII

  20. Benchmarking Helps Measure Union Programs, Operations.

    ERIC Educational Resources Information Center

    Mann, Jerry

    2001-01-01

    Explores three examples of benchmarking by college student unions. Focuses on how a union can collect information from other unions for use as benchmarking standards for the purposes of selling a concept or justifying program increases, or for comparing a union's financial performance to other unions. (EV)

  1. The U.S./IAEA Workshop on Software Sustainability for Safeguards Instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pepper S. E.; .; Worrall, L.

    2014-08-08

    The U.S. National Nuclear Security Administration’s Next Generation Safeguards Initiative, the U.S. Department of State, and the International Atomic Energy Agency (IAEA) organized a workshop on the subject of “Software Sustainability for Safeguards Instrumentation.” The workshop was held at the Vienna International Centre in Vienna, Austria, May 6-8, 2014. The workshop participants included software and hardware experts from national laboratories, industry, government, and IAEA member states who were specially selected by the workshop organizers based on their experience with software that is developed for the control and operation of safeguards instrumentation. The workshop included presentations to orient the participants to the IAEA Department of Safeguards software activities related to instrumentation data collection and processing, and case studies that were designed to inspire discussion of software development, use, maintenance, and upgrades in breakout sessions and to result in recommendations for effective software practices and management. This report summarizes the results of the workshop.

  2. Benchmark Study of Global Clean Energy Manufacturing

    Science.gov Websites

    This first-of-its-kind benchmark study examined four clean energy technologies: wind turbine components…

  3. Benchmarking: contexts and details matter.

    PubMed

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see the related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  4. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  5. MoMaS reactive transport benchmark using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September 2009; it is not taken from a real chemical system, but it provides realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results of the easy benchmark test case, which includes mixing of aqueous components and surface complexation. Surface complexation consists of monodentate and bidentate reactions, which introduces difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume. The selectivity coefficient becomes porosity dependent for bidentate reactions in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address the issue, and unit conversions were made properly to suit PFLOTRAN.
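
    A hedged, purely algebraic illustration of why a bulk-volume-referenced selectivity coefficient is porosity dependent for bidentate reactions: if surface-species concentrations are rescaled from a bulk reference volume to pore water by dividing by porosity, a mass-action constant with n surface sites in the reaction picks up a factor of porosity**(n - 1), so monodentate (n = 1) coefficients are unchanged while bidentate (n = 2) coefficients scale with porosity. This is a sketch of the unit-conversion issue, not PFLOTRAN's internal convention.

    def rescaled_selectivity(k_bulk, porosity, n_sites):
        """Convert a selectivity coefficient from a bulk-volume to a pore-water basis."""
        return k_bulk * porosity ** (n_sites - 1)

    for phi in (0.1, 0.3, 0.5):
        print(phi,
              rescaled_selectivity(1.0e3, phi, n_sites=1),   # monodentate: unchanged
              rescaled_selectivity(1.0e3, phi, n_sites=2))   # bidentate: scales with porosity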

  6. Contribution to fusion research from IAEA coordinated research projects and joint experiments

    NASA Astrophysics Data System (ADS)

    Gryaznevich, M.; Van Oost, G.; Stöckel, J.; Kamendje, R.; Kuteev, B. N.; Melnikov, A.; Popov, T.; Svoboda, V.; The IAEA CRP Teams

    2015-10-01

    The paper presents objectives and activities of IAEA Coordinated Research Projects ‘Conceptual development of steady-state compact fusion neutron sources’ and ‘Utilisation of a network of small magnetic confinement fusion devices for mainstream fusion research’. The background and main projects of the CRP on FNS are described in detail, as this is a new activity at IAEA. Recent activities of the second CRP, which continues activities of previous CRPs, are overviewed.

  7. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
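
    A hedged modern analogue of the two measurements described above, written with mpi4py rather than for an actual hypercube: a compute-only timing and a ping-pong timing of message transmission as a function of message size (run with something like `mpiexec -n 2 python pingpong.py`; the script name and sizes are illustrative):

    import time
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    def pingpong(nbytes, iters=100):
        """Average one-way message time for a buffer of nbytes between ranks 0 and 1."""
        buf = np.zeros(nbytes, dtype=np.uint8)
        comm.Barrier()
        start = time.perf_counter()
        for _ in range(iters):
            if rank == 0:
                comm.Send(buf, dest=1); comm.Recv(buf, source=1)
            elif rank == 1:
                comm.Recv(buf, source=0); comm.Send(buf, dest=0)
        return (time.perf_counter() - start) / (2 * iters)

    if rank == 0:
        t0 = time.perf_counter()
        _ = sum(i * i for i in range(100_000))          # stand-in compute-only kernel
        print("compute-only loop:", time.perf_counter() - t0, "s")
    for size in (1, 1_024, 1_048_576):
        t = pingpong(size)
        if rank == 0:
            print(f"{size} bytes: {t * 1e6:.1f} us one-way")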

  8. Analysis of historical delta values for IAEA/LANL NDA training courses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William; Santi, Peter; Swinhoe, Martyn

    2009-01-01

    The Los Alamos National Laboratory (LANL) supports the International Atomic Energy Agency (IAEA) by providing training for IAEA inspectors in neutron and gamma-ray Nondestructive Assay (NDA) of nuclear material. Since 1980, all new IAEA inspectors attend this two week course at LANL gaining hands-on experience in the application of NDA techniques, procedures and analysis to measure plutonium and uranium nuclear material standards with well known pedigrees. As part of the course the inspectors conduct an inventory verification exercise. This exercise provides inspectors the opportunity to test their abilities in performing verification measurements using the various NDA techniques. For an inspector, the verification of an item is nominally based on whether the measured assay value agrees with the declared value to within three times the historical delta value. The historical delta value represents the average difference between measured and declared values from previous measurements taken on similar material with the same measurement technology. If the measurement falls outside a limit of three times the historical delta value, the declaration is not verified. This paper uses measurement data from five years of IAEA courses to calculate a historical delta for five non-destructive assay methods: Gamma-ray Enrichment, Gamma-ray Plutonium Isotopics, Passive Neutron Coincidence Counting, Active Neutron Coincidence Counting and the Neutron Coincidence Collar. These historical deltas provide information as to the precision and accuracy of these measurement techniques under realistic conditions.
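
    The verification rule described above can be written out directly; in the sketch below the historical delta is taken as the mean absolute difference between past measured and declared values (one reasonable reading of "average difference"), and the numbers are illustrative rather than course data:

    def historical_delta(measured, declared):
        return sum(abs(m - d) for m, d in zip(measured, declared)) / len(measured)

    def verified(new_measured, new_declared, delta, k=3.0):
        # declaration accepted only if the new measurement agrees within k * delta
        return abs(new_measured - new_declared) <= k * delta

    past_measured = [102.1, 98.7, 100.9, 99.4]    # hypothetical assay values
    past_declared = [100.0, 100.0, 100.0, 100.0]  # corresponding declarations
    delta = historical_delta(past_measured, past_declared)
    print(delta, verified(97.5, 100.0, delta))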

  9. INF and IAEA: A comparative analysis of verification strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinman, L.; Kratzer, M.

    1992-07-01

    This is the final report of a study on the relevance and possible lessons of Intermediate Range Nuclear Force (INF) verification to the International Atomic Energy Agency (IAEA) international safeguards activities.

  10. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  11. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  12. Benchmarking and validation activities within JEFF project

    NASA Astrophysics Data System (ADS)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  13. GT-repeat polymorphism in the heme oxygenase-1 gene promoter is associated with cardiovascular mortality risk in an arsenic-exposed population in northeastern Taiwan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meei-Maan, E-mail: mmwu@tmu.edu.t; Graduate Institute of Oncology, College of Medicine, National Taiwan University, Taipei, Taiwan; Graduate Institute of Basic Medicine, College of Medicine, Fu-Jen Catholic University, Taipei, Taiwan

    2010-11-01

    Inorganic arsenic has been associated with increased risk of atherosclerotic vascular disease and mortality in humans. A functional GT-repeat polymorphism in the heme oxygenase-1 (HO-1) gene promoter is inversely correlated with the development of coronary artery disease and restenosis after clinical angioplasty. The relationship of HO-1 genotype with arsenic-associated cardiovascular disease has not been studied. In this study, we evaluated the relationship between the HO-1 GT-repeat polymorphism and cardiovascular mortality in an arsenic-exposed population. A total of 504 study participants were followed up for a median of 10.7 years for occurrence of cardiovascular deaths (coronary heart disease, cerebrovascular disease, and peripheral arterial disease). Cardiovascular risk factors and DNA samples for determination of HO-1 GT repeats were obtained at recruitment. GT-repeat variants were grouped into the S allele (<27 repeats) or L allele (≥27 repeats). Relative mortality risk was estimated using Cox regression analysis, adjusted for competing risk of cancer and other causes. For the L/L, L/S, and S/S genotype groups, the crude mortalities for cardiovascular disease were 8.42, 3.10, and 2.85 cases/1000 person-years, respectively. After adjusting for conventional cardiovascular risk factors and competing risk of cancer and other causes, carriers of the class S allele (L/S or S/S genotypes) had a significantly reduced risk of cardiovascular mortality compared to non-carriers (L/L genotype) [OR, 0.38; 95% CI, 0.16-0.90]. In contrast, no significant association was observed between HO-1 genotype and cancer mortality or mortality from other causes. Shorter (GT)n repeats in the HO-1 gene promoter may confer protective effects against cardiovascular mortality related to arsenic exposure.
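
    For reference, the crude mortality rates quoted above are simply events per 1000 person-years of follow-up; the study itself estimated relative risks with Cox regression adjusted for competing risks. A minimal Python sketch of the crude-rate arithmetic follows; the counts are hypothetical and chosen only to illustrate the calculation, not taken from the study's raw data.

      # Hedged sketch: crude cardiovascular mortality rate per 1000 person-years.

      def crude_rate_per_1000_py(n_events, total_person_years):
          """Events divided by follow-up time, scaled to 1000 person-years."""
          return 1000.0 * n_events / total_person_years

      # Hypothetical counts for one genotype group:
      n_cv_deaths = 12
      person_years = 1425.0    # e.g. ~140 participants followed ~10 years each
      print(round(crude_rate_per_1000_py(n_cv_deaths, person_years), 2))  # ~8.42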

  14. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764
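
    The throughput comparison described above boils down to timing each mapper on the same read set and normalizing by the number of reads. A minimal, hypothetical Python harness in that spirit is sketched below; the command lines are placeholders and would have to be replaced with each tool's actual invocation and index/read paths.

      # Hedged sketch: time external mapping commands and report reads per second.
      import subprocess
      import time

      N_READS = 1_000_000  # hypothetical size of the benchmark read set

      # Placeholder command lines -- real benchmarks must use each tool's own syntax.
      commands = {
          "toolA": ["toolA", "--index", "ref.idx", "--reads", "reads.fq", "-o", "a.sam"],
          "toolB": ["toolB", "ref.idx", "reads.fq", "b.sam"],
      }

      for name, cmd in commands.items():
          start = time.perf_counter()
          subprocess.run(cmd, check=True)          # run the mapper to completion
          elapsed = time.perf_counter() - start
          print(f"{name}: {elapsed:.1f} s, {N_READS / elapsed:.0f} reads/s")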

  15. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which will be used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models with respect to tsunami currents. Three of the benchmark problems were: current measurements of the 2011 Japan tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), which is a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, giving good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant

  16. Benchmarking child and adolescent mental health organizations.

    PubMed

    Brann, Peter; Walter, Garry; Coombs, Tim

    2011-04-01

    This paper describes aspects of the child and adolescent benchmarking forums that were part of the National Mental Health Benchmarking Project (NMHBP). These forums enabled participating child and adolescent mental health organizations to benchmark themselves against each other, with a view to understanding variability in performance against a range of key performance indicators (KPIs). Six child and adolescent mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against relevant KPIs. They also undertook two special projects designed to help them understand the variation in performance on given KPIs. There was considerable inter-organization variability on many of the KPIs. Even within organizations, there was often substantial variability over time. The variability in indicator data raised many questions for participants. This challenged participants to better understand and describe their local processes, prompted them to collect additional data, and stimulated them to make organizational comparisons. These activities fed into a process of reflection about their performance. Benchmarking has the potential to illuminate intra- and inter-organizational performance in the child and adolescent context.

  17. Benchmarks: The Development of a New Approach to Student Evaluation.

    ERIC Educational Resources Information Center

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  18. Colletotrichine A, a new sesquiterpenoid from Colletotrichum gloeosporioides GT-7, a fungal endophyte of Uncaria rhynchophylla.

    PubMed

    Chen, Xiao-Wei; Yang, Zhong-Duo; Sun, Jian-Hui; Song, Tong-Tong; Zhu, Bao-Ying; Zhao, Jun-Wen

    2018-04-01

    One new compound, Colletotrichine A (1), was produced by the fungus Colletotrichum gloeosporioides GT-7. The structure was established by 1D and 2D NMR spectra. The monoamine oxidase (MAO) and acetylcholinesterase (AChE) inhibitory activities of 1 were also evaluated. Compound 1 showed AChE-inhibiting activity with an IC50 value of 28 μg/mL.

  19. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  20. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  1. 42 CFR 457.430 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...

  2. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark-equivalent health benefits coverage. 440.335 Section 440.335 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a...

  3. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark-equivalent health benefits coverage. 440.335 Section 440.335 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a...

  4. Taking the Battle Upstream: Towards a Benchmarking Role for NATO

    DTIC Science & Technology

    2012-09-01

    [Only front-matter fragments were captured for this report, e.g. "Figure 8. World Bank Benchmarking Work on Quality of Governance" and "In Search of a Benchmarking Theory for the Public Sector"; no abstract text is available.]

  5. Human Health Benchmarks for Pesticides

    EPA Pesticide Factsheets

    Advanced testing methods now allow pesticides to be detected in water at very low levels. These small amounts of pesticides detected in drinking water or source water for drinking water do not necessarily indicate a health risk. The EPA has developed human health benchmarks for 363 pesticides to enable our partners to better determine whether the detection of a pesticide in drinking water or source waters for drinking water may indicate a potential health risk and to help them prioritize monitoring efforts. The table below includes benchmarks for acute (one-day) and chronic (lifetime) exposures for the most sensitive populations from exposure to pesticides that may be found in surface or ground water sources of drinking water. The table also includes benchmarks for 40 pesticides in drinking water that have the potential for cancer risk. The HHBP table includes pesticide active ingredients for which Health Advisories or enforceable National Primary Drinking Water Regulations (e.g., maximum contaminant levels) have not been developed.

  6. Feasibility study for SOFC-GT hybrid locomotive power: Part I. Development of a dynamic 3.5 MW SOFC-GT FORTRAN model

    NASA Astrophysics Data System (ADS)

    Martinez, Andrew S.; Brouwer, Jacob; Samuelsen, G. Scott

    2012-09-01

    This work presents the development of a dynamic SOFC-GT hybrid system model applied to a long-haul freight locomotive in operation. Given the expectations of the rail industry, the model is used to develop a preliminary analysis of the proposed system's operational capability on conventional diesel fuel as well as natural gas and hydrogen as potential fuels in the future. It is found that operation of the system on all three of these fuels is feasible with favorable efficiencies and reasonable dynamic response. The use of diesel fuel reformate in the SOFC presents a challenge to the electrochemistry, especially as it relates to control and optimization of the fuel utilization in the anode compartment. This is found to arise from the large amount of carbon monoxide in diesel reformate that is fed to the fuel cell, limiting the maximum fuel utilization possible. This presents an opportunity for further investigations into carbon monoxide electrochemical oxidation and/or system integration studies where the efficiency of the fuel reformer can be balanced against the needs of the SOFC.
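
    The fuel-utilization limit mentioned above can be made concrete with a back-of-the-envelope calculation. The sketch below is an assumption-laden illustration, not the paper's FORTRAN model: it counts the electrons available from each reformate species and asks what fraction of the total fuel (on an electron-equivalent basis) could be used if carbon monoxide were neither electrochemically oxidized nor shifted to hydrogen in the anode. The reformate composition is hypothetical.

      # Hedged sketch: electron bookkeeping for SOFC fuel utilization on reformate.
      # Electrons released per mole if fully electrochemically oxidized:
      #   H2 -> 2 e-, CO -> 2 e-, CH4 -> 8 e-
      ELECTRONS = {"H2": 2, "CO": 2, "CH4": 8}

      def max_utilization_without_co(mole_fractions):
          """Fraction of the reformate's electron content carried by species other
          than CO.  If CO cannot be oxidized electrochemically (and is not shifted
          to H2 in the anode), this bounds the achievable fuel utilization."""
          total = sum(ELECTRONS[s] * x for s, x in mole_fractions.items())
          usable = sum(ELECTRONS[s] * x for s, x in mole_fractions.items() if s != "CO")
          return usable / total

      # Hypothetical dry diesel-reformate composition (mole fractions of fuel species):
      reformate = {"H2": 0.60, "CO": 0.35, "CH4": 0.05}
      print(f"max fuel utilization without CO oxidation: "
            f"{max_utilization_without_co(reformate):.2f}")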

  7. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.

  8. Evaluation of control strategies using an oxidation ditch benchmark.

    PubMed

    Abusam, A; Keesman, K J; Spanjers, H; van Straten, G; Meinema, K

    2002-01-01

    This paper presents validation and implementation results of a benchmark developed for a specific full-scale oxidation ditch wastewater treatment plant. A benchmark is a standard simulation procedure that can be used as a tool in evaluating various control strategies proposed for wastewater treatment plants. It is based on model and performance criteria development. Testing of this benchmark, by comparing benchmark predictions to real measurements of the electrical energy consumptions and amounts of disposed sludge for a specific oxidation ditch WWTP, has shown that it can (reasonably) be used for evaluating the performance of this WWTP. Subsequently, the validated benchmark was then used in evaluating some basic and advanced control strategies. Some of the interesting results obtained are the following: (i) influent flow splitting ratio, between the first and the fourth aerated compartments of the ditch, has no significant effect on the TN concentrations in the effluent, and (ii) for evaluation of long-term control strategies, future benchmarks need to be able to assess settlers' performance.

  9. Benchmarking to improve the quality of cystic fibrosis care.

    PubMed

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  10. Upgrading the GT-2A aerogravimetric complex for airborne gravity measurements in the Arctic

    NASA Astrophysics Data System (ADS)

    Koneshov, V. N.; Klevtsov, V. V.; Solov'ev, V. N.

    2016-05-01

    The methodical solutions for improving the GT-2A aerogravimetric complexes by incorporating the Javad Quattro-G3D GPS receiver connected to four antennas spaced in two orthogonal planes are discussed. The operation features of the advanced aerogravimetric complex are described and the results of its application during the testing flight to 78° N latitude are presented. The anomalous gravity obtained in the testing flight is compared with the EGM2008 and EIGEN-6C models.

  11. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...

  12. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...

  13. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...

  14. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...

  15. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...

  16. Python/Lua Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busby, L.

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
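
    As a flavor of what such a benchmark measures, here is a minimal Python timing harness for one SciMark-style kernel (Monte Carlo integration of π). This is only an illustrative sketch, not the benchmark code described in the record, and the iteration count is arbitrary.

      # Hedged sketch: time a SciMark-style Monte Carlo kernel in pure Python.
      import random
      import time

      def monte_carlo_pi(n):
          """Estimate pi by sampling points in the unit square."""
          inside = 0
          for _ in range(n):
              x, y = random.random(), random.random()
              if x * x + y * y <= 1.0:
                  inside += 1
          return 4.0 * inside / n

      N = 2_000_000
      start = time.perf_counter()
      estimate = monte_carlo_pi(N)
      elapsed = time.perf_counter() - start
      print(f"pi ~= {estimate:.5f}, {N / elapsed / 1e6:.2f} Msamples/s")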

  17. Criticality calculations of the Very High Temperature reactor Critical Assembly benchmark with Serpent and SCALE/KENO-VI

    DOE PAGES

    Bostelmann, Friederike; Hammer, Hans R.; Ortensi, Javier; ...

    2015-12-30

    Within the framework of the IAEA Coordinated Research Project on HTGR Uncertainty Analysis in Modeling, criticality calculations of the Very High Temperature Critical Assembly experiment were performed as the validation reference to the prismatic MHTGR-350 lattice calculations. Criticality measurements performed at several temperature points at this Japanese graphite-moderated facility were recently included in the International Handbook of Evaluated Reactor Physics Benchmark Experiments, and represent one of the few data sets available for the validation of HTGR lattice physics. This work compares VHTRC criticality simulations utilizing the Monte Carlo codes Serpent and SCALE/KENO-VI. Reasonable agreement was found between Serpent and KENO-VI, but only the use of the latest ENDF cross section library release, namely the ENDF/B-VII.1 library, led to an improved match with the measured data. Furthermore, the fourth beta release of SCALE 6.2/KENO-VI showed significant improvements over the current SCALE 6.1.2 version, compared to the experimental values and Serpent.

  18. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.
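
    The optimization loop described above can be sketched generically: a genetic algorithm searches over mixed integer/continuous system parameters to minimize the gap between a modeled task-difficulty measure and the benchmark value. The Python sketch below is a hypothetical stand-in; modeled_task_difficulty is a placeholder for the NVThermIP-based objective used in the paper, and the parameters, bounds, and GA settings are invented.

      # Hedged sketch: GA over mixed integer/continuous fused-system parameters.
      import random

      BENCHMARK_DIFFICULTY = 2.5   # hypothetical benchmark task-difficulty value

      def modeled_task_difficulty(params):
          """Placeholder for a sensor-performance model such as NVThermIP."""
          n_bands, blur, gain = params
          return 1.0 + 0.5 * n_bands + 2.0 * blur - 0.3 * gain   # toy surrogate

      def fitness(params):
          return abs(modeled_task_difficulty(params) - BENCHMARK_DIFFICULTY)

      def random_individual():
          # (integer number of fused bands, continuous blur, continuous gain)
          return (random.randint(1, 4), random.uniform(0.0, 1.0), random.uniform(0.0, 5.0))

      def mutate(p):
          n, blur, gain = p
          return (max(1, min(4, n + random.choice((-1, 0, 1)))),
                  min(1.0, max(0.0, blur + random.gauss(0, 0.05))),
                  min(5.0, max(0.0, gain + random.gauss(0, 0.2))))

      def crossover(a, b):
          return tuple(random.choice(pair) for pair in zip(a, b))

      population = [random_individual() for _ in range(50)]
      for generation in range(100):
          population.sort(key=fitness)
          parents = population[:10]                     # truncation selection
          children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(40)]
          population = parents + children

      best = min(population, key=fitness)
      print(best, fitness(best))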

  19. The Arabidopsis Family GT43 Glycosyltransferases Form Two Functionally Nonredundant Groups Essential for the Elongation of Glucuronoxylan Backbone

    EPA Science Inventory

    There exist four members of family GT43 glycosyltransferases in the Arabidopsis (Arabidopsis thaliana) genome, and mutations of two of them, IRX9 and IRX14, have previously been shown to cause a defect in glucuronoxylan (GX) biosynthesis. However, it is currently unknown whether ...

  20. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  1. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
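
    For readers unfamiliar with the class of models being benchmarked, a minimal spinodal-decomposition example is sketched below: an explicit finite-difference Cahn-Hilliard solver on a periodic 2-D grid. This is an illustrative toy, not one of the CHiMaD/NIST benchmark problems, and the parameter values are arbitrary.

      # Hedged sketch: explicit Cahn-Hilliard spinodal decomposition on a periodic grid.
      # Free energy density f(c) = (c^2 - 1)^2 / 4, so f'(c) = c^3 - c.
      import numpy as np

      def laplacian(f, dx):
          """5-point Laplacian with periodic boundary conditions."""
          return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                  np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

      N, dx, dt = 128, 1.0, 0.01          # grid size, spacing, (small) explicit time step
      kappa, mobility = 1.0, 1.0          # gradient-energy coefficient, mobility
      rng = np.random.default_rng(0)
      c = 0.05 * rng.standard_normal((N, N))   # small fluctuations about c = 0

      for step in range(20000):
          mu = c**3 - c - kappa * laplacian(c, dx)      # chemical potential
          c += dt * mobility * laplacian(mu, dx)        # dc/dt = div(M grad mu)

      print("composition range after coarsening:", c.min(), c.max())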

  2. IAEA Nuclear Data Section: provision of atomic and nuclear databases for user applications.

    PubMed

    Humbert, Denis P; Nichols, Alan L; Schwerer, Otto

    2004-01-01

    The Nuclear Data Section (NDS) of the International Atomic Energy Agency (IAEA) provides a wide range of atomic and nuclear data services to scientists worldwide, with particular emphasis placed on the needs of developing countries. Highly focused Co-ordinated Research Projects and multinational data networks are sponsored under the auspices of the IAEA for the development and assembly of databases through the organised participation of specialists from Member States. More than 100 data libraries are readily available cost-free through the Internet, CD-ROM and other media. These databases are used in a wide range of applications, including fission- and fusion-energy, non-energy applications and basic research studies. Further information concerning the various services can be found through the web site of the IAEA Nuclear Data Section and through a mirror site at IPEN, Brazil, that is maintained by NDS staff.

  3. Calibrated sulfur isotope abundance ratios of three IAEA sulfur isotope reference materials and V-CDT with a reassessment of the atomic weight of sulfur

    NASA Astrophysics Data System (ADS)

    Ding, T.; Valkiers, S.; Kipphardt, H.; De Bièvre, P.; Taylor, P. D. P.; Gonfiantini, R.; Krouse, R.

    2001-08-01

    Calibrated values have been obtained for sulfur isotope abundance ratios of sulfur isotope reference materials distributed by the IAEA (Vienna). For the calibration of the measurements, a set of synthetic isotope mixtures were prepared gravimetrically from high purity Ag2S materials enriched in 32S, 33S, and 34S. All materials were converted into SF6 gas and subsequently, their sulfur isotope ratios were measured on the SF5+ species using a special gas source mass spectrometer equipped with a molecular flow inlet system (IRMM's Avogadro II amount comparator). Values for the 32S/34S abundance ratios are 22.650 4(20), 22.142 4(20), and 23.393 3(17) for IAEA-S-1, IAEA-S-2, and IAEA-S-3, respectively. The calculated 32S/34S abundance ratio for V-CDT is 22.643 6(20), which is very close to the calibrated ratio obtained by Ding et al. (1999). In this way, the zero point of the VCDT scale is anchored firmly to the international system of units SI. The 32S/33S abundance ratios are 126.942(47), 125.473(55), 129.072(32), and 126.948(47) for IAEA-S-1, IAEA-S-2, IAEA-S-3, and V-CDT, respectively. In this way, the linearity of the V-CDT scale is improved over this range. The values of the sulfur molar mass for IAEA-S-1 and V-CDT were calculated to be 32.063 877(56) and 32.063 911(56), respectively, the values with the smallest combined uncertainty ever reported for the sulfur molar masses (atomic weights).
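
    On the V-CDT scale, a δ34S value is simply the per-mil deviation of a sample's 34S/32S ratio from that of V-CDT. A short Python check using the calibrated ratios quoted above (the paper reports 32S/34S, so the ratios are inverted before taking the delta):

      # delta34S (per mil) = (R_sample / R_VCDT - 1) * 1000, with R = 34S/32S.
      R32_34 = {"IAEA-S-1": 22.6504, "IAEA-S-2": 22.1424, "IAEA-S-3": 23.3933}
      R32_34_VCDT = 22.6436

      def delta34S(r32_34_sample):
          r_sample = 1.0 / r32_34_sample          # convert 32S/34S to 34S/32S
          r_vcdt = 1.0 / R32_34_VCDT
          return (r_sample / r_vcdt - 1.0) * 1000.0

      for name, r in R32_34.items():
          print(name, round(delta34S(r), 2))      # S-1 ~ -0.30, S-2 ~ +22.6, S-3 ~ -32.0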

  4. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  5. Proficiency Testing as a tool to monitor consistency of measurements in the IAEA/WHO Network of Secondary Standards Dosimetry Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meghzifene, Ahmed; Czap, Ladislav; Shortt, Ken

    2008-08-14

    The International Atomic Energy Agency (IAEA) and the World Health Organization (WHO) established a Network of Secondary Standards Dosimetry Laboratories (IAEA/WHO SSDL Network) in 1976. Through SSDLs designated by Member States, the Network provides a direct link of national dosimetry standards to the international measurement system of standards traceable to the Bureau International des Poids et Mesures (BIPM). Within this structure and through the proper calibration of field instruments, the SSDLs disseminate S.I. quantities and units. To ensure that the services provided by SSDL members to end-users follow internationally accepted standards, the IAEA has set up two different comparison programmes. One programme relies on the IAEA/WHO postal TLD service and the other uses comparisons of calibrated ionization chambers to help the SSDLs verify the integrity of their national standards and the procedures used for the transfer of the standards to the end-users. The IAEA comparisons include ⁶⁰Co air kerma (N_K) and absorbed dose to water (N_D,W) coefficients. The results of the comparisons are confidential and are communicated only to the participants. This is to encourage participation of the laboratories and their full cooperation in the reconciliation of any discrepancy. This work describes the results of the IAEA programme comparing calibration coefficients for radiotherapy dosimetry, using ionization chambers. In this programme, ionization chambers that belong to the SSDLs are calibrated sequentially at the SSDL, at the IAEA, and again at the SSDL. As part of its own quality assurance programme, the IAEA has participated in several regional comparisons organized by Regional Metrology Organizations. The results of the IAEA comparison programme show that the majority of SSDLs are capable of providing calibrations that fall inside the acceptance level of 1.5% compared to the IAEA.

  6. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
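
    A minimal sketch of the kind of cross-dataset comparison described above, using scikit-learn classifiers. It assumes that the PMLB Python package's fetch_data helper is available and that the listed dataset names exist in the collection, so treat both as illustrative placeholders.

      # Hedged sketch: score a few classifiers across several PMLB datasets.
      from pmlb import fetch_data                     # assumed PMLB helper
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      datasets = ["mushroom", "iris", "spambase"]     # hypothetical selection
      models = {
          "logreg": LogisticRegression(max_iter=1000),
          "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
      }

      for ds in datasets:
          X, y = fetch_data(ds, return_X_y=True)
          for name, model in models.items():
              scores = cross_val_score(model, X, y, cv=5)
              print(f"{ds:10s} {name:14s} accuracy = {scores.mean():.3f}")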

  7. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance... interoffice transmission using the telephone company's DS1 special access rates. (b) Initial transport rates...

  8. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance... interoffice transmission using the telephone company's DS1 special access rates. (b) Initial transport rates...

  9. Electric-Drive Vehicle Thermal Performance Benchmarking | Transportation

    Science.gov Websites

    studies are as follows: Characterize the thermal resistance and conductivity of various layers in the … [A photo of the internal components of an automotive inverter.]

  10. Validation of the Actigraph GT3X and ActivPAL Accelerometers for the Assessment of Sedentary Behavior

    ERIC Educational Resources Information Center

    Kim, Youngdeok; Barry, Vaughn W.; Kang, Minsoo

    2015-01-01

    This study examined (a) the validity of two accelerometers (ActiGraph GT3X [ActiGraph LLC, Pensacola, FL, USA] and activPAL [PAL Technologies Ltd., Glasgow, Scotland]) for the assessment of sedentary behavior; and (b) the variations in assessment accuracy by setting minimum sedentary bout durations against a proxy for direct observation using an…

  11. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  12. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  13. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  14. Proteomic and biochemical assays of glutathione-related proteins in susceptible and multiple herbicide resistant Avena fatua L.

    PubMed

    Burns, Erin E; Keith, Barbara K; Refai, Mohammed Y; Bothner, Brian; Dyer, William E

    2017-08-01

    Extensive herbicide usage has led to the evolution of resistant weed populations that cause substantial crop yield losses and increase production costs. The multiple herbicide resistant (MHR) Avena fatua L. populations utilized in this study are resistant to members of all selective herbicide families, across five modes of action, available for A. fatua control in U.S. small grain production, and thus pose significant agronomic and economic threats. Resistance to ALS and ACCase inhibitors is not conferred by target site mutations, indicating that non-target site resistance mechanisms are involved. To investigate the potential involvement of glutathione-related enzymes in the MHR phenotype, we used a combination of proteomic, biochemical, and immunological approaches to compare their constitutive activities in herbicide susceptible (HS1 and HS2) and MHR (MHR3 and MHR4) A. fatua plants. Proteomic analysis identified three tau and one phi glutathione S-transferases (GSTs) present at higher levels in MHR compared to HS plants, while immunoassays revealed elevated levels of lambda, phi, and tau GSTs. GST specific activity towards 1-chloro-2,4-dinitrobenzene was 1.2-fold higher in MHR4 than in HS1 plants and 1.3- and 1.2-fold higher in MHR3 than in HS1 and HS2 plants, respectively. However, GST specific activities towards fenoxaprop-P-ethyl and imazamethabenz-methyl were not different between untreated MHR and HS plants. Dehydroascorbate reductase specific activity was 1.4-fold higher in MHR than HS plants. Pretreatment with the GST inhibitor NBD-Cl did not affect MHR sensitivity to fenoxaprop-P-ethyl application, while the herbicide safener and GST inducer mefenpyr reduced the efficacy of low doses of fenoxaprop-P-ethyl on MHR4 but not MHR3 plants. Mefenpyr treatment also partially reduced the efficacy of thiencarbazone-methyl or mesosulfuron-methyl on MHR3 or MHR4 plants, respectively. Overall, the GSTs described here are not directly involved in enhanced rates of

  15. Sequoia Messaging Rate Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
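
    The rank layout described above is easy to state in code. The helper below (an illustrative Python sketch, not part of the benchmark itself) returns, for each core rank, the global ranks that act as its neighbors:

      # Hedged sketch of the Sequoia messaging-rate rank layout.
      def neighbor_ranks(num_cores, num_nbors):
          """Core ranks are 0 .. num_cores-1; the next num_nbors ranks belong to
          core rank 0, the following num_nbors to core rank 1, and so on."""
          layout = {}
          for core in range(num_cores):
              first = num_cores + core * num_nbors
              layout[core] = list(range(first, first + num_nbors))
          return layout

      layout = neighbor_ranks(num_cores=8, num_nbors=4)   # 8 + 8*4 = 40 ranks total
      print(layout[0])   # [8, 9, 10, 11]
      print(layout[7])   # [36, 37, 38, 39]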

  16. Using a health promotion model to promote benchmarking.

    PubMed

    Welby, Jane

    2006-07-01

    The North East (England) Neonatal Benchmarking Group has been established for almost a decade and has researched and developed a substantial number of evidence-based benchmarks. With no firm evidence that these were being used or that there was any standardisation of neonatal care throughout the region, the group embarked on a programme to review the benchmarks and determine what evidence-based guidelines were needed to support standardisation. A health promotion planning model was used by one subgroup to structure the programme; it enabled all members of the sub group to engage in the review process and provided the motivation and supporting documentation for implementation of changes in practice. The need for a regional guideline development group to complement the activity of the benchmarking group is being addressed.

  17. The 1998 Australian external beam radiotherapy survey and IAEA/WHO TLD postal dose quality audit.

    PubMed

    Huntley, R; Izewska, J

    2000-03-01

    The results of an updated Australian survey of external beam radiotherapy centres are presented. Most of the centres provided most of the requested information. The relative caseloads of various linear accelerator photon and electron beams have not changed significantly since the previous survey in 1995. The mean age of Australian LINACs is 7.1 years and that of other radiotherapy machines is 14.7 years. Every Australian radiotherapy centre participated in a special run of the IAEA/WHO TLD postal dose quality audit program, which was provided for Australian centres by the IAEA and WHO in May 1998. The dose quoted by the centres was in nearly every case within 1.5% of the dose assessed by the IAEA. This is within the combined standard uncertainty of the IAEA TLD service (1.8%). The results confirm the accuracy and precision of radiotherapy dosimetry in Australia and the adequate dissemination of the Australian standards from ARL (now ARPANSA) to the centres. The Australian standards have recently been shown to agree with those of other countries to within 0.25% by comparison with the BIPM.

  18. IAEA CIELO Evaluation of Neutron-induced Reactions on 235U and 238U Targets

    DOE PAGES

    Capote, R.; Trkov, A.; Sin, M.; ...

    2018-02-01

    Evaluations of nuclear reaction data for the major uranium isotopes 238U and 235U were performed within the scope of the CIELO Project on the initiative of the OECD/NEA Data Bank under Working Party on Evaluation Co-operation (WPEC) Subgroup 40 coordinated by the IAEA Nuclear Data Section. Both the mean values and covariances are evaluated from 10⁻⁵ eV up to 30 MeV. The resonance parameters of 238U and 235U were re-evaluated with the addition of newly available data to the existing experimental database. The evaluations in the fast neutron range are based on nuclear model calculations with the code EMPIRE-3.2 Malta above the resonance range up to 30 MeV. 235U(n,f), 238U(n,f), and 238U(n,γ) cross sections and 235U(nth,f) prompt fission neutron spectrum (PFNS) were evaluated within the Neutron Standards project and are representative of the experimental state-of-the-art measurements. The Standards cross sections were matched in model calculations as closely as possible to guarantee a good predictive power for cross sections of competing neutron scattering channels. 235U(n,γ) cross section includes fluctuations observed in recent experiments. 235U(n,f) PFNS for incident neutron energies from 500 keV to 20 MeV were measured at Los Alamos Chi-Nu facility and re-evaluated using all available experimental data. While respecting the measured differential data, several compensating errors in previous evaluations were identified and removed so that the performance in integral benchmarks was restored or improved. Covariance matrices for 235U and 238U cross sections, angular distributions, spectra and neutron multiplicities were evaluated using the GANDR system that combines experimental data with model uncertainties. Unrecognized systematic uncertainties were considered in the uncertainty quantification for fission and capture cross sections above the thermal range, and for neutron multiplicities. Evaluated files were extensively benchmarked to ensure good

  19. IAEA CIELO Evaluation of Neutron-induced Reactions on 235U and 238U Targets

    NASA Astrophysics Data System (ADS)

    Capote, R.; Trkov, A.; Sin, M.; Pigni, M. T.; Pronyaev, V. G.; Balibrea, J.; Bernard, D.; Cano-Ott, D.; Danon, Y.; Daskalakis, A.; Goričanec, T.; Herman, M. W.; Kiedrowski, B.; Kopecky, S.; Mendoza, E.; Neudecker, D.; Leal, L.; Noguere, G.; Schillebeeckx, P.; Sirakov, I.; Soukhovitskii, E. S.; Stetcu, I.; Talou, P.

    2018-02-01

    Evaluations of nuclear reaction data for the major uranium isotopes 238U and 235U were performed within the scope of the CIELO Project on the initiative of the OECD/NEA Data Bank under Working Party on Evaluation Co-operation (WPEC) Subgroup 40 coordinated by the IAEA Nuclear Data Section. Both the mean values and covariances are evaluated from 10⁻⁵ eV up to 30 MeV. The resonance parameters of 238U and 235U were re-evaluated with the addition of newly available data to the existing experimental database. The evaluations in the fast neutron range are based on nuclear model calculations with the code EMPIRE-3.2 Malta above the resonance range up to 30 MeV. 235U(n,f), 238U(n,f), and 238U(n,γ) cross sections and 235U(nth,f) prompt fission neutron spectrum (PFNS) were evaluated within the Neutron Standards project and are representative of the experimental state-of-the-art measurements. The Standards cross sections were matched in model calculations as closely as possible to guarantee a good predictive power for cross sections of competing neutron scattering channels. 235U(n,γ) cross section includes fluctuations observed in recent experiments. 235U(n,f) PFNS for incident neutron energies from 500 keV to 20 MeV were measured at Los Alamos Chi-Nu facility and re-evaluated using all available experimental data. While respecting the measured differential data, several compensating errors in previous evaluations were identified and removed so that the performance in integral benchmarks was restored or improved. Covariance matrices for 235U and 238U cross sections, angular distributions, spectra and neutron multiplicities were evaluated using the GANDR system that combines experimental data with model uncertainties. Unrecognized systematic uncertainties were considered in the uncertainty quantification for fission and capture cross sections above the thermal range, and for neutron multiplicities. Evaluated files were extensively benchmarked to ensure good performance in

  20. IAEA CIELO Evaluation of Neutron-induced Reactions on 235U and 238U Targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Trkov, A.; Sin, M.

    Evaluations of nuclear reaction data for the major uranium isotopes 238U and 235U were performed within the scope of the CIELO Project on the initiative of the OECD/NEA Data Bank under Working Party on Evaluation Co-operation (WPEC) Subgroup 40 coordinated by the IAEA Nuclear Data Section. Both the mean values and covariances are evaluated from 10⁻⁵ eV up to 30 MeV. The resonance parameters of 238U and 235U were re-evaluated with the addition of newly available data to the existing experimental database. The evaluations in the fast neutron range are based on nuclear model calculations with the code EMPIRE-3.2 Malta above the resonance range up to 30 MeV. 235U(n,f), 238U(n,f), and 238U(n,γ) cross sections and 235U(nth,f) prompt fission neutron spectrum (PFNS) were evaluated within the Neutron Standards project and are representative of the experimental state-of-the-art measurements. The Standards cross sections were matched in model calculations as closely as possible to guarantee a good predictive power for cross sections of competing neutron scattering channels. 235U(n,γ) cross section includes fluctuations observed in recent experiments. 235U(n,f) PFNS for incident neutron energies from 500 keV to 20 MeV were measured at Los Alamos Chi-Nu facility and re-evaluated using all available experimental data. While respecting the measured differential data, several compensating errors in previous evaluations were identified and removed so that the performance in integral benchmarks was restored or improved. Covariance matrices for 235U and 238U cross sections, angular distributions, spectra and neutron multiplicities were evaluated using the GANDR system that combines experimental data with model uncertainties. Unrecognized systematic uncertainties were considered in the uncertainty quantification for fission and capture cross sections above the thermal range, and for neutron multiplicities. Evaluated files were extensively benchmarked to ensure good

  1. The US Support Program to IAEA Safeguards Priority of Training and Human Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Queirolo,A.

    2008-06-13

    The U.S. Support Program to IAEA Safeguards (USSP) priority of training and human resources is aimed at providing the Department of Safeguards with an appropriate mixture of regular staff and extrabudgetary experts who are qualified to meet the IAEA's technical needs and to provide personnel with appropriate instruction to improve the technical basis and specific skills needed to perform their job functions. The equipment and methods used in inspection activities are unique, complex, and evolving. New and experienced safeguards inspectors need timely and effective training to perform required tasks and to learn new skills prescribed by new safeguards policies or agreements. The role of the inspector has changed from that of strictly an accountant to include that of a detective. New safeguards procedures are being instituted, and therefore, experienced inspectors must be educated on these new procedures. The USSP also recognizes the need for training safeguards support staff, particularly those who maintain and service safeguards equipment (SGTS), and those who perform information collection and analysis (SGIM). The USSP is committed to supporting the IAEA with training to ensure the effectiveness of all staff members and will continue to offer its assistance in the development and delivery of basic, refresher, and advanced training courses. This paper will discuss the USSP ongoing support in the area of training and IAEA staffing.

  2. IAEA activities in the area of partitioning and transmutation

    NASA Astrophysics Data System (ADS)

    Stanculescu, Alexander

    2006-06-01

    Four major challenges are facing the long-term development of nuclear energy: improvement of the economic competitiveness, meeting increasingly stringent safety requirements, adhering to the criteria of sustainable development, and public acceptance. Meeting the sustainability criteria is the driving force behind the topic of this paper. In this context, sustainability has two aspects: natural resources and waste management. IAEA's activities in the area of Partitioning and Transmutation (P&T) are mostly in response to the latter. While not involving the large quantities of gaseous products and toxic solid wastes associated with fossil fuels, radioactive waste disposal is today's dominant public acceptance issue. In fact, small waste quantities permit a rigorous confinement strategy, and mined geological disposal is the strategy followed by some countries. Nevertheless, political opposition arguing that this does not yet constitute a safe disposal technology has largely stalled these efforts. One of the primary reasons cited is the long life of many of the radioisotopes generated from fission. This concern has led to increased R&D efforts to develop a technology aimed at reducing the amount and radio-toxicity of long-lived radioactive waste through transmutation in fission reactors or sub-critical systems. In the frame of the Project on Technology Advances in Fast Reactors and Accelerator-Driven Systems (ADS), the IAEA initiated a number of activities on utilization of plutonium and transmutation of long-lived radioactive waste, ADS, and deuterium-tritium plasma-driven sub-critical systems. The paper presents past accomplishments, current status and planned activities of this IAEA project.

  3. In vitro evidence of glucose-induced toxicity in GnRH secreting neurons: high glucose concentrations influence GnRH secretion, impair cell viability, and induce apoptosis in the GT1-1 neuronal cell line.

    PubMed

    Pal, Lubna; Chu, Hsiao-Pai; Shu, Jun; Topalli, Ilir; Santoro, Nanette; Karkanias, George

    2007-10-01

    To evaluate for direct toxic effects of high glucose concentrations on cellular physiology in GnRH secreting immortalized GT1-1 neurons. Prospective experimental design. In vitro experimental model using a cell culture system. GT1-1 cells were cultured in replicates in media with two different glucose concentrations (450 mg/dL and 100 mg/dL, respectively) for varying time intervals (24, 48, and 72 hours). Effects of glucose concentrations on GnRH secretion by the GT1-1 neurons were evaluated using a static culture model. Cell viability, cellular apoptosis, and cell cycle events in GT1-1 neurons maintained in two different glucose concentrations were assessed by flow cytometry (fluorescence-activated cell sorter) using Annexin V-PI staining. Adverse influences of high glucose concentrations on GnRH secretion and cell viability were noted in cultures maintained in high glucose concentration (450 mg/dL) culture medium for varying time intervals. A significantly higher percentage of cells maintained in high glucose concentration medium demonstrated evidence of apoptosis by a fluorescence-activated cell sorter. We provide in vitro evidence of glucose-induced cellular toxicity in GnRH secreting GT1-1 neurons. Significant alterations in GnRH secretion, reduced cell viability, and a higher percentage of apoptotic cells were observed in GT1-1 cells maintained in high (450 mg/dL) compared with low (100 mg/dL) glucose concentration culture medium.

  4. Differences in levels of albumin, ALT, AST, γ-GT and creatinine in frail, moderately healthy and healthy elderly individuals.

    PubMed

    Edvardsson, Maria; Sund-Levander, Märtha; Milberg, Anna; Wressle, Ewa; Marcusson, Jan; Grodzinsky, Ewa

    2018-02-23

    Reference intervals are widely used as decision tools, providing the physician with information about whether the analyte values indicate an ongoing disease process. Reference intervals are generally based on individuals without diagnosed diseases or use of medication, which often excludes the elderly. The aim of the study was to assess levels of albumin, alanine aminotransferase (ALT), aspartate aminotransferase (AST), creatinine and γ-glutamyl transferase (γ-GT) in frail, moderately healthy and healthy elderly individuals. Blood samples were collected from individuals >80 years old, nursing home residents, in the Elderly in Linköping Screening Assessment and Nordic Reference Interval Project, a total of 569 individuals. They were divided into three cohorts: frail, moderately healthy and healthy, depending on cognitive and physical function. Albumin, ALT, AST, creatinine and γ-GT were analyzed using routine methods. In linear regression, the factors predicting 34% of the variance in albumin were activities of daily living (ADL), gender, stroke and cancer. ADLs, gender and weight explained 15% of changes in ALT. For AST levels, ADLs, cancer and analgesics explained 5% of changes. Kidney disease, gender, Mini Mental State Examination (MMSE) and chronic obstructive pulmonary disease explained 25% of the variation in creatinine levels, and MMSE explained three per cent of γ-GT variation. Just because a group of people are the same age does not mean they should all be assessed in the same way. To interpret results of laboratory tests in the elderly is a complex task, where reference intervals are one part, but far from the only one, to take into consideration.

  5. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
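
    One common way to turn genus-level extirpation data into a single benchmark value is to take a low percentile of the extirpation-concentration distribution. The sketch below assumes the benchmark is the 5th percentile of genus XC95 values; the approach and numbers are illustrative and not taken verbatim from the EPA report.

```python
import numpy as np

# Hypothetical XC95 values: conductivity (uS/cm) at which each invertebrate
# genus is effectively extirpated (invented numbers for illustration).
xc95 = np.array([210, 260, 295, 340, 420, 510, 640, 780, 950, 1200])

# Field-based benchmark taken as the 5th percentile of the XC95 distribution,
# i.e. a level intended to protect roughly 95% of the genera.
benchmark = np.percentile(xc95, 5)
print(f"illustrative conductivity benchmark: {benchmark:.0f} uS/cm")
```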

  6. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Robinson, William H.; Rech, Paolo

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  7. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGES

    Quinn, Heather; Robinson, William H.; Rech, Paolo; ...

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  8. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  9. Benchmarking CRISPR on-target sgRNA design.

    PubMed

    Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi

    2017-02-15

    CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but few of them have been systematically compared. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA efficacy prediction was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Algorithm and Architecture Independent Benchmarking with SEAK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  11. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    PubMed

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.

  12. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The authors' group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is due "equally" not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (named (A)) that produce the neutrons carrying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  13. 78 FR 20386 - Notice of Receipt of Petition for Decision That Nonconforming 2012 Porsche GT3RS Passenger Cars...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-04

    ... Passenger Cars Are Eligible for Importation AGENCY: National Highway Traffic Safety Administration, DOT... passenger cars that were not originally manufactured to comply with all applicable Federal Motor Vehicle...-006) has petitioned NHTSA to decide whether nonconforming 2012 Porsche GT3RS passenger cars are...

  14. Cross-industry benchmarking: is it applicable to the operating room?

    PubMed

    Marco, A P; Hart, S

    2001-01-01

    The use of benchmarking has been growing in nonmedical industries. This concept is being increasingly applied to medicine as the industry strives to improve quality and improve financial performance. Benchmarks can be either internal (set by the institution) or external (use other's performance as a goal). In some industries, benchmarking has crossed industry lines to identify breakthroughs in thinking. In this article, we examine whether the airline industry can be used as a source of external process benchmarking for the operating room.

  15. Petrography and geochemistry of precambrian rocks from GT-2 and EE-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laughlin, A.W.; Eddy, A.

    1977-08-01

    During the drilling of GT-2 and EE-1, 27 cores totaling about 35 m were collected from the Precambrian section. Samples of each different lithology in each core were taken for petrographic and whole-rock major- and trace-element analyses. Whole-rock analyses are now completed on 37 samples. From these data four major Precambrian units were identified at the Fenton Hill site. Geophysical logs and cuttings were used to extrapolate between cores. The most abundant rock type is an extremely variable gneissic unit comprising about 75% of the rock penetrated. This rock is strongly foliated and may range compositionally from syenogranitic to tonalitic over a few centimeters. The bulk of the unit falls within the monzogranite field. Interlayered with the gneiss is a ferrohastingsite-biotite schist which compositionally resembles a basaltic andesite. A fault contact between the schist and gneiss was observed in one core. Intrusive into this metamorphic complex are two igneous rocks. A leucocratic monzogranite occurs as at least two 15-m-thick dikes, and a biotite-granodiorite body was intercepted by 338 m of drill hole. Both rocks are unfoliated and equigranular. The biotite granodiorite is very homogeneous and is characterized by high modal contents of biotite and sphene and by high K2O, TiO2, and P2O5 contents. Although all of the cores examined show fractures, most of these are tightly sealed or healed. Calcite is the most abundant fracture filling mineral, but epidote, quartz, chlorite, clays or sulfides have also been observed. The degree of alteration of the essential minerals normally increases as these fractures are approached. The homogeneity of the biotite granodiorite at the bottom of GT-2 and the high degree of fracture filling ensure an ideal setting for the Hot Dry Rock Experiment.

  16. Estimating Energy Expenditure with ActiGraph GT9X Inertial Measurement Unit.

    PubMed

    Hibbing, Paul R; Lamunion, Samuel R; Kaplan, Andrew S; Crouter, Scott E

    2018-05-01

    The purpose of this study was to explore whether gyroscope and magnetometer data from the ActiGraph GT9X improved accelerometer-based predictions of energy expenditure (EE). Thirty participants (mean ± SD: age, 23.0 ± 2.3 yr; body mass index, 25.2 ± 3.9 kg·m⁻²) volunteered to complete the study. Participants wore five GT9X monitors (right hip, both wrists, and both ankles) while performing 10 activities ranging from rest to running. A Cosmed K4b2 was worn during the trial, as a criterion measure of EE (30-s averages) expressed in METs. Triaxial accelerometer data (80 Hz) were converted to milli-G using Euclidean norm minus one (ENMO; 1-s epochs). Gyroscope data (100 Hz) were expressed as a vector magnitude (GVM) in degrees per second (1-s epochs) and magnetometer data (100 Hz) were expressed as direction changes per 5 s. Minutes 4-6 of each activity were used for analysis. Three two-regression algorithms were developed for each wear location: 1) ENMO, 2) ENMO and GVM, and 3) ENMO, GVM, and direction changes. Leave-one-participant-out cross-validation was used to evaluate the root mean square error (RMSE) and mean absolute percent error (MAPE) of each algorithm. Adding gyroscope to accelerometer-only algorithms resulted in RMSE reductions between 0.0 METs (right wrist) and 0.17 METs (right ankle), and MAPE reductions between 0.1% (right wrist) and 6.0% (hip). When direction changes were added, RMSE changed by ≤0.03 METs and MAPE by ≤0.21%. The combined use of gyroscope and accelerometer at the hip and ankles improved individual-level prediction of EE compared with accelerometer only. For the wrists, adding gyroscope produced negligible changes. The magnetometer did not meaningfully improve estimates for any algorithms.
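
    The abstract describes converting accelerometer samples to ENMO, gyroscope samples to a vector magnitude, and scoring predictions with RMSE and MAPE. The sketch below reproduces those computations on synthetic signals; the signal values, epoch handling and MET numbers are invented, and the study's actual two-regression algorithms are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic raw signals: 80 Hz triaxial accelerometer (g) and 100 Hz gyroscope (deg/s), 1 s each.
acc = rng.normal(0.0, 0.1, size=(80, 3)) + np.array([0.0, 0.0, 1.0])
gyro = rng.normal(0.0, 20.0, size=(100, 3))

# Euclidean norm minus one (ENMO), truncated at zero, averaged over the 1-s epoch, in milli-G.
enmo = np.linalg.norm(acc, axis=1) - 1.0
enmo_mg = np.mean(np.clip(enmo, 0.0, None)) * 1000.0

# Gyroscope vector magnitude (GVM) in deg/s, averaged over the 1-s epoch.
gvm = np.mean(np.linalg.norm(gyro, axis=1))
print(f"ENMO = {enmo_mg:.1f} mG, GVM = {gvm:.1f} deg/s")

# Error metrics used to evaluate the algorithms (MET values invented for illustration).
met_true = np.array([1.2, 3.4, 5.6, 7.8])   # criterion METs from indirect calorimetry
met_pred = np.array([1.5, 3.1, 6.0, 7.2])   # algorithm-predicted METs
rmse = np.sqrt(np.mean((met_pred - met_true) ** 2))
mape = np.mean(np.abs(met_pred - met_true) / met_true) * 100.0
print(f"RMSE = {rmse:.2f} METs, MAPE = {mape:.1f}%")
```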

  17. Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.

    PubMed

    López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana

    2013-11-01

    This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. The national hydrologic bench-mark network

    USGS Publications Warehouse

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  19. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines both based on the NAS Parallel Benchmarks, and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included peak performance of the machine, and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.
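
    As a minimal illustration of the first observation (all benchmarks strongly correlated with peak performance), the sketch below computes Pearson correlations between per-machine benchmark rates and peak performance; the machines and numbers are invented.

```python
import numpy as np

# Rows: machines; columns: peak Gflop/s, LINPACK, and three NPB kernels
# (all numbers invented for illustration).
data = np.array([
    #  peak  LINPACK    EP     CG     MG
    [ 100.0,   55.0,  20.0,   8.0,  15.0],
    [ 200.0,  110.0,  41.0,  14.0,  29.0],
    [ 400.0,  205.0,  83.0,  25.0,  60.0],
    [ 800.0,  390.0, 168.0,  44.0, 118.0],
])
names = ["peak", "LINPACK", "EP", "CG", "MG"]

# Pearson correlation of every benchmark with peak performance.
corr = np.corrcoef(data, rowvar=False)
for name, r in zip(names[1:], corr[0, 1:]):
    print(f"corr({name}, peak) = {r:.3f}")
```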

  20. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  1. Effect of modified pectin molecules on the growth of bone cells.

    PubMed

    Kokkonen, Hanna E; Ilvesaro, Joanna M; Morra, Marco; Schols, Henk A; Tuukkanen, Juha

    2007-02-01

    The aim of this study was to investigate molecular candidates for bone implant nanocoatings, which could improve biocompatibility of implant materials. Primary rat bone cells and murine preosteoblastic MC3T3-E1 cells were cultured on enzymatically modified hairy regions (MHR-A and MHR-B) of apple pectins. MHRs were covalently attached to tissue culture polystyrene (TCPS) or glass. Uncoated substrata or bone slices were used as controls. Cell attachment, proliferation, and differentiation were investigated with fluorescence and confocal microscopy. Bone cells seem to prefer MHR-B coating to MHR-A coating. On MHR-A samples, the overall numbers as well as proportions of active osteoclasts were diminished compared to those on MHR-B, TCPS, or bone. Focal adhesions indicating attachment of the osteoblastic cells were detected on MHR-B and uncoated controls but not on MHR-A. These results demonstrate the possibility to modify surfaces with pectin nanocoatings.

  2. Length of stay benchmarks for inpatient rehabilitation after stroke.

    PubMed

    Meyer, Matthew; Britt, Eileen; McHale, Heather A; Teasell, Robert

    2012-01-01

    In Canada, no standardized benchmarks for length of stay (LOS) have been established for post-stroke inpatient rehabilitation. This paper describes the development of a severity specific median length of stay benchmarking strategy, assessment of its impact after one year of implementation in a Canadian rehabilitation hospital, and establishment of updated benchmarks that may be useful for comparison with other facilities across Canada. Patient data were retrospectively assessed for all patients admitted to a single post-acute stroke rehabilitation unit in Ontario, Canada between April 2005 and March 2008. Rehabilitation Patient Groups (RPGs) were used to establish stratified median length of stay benchmarks for each group that were incorporated into team rounds beginning in October 2009. Benchmark impact was assessed using mean LOS, FIM® gain, and discharge destination for each RPG group, collected prospectively for one year, compared against similar information from the previous calendar year. Benchmarks were then adjusted accordingly for future use. Between October 2009 and September 2010, a significant reduction in average LOS was noted compared to the previous year (35.3 vs. 41.2 days; p < 0.05). Reductions in LOS were noted in each RPG group including statistically significant reductions in 4 of the 7 groups. As intended, reductions in LOS were achieved with no significant reduction in mean FIM® gain or proportion of patients discharged home compared to the previous year. Adjusted benchmarks for LOS ranged from 13 to 48 days depending on the RPG group. After a single year of implementation, severity specific benchmarks helped the rehabilitation team reduce LOS while maintaining the same levels of functional gain and achieving the same rate of discharge to the community. © 2012 Informa UK, Ltd.
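
    A minimal sketch of the severity-specific benchmarking computation described above, grouping invented patient records by Rehabilitation Patient Group and taking the median length of stay per group:

```python
from collections import defaultdict
from statistics import median

# Invented patient records: (RPG group, length of stay in days).
patients = [
    ("RPG-1110", 18), ("RPG-1110", 22), ("RPG-1110", 25),
    ("RPG-1120", 30), ("RPG-1120", 41), ("RPG-1120", 38),
    ("RPG-1130", 47), ("RPG-1130", 52),
]

los_by_group = defaultdict(list)
for group, los in patients:
    los_by_group[group].append(los)

# Severity-specific median LOS benchmark per RPG group.
for group, values in sorted(los_by_group.items()):
    print(f"{group}: median LOS benchmark = {median(values)} days (n={len(values)})")
```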

  3. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.

  4. Benchmarking biology research organizations using a new, dedicated tool.

    PubMed

    van Harten, Willem H; van Bokhorst, Leonard; van Luenen, Henri G A M

    2010-02-01

    International competition forces fundamental research organizations to assess their relative performance. We present a benchmark tool for scientific research organizations where, contrary to existing models, the group leader is placed in a central position within the organization. We used it in a pilot benchmark study involving six research institutions. Our study shows that data collection and data comparison based on this new tool can be achieved. It proved possible to compare relative performance and organizational characteristics and to generate suggestions for improvement for most participants. However, strict definitions of the parameters used for the benchmark and a thorough insight into the organization of each of the benchmark partners is required to produce comparable data and draw firm conclusions.

  5. EPA and EFSA approaches for Benchmark Dose modeling

    EPA Science Inventory

    Benchmark dose (BMD) modeling has become the preferred approach in the analysis of toxicological dose-response data for the purpose of deriving human health toxicity values. The software packages most often used are Benchmark Dose Software (BMDS, developed by EPA) and PROAST (de...

  6. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  7. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it must...

  8. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it must...

  9. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it must...

  10. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it must...

  11. Benchmarking can add up for healthcare accounting.

    PubMed

    Czarnecki, M T

    1994-09-01

    In 1993, a healthcare accounting and finance benchmarking survey of hospital and nonhospital organizations gathered statistics about key common performance areas. A low response did not allow for statistically significant findings, but the survey identified performance measures that can be used in healthcare financial management settings. This article explains the benchmarking process and examines some of the 1993 study's findings.

  12. Benchmarks for Evaluation of Distributed Denial of Service (DDOS)

    DTIC Science & Technology

    2008-01-01

    publications: [1] E. Arikan , Attack Profiling for DDoS Benchmarks, MS Thesis, University of Delaware, August 2006. [2] J. Mirkovic, A. Hussain, B. Wilson...Sigmetrics 2007, June 2007 [5] J. Mirkovic, E. Arikan , S. Wei, S. Fahmy, R. Thomas, and P. Reiher Benchmarks for DDoS Defense Evaluation, Proceedings of the...Security Experimentation, June 2006. [9] J. Mirkovic, E. Arikan , S. Wei, S. Fahmy, R. Thomas, P. Reiher, Benchmarks for DDoS Defense Evaluation

  13. Benchmark matrix and guide: Part II.

    PubMed

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  14. A Competitive Benchmarking Study of Noncredit Program Administration.

    ERIC Educational Resources Information Center

    Alstete, Jeffrey W.

    1996-01-01

    A benchmarking project to measure administrative processes and financial ratios received 57 usable replies from 300 noncredit continuing education programs. Programs with strong financial surpluses were identified and their processes benchmarked (including response to inquiries, registrants, registrant/staff ratio, new courses, class size,…

  15. The Learning Organisation: Results of a Benchmarking Study.

    ERIC Educational Resources Information Center

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristic of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  16. Surveys and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  17. IAEA Coordinated Research Project on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard; Bostelmann, F.

    It was therefore proposed that an IAEA Coordinated Research Project (CRP) on the HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of the previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities, different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.

  18. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates.
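
    The factsheet's description is terse, so the sketch below only illustrates the general idea of averaging fitted quantal dose-response models with Akaike weights and reading off a benchmark dose at a 10% benchmark response; the model forms, parameters and AIC values are invented and are not taken from the MADr-BMD tool.

```python
import numpy as np
from math import erf

# Two hypothetical fitted quantal dose-response models (parameters and AICs invented).
def logistic_model(dose, a=-2.0, b=0.8):
    return 1.0 / (1.0 + np.exp(-(a + b * dose)))

def probit_model(dose, a=-1.2, b=0.45):
    z = a + b * dose
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

aic = {"logistic": 102.3, "probit": 100.1}

# Akaike weights used to average the models.
delta = {m: a - min(aic.values()) for m, a in aic.items()}
weights = {m: np.exp(-0.5 * d) for m, d in delta.items()}
total = sum(weights.values())
weights = {m: w / total for m, w in weights.items()}

# Model-averaged response curve and extra risk relative to background.
doses = np.linspace(0.0, 20.0, 2001)
avg = weights["logistic"] * logistic_model(doses) + weights["probit"] * probit_model(doses)
extra_risk = (avg - avg[0]) / (1.0 - avg[0])

# Benchmark dose at a 10% benchmark response (BMR), found by interpolation.
bmr = 0.10
bmd = np.interp(bmr, extra_risk, doses)
print({m: round(w, 3) for m, w in weights.items()}, f"BMD(10%) ~ {bmd:.2f}")
```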

  19. The Mailbox Computer System for the IAEA verification experiment on HEU downblending at the Portsmouth Gaseous Diffusion Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aronson, A.L.; Gordon, D.M.

    In April 1996, the United States (US) added the Portsmouth Gaseous Diffusion Plant to the list of facilities eligible for the application of International Atomic Energy Agency (IAEA) safeguards. At that time, the US proposed that the IAEA carry out a "verification experiment" at the plant with respect to downblending of about 13 metric tons of highly enriched uranium (HEU) in the form of uranium hexafluoride (UF6). During the period December 1997 through July 1998, the IAEA carried out the requested verification experiment. The verification approach used for this experiment included, among other measures, the entry of process-operational data by the facility operator on a near-real-time basis into a "mailbox" computer located within a tamper-indicating enclosure sealed by the IAEA.

  20. Developing a benchmark for emotional analysis of music

    PubMed Central

    Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER. PMID:28282400

  1. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    PubMed Central

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identifying potential hits, i.e., compounds capable of interacting with a given target and potentially modulate its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compounds subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds that has considerably changed over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoys selection in benchmarking databases as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  2. Developing a benchmark for emotional analysis of music.

    PubMed

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER.

  3. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in construction of robust benchmarks, objective to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion, NetRank and two implementations of Random Walk with Restart, and MaxLink that utilizes network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.

  4. Comparison of the conformation of an oligonucleotide containing a central G-T base pair with the non-mismatch sequence by proton NMR.

    PubMed Central

    Quignard, E; Fazakerley, G V; van der Marel, G; van Boom, J H; Guschlbauer, W

    1987-01-01

    We have recorded NOESY spectra of two non-self-complementary undecanucleotide duplexes. From the observed NOEs we do not detect any significant distortion of the helix when a G-C pair is replaced by a G-T pair, and the normal interresidue connectivities can be followed through the mismatch site. We conclude that the 2D spectra of the non-exchangeable protons do not allow differentiation between a wobble and a rare tautomer form for the mismatch. NOE measurements in H2O, however, clearly show that the mismatch adopts a wobble structure and give information on the hydration in the minor groove for the G-T base pair which is embedded between two A-T base pairs in the sequence. PMID:3033602

  5. Circadian gene expression regulates pulsatile gonadotropin-releasing hormone (GnRH) secretory patterns in the hypothalamic GnRH-secreting GT1-7 cell line.

    PubMed

    Chappell, Patrick E; White, Rachel S; Mellon, Pamela L

    2003-12-03

    Although it has long been established that episodic secretion of gonadotropin-releasing hormone (GnRH) from the hypothalamus is required for normal gonadotropin release, the molecular and cellular mechanisms underlying the synchronous release of GnRH are primarily unknown. We used the GT1-7 mouse hypothalamic cell line as a model for GnRH secretion, because these cells release GnRH in a pulsatile pattern similar to that observed in vivo. To explore possible molecular mechanisms governing secretory timing, we investigated the role of the molecular circadian clock in regulation of GnRH secretion. GT1-7 cells express many known core circadian clock genes, and we demonstrate that oscillations of these components can be induced by stimuli such as serum and the adenylyl cyclase activator forskolin, similar to effects observed in fibroblasts. Strikingly, perturbation of circadian clock function in GT1-7 cells by transient expression of the dominant-negative Clock-Delta19 gene disrupts normal ultradian patterns of GnRH secretion, significantly decreasing mean pulse frequency. Additionally, overexpression of the negative limb clock gene mCry1 in GT1-7 cells substantially increases GnRH pulse amplitude without a commensurate change in pulse frequency, demonstrating that an endogenous biological clock is coupled to the mechanism of neurosecretion in these cells and can regulate multiple secretory parameters. Finally, mice harboring a somatic mutation in the Clock gene are subfertile and exhibit a substantial increase in estrous cycle duration as revealed by examination of vaginal cytology. This effect persists in normal light/dark (LD) cycles, suggesting that a suprachiasmatic nucleus-independent endogenous clock in GnRH neurons is required for eliciting normal pulsatile patterns of GnRH secretion.

  6. Tethyan Anhydrite Preserved in the Lower Ocean Crust of the Samail Ophiolite? Evidence from Oman Drilling Project Holes GT1A and 2A

    NASA Astrophysics Data System (ADS)

    Teagle, D. A. H.; Harris, M.; Crispini, L.; Deans, J. R.; Cooper, M. J.; Kelemen, P. B.; Alt, J.; Banerjee, N.; Shanks, W. C., III

    2017-12-01

    Anhydrite is important in mid-ocean ridge hydrothermal systems because of the high concentrations of calcium and sulfate in modern seawater and anhydrite's retrograde solubility. Because anhydrite hosts many powerful tracers of fluid-rock interactions (87Sr/86Sr, δ18O, δ34S, trace elements, fluid inclusions) it is useful for tracing the chemical evolution of hydrothermal recharge fluids and estimating time-integrated fluid fluxes. Anhydrite can form from heated seawater (>100°C), through water-rock reaction, or by mixing of seawater and hydrothermal fluids. Although abundant in active hydrothermal mounds, and predicted to form from downwelling, warming fluids during convection, anhydrite is rare in drill core from seafloor lavas, sheeted dikes and upper gabbros, with only minor amounts in ODP Holes 504B and 1256D. Because anhydrite can dissolve during weathering, its occurrence in ophiolites is unexpected. Instead, gypsum is present in Macquarie Island lavas and Miocene gypsum fills cavities within the Cretaceous Troodos ore deposits. Thus, the occurrence of numerous anhydrite veins in cores from the gabbroic lower crust of the Samail ophiolite in Oman was unanticipated. To our knowledge, anhydrite in Oman gabbros has not been previously reported. Oman Drilling Project Holes GT1A and GT2A were drilled into the Wadi Gideah section of the Wadi Tayin massif. Both recovered 400 m of continuous core from sections of layered gabbros (GT1) and the foliated-layered gabbro transition (GT2). Anhydrite is present throughout both holes, some in vein networks but more commonly as isolated 1-110 mm veins (>60 mm ave). Anhydrite is mostly the sole vein filling but can occur with greenschist minerals such as epidote, quartz, chlorite and prehnite. Anhydrite commonly exhibits prismatic and bladed textures but can also be capriciously microcrystalline. Though definitive cross cutting relationships are elusive, anhydrite veins cut across some greenschist veins. Anhydrite is

  7. NAS Grid Benchmarks: A Tool for Grid Space Exploration

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.

  8. Seismo-acoustic ray model benchmarking against experimental tank data.

    PubMed

    Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo

    2012-08-01

    Acoustic predictions of the recently developed traceo ray model, which accounts for bottom shear properties, are benchmarked against tank experimental data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked in similar conditions. The results of benchmarking are important, on one side, as a preliminary experimental validation of the model and, on the other side, demonstrates the reliability of the ray approach for seismo-acoustic applications.

  9. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. This specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed to what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  10. Expression, purification, crystallization and preliminary X-ray characterization of a putative glycosyltransferase of the GT-A fold found in mycobacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, Zara; Crellin, Paul K.; Brammananth, Rajini

    2008-05-28

    Glycosidic bond formation is a ubiquitous enzyme-catalysed reaction. This glycosyltransferase-mediated process is responsible for the biosynthesis of innumerable oligosaccharides and glycoconjugates and is often organism- or cell-specific. However, despite the abundance of genomic information on glycosyltransferases (GTs), there is a lack of structural data for this versatile class of enzymes. Here, the cloning, expression, purification and crystallization of an essential 329-amino-acid (34.8 kDa) putative GT of the classic GT-A fold implicated in mycobacterial cell-wall biosynthesis are reported. Crystals of MAP2569c from Mycobacterium avium subsp. paratuberculosis were grown in 1.6 M monoammonium dihydrogen phosphate and 0.1 M sodium citrate pH 5.5. A complete data set was collected to 1.8 Å resolution using synchrotron radiation from a crystal belonging to space group P41212.

  11. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    PubMed

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as opportunity and risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. These hospitals were assigned to reference groups of a similar size and ownership from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.

  12. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    PubMed

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nation Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.
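
    A minimal sketch of the two cost-efficiency indicators used in the study and of estimating how much of the per-episode cost variance intake severity explains; all figures are invented.

```python
import numpy as np

# Invented service-level data: total community-care cost, treatment hours and episodes.
total_cost = 1_250_000.0
treatment_hours = 5600.0
episodes = 380.0

cost_per_hour = total_cost / treatment_hours
cost_per_episode = total_cost / episodes
print(f"cost per treatment hour = ${cost_per_hour:.0f}, cost per episode = ${cost_per_episode:.0f}")

# Share of per-episode cost variance explained by intake severity (HoNOSCA),
# estimated as the squared correlation from a simple linear fit (invented data).
honosca = np.array([8, 12, 15, 17, 20, 24, 27, 30], dtype=float)
episode_cost = np.array([900, 1500, 1200, 2600, 2100, 4200, 3100, 5200], dtype=float)
r = np.corrcoef(honosca, episode_cost)[0, 1]
print(f"variance explained by HoNOSCA: {r**2:.1%}")
```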

  13. Mycoplasma hyorhinis is a potential pathogen of porcine respiratory disease complex that aggravates pneumonia caused by porcine reproductive and respiratory syndrome virus.

    PubMed

    Lee, Jung-Ah; Oh, Yu-Ri; Hwang, Min-A; Lee, Joong-Bok; Park, Seung-Yong; Song, Chang-Seon; Choi, In-Soo; Lee, Sang-Won

    2016-09-01

    The porcine respiratory disease complex (PRDC) caused by numerous bacterial and viral agents has a great impact on pig industry worldwide. Although Mycoplasma hyorhinis (Mhr) has been frequently isolated from lung lesions from pigs with PRDC, the pathological importance of Mhr may have been underestimated. In this study, 383 serum samples obtained from seven herds with a history of PRDC were tested for specific antibodies to Mhr, Mycoplasma hyopneumoniae (Mhp), and porcine reproductive and respiratory syndrome virus (PRRSV). Seropositive rates of PRRSV were significantly correlated with those of Mhr (correlation coefficient, 0.862; P-value, 0.013), but not with those of Mhp (correlation coefficient, -0.555; P-value, 0.196). In vivo experiments demonstrated that pigs co-infected with Mhr and PRRSV induced more severe lung lesions than pigs infected with Mhr or PRRSV alone. These findings suggest that Mhr is closely associated with pneumonia caused by PRRSV and provide important information on Mhr pathogenesis within PRDC. Therefore, effective PRDC control strategies should also consider the potential impact of Mhr in the pathogenesis of PRDC. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
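    A minimal sketch of the internal-benchmarking idea described above: measured amounts of each test chemical are corrected by the fractional recovery of an absorbable benchmark (PCB53, in fish) and a non-absorbable benchmark (decabromodiphenyl ethane, in feces). The final efficiency formula and all numbers are illustrative assumptions, not the authors' exact equations or data.

```python
def benchmark_corrected(amount, benchmark_recovery):
    """Scale a measured amount by the fractional recovery of the benchmark chemical."""
    return amount / benchmark_recovery

fish_amount = 12.0      # ng of test chemical found in fish tissue (example value)
feces_amount = 30.0     # ng of test chemical found in feces (example value)
pcb53_recovery = 0.80   # fraction of PCB53 recovered in fish (example value)
dbdpe_recovery = 0.60   # fraction of decabromodiphenyl ethane recovered in feces (example value)

fish_corr = benchmark_corrected(fish_amount, pcb53_recovery)
feces_corr = benchmark_corrected(feces_amount, dbdpe_recovery)

# One plausible formulation: absorbed fraction of the total recovered, after correction.
absorption_efficiency = fish_corr / (fish_corr + feces_corr)
print(f"benchmark-corrected absorption efficiency: {absorption_efficiency:.2f}")
```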

  15. INF and IAEA: A comparative analysis of verification strategy. [Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinman, L.; Kratzer, M.

    1992-07-01

    This is the final report of a study on the relevance and possible lessons of Intermediate Range Nuclear Force (INF) verification to the International Atomic Energy Agency (IAEA) international safeguards activities.

  16. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    ERIC Educational Resources Information Center

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  17. [Benchmarking and other functions of ROM: back to basics].

    PubMed

    Barendregt, M

    2015-01-01

    Since 2011, outcome data in Dutch mental health care have been collected on a national scale. This has led to confusion about the position of benchmarking in the system known as routine outcome monitoring (rom). To provide insight into the various objectives and uses of aggregated outcome data. A qualitative review was performed and the findings were analysed. Benchmarking is a strategy for finding best practices and for improving efficacy and it belongs to the domain of quality management. Benchmarking involves comparing outcome data by means of instrumentation and is relatively tolerant with regard to the validity of the data. Although benchmarking is a function of rom, it must be differentiated from other functions of rom. Clinical management, public accountability, research, payment for performance and information for patients are all functions of rom which require different ways of data feedback and which make different demands on the validity of the underlying data. Benchmarking is often wrongly regarded as being simply a synonym for 'comparing institutions'. It is, however, a method which includes many more factors; it can be used to improve quality and has a more flexible approach to the validity of outcome data and is less concerned than other rom functions about funding and the amount of information given to patients. Benchmarking can make good use of currently available outcome data.

  18. GT3X+ accelerometer placement affects the reliability of step-counts measured during running and pedal-revolution counts measured during bicycling.

    PubMed

    Gatti, Anthony A; Stratford, Paul W; Brenneman, Elora C; Maly, Monica R

    2016-01-01

    Accelerometers provide a measure of step-count. Reliability and validity of step-count and pedal-revolution count measurements by the GT3X+ accelerometer, placed at different anatomical locations, are absent from the literature. The purpose of this study was to investigate the reliability and validity of step and pedal-revolution counts produced by the GT3X+ placed at different anatomical locations during running and bicycling. Twenty-two healthy adults (14 men and 8 women) completed running and bicycling activity bouts (5 minutes each) while wearing 6 accelerometers: 2 each at the waist, thigh and shank. Accelerometer and video data were collected during activity. Excellent reliability and validity were found for measurements taken from accelerometers mounted at the waist and shank during running (Reliability: intraclass correlation (ICC) ≥ 0.99; standard error of measurement (SEM) ≤1.0 steps; Pearson ≥ 0.99) and at the thigh and shank during bicycling (Reliability: ICC ≥ 0.99; SEM ≤1.0 revolutions; Pearson ≥ 0.99). Excellent reliability was found between measurements taken at the waist and shank during running (ICC ≥ 0.98; SEM ≤1.6 steps) and between measurements taken at the thigh and shank during bicycling (ICC ≥ 0.99; SEM ≤1.0 revolutions). These data suggest that the GT3X+ can be used for measuring step-count during running and pedal-revolution count during bicycling. Only shank placement is recommended for both activities.
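    For readers unfamiliar with the reliability statistics quoted above, the standard error of measurement is commonly derived from the between-trial standard deviation and the intraclass correlation as SEM = SD · sqrt(1 − ICC). The sketch below applies that relation to illustrative values, not the study's data.

```python
import math

sd_steps = 10.0   # between-trial SD of step counts (illustrative value)
icc = 0.99        # intraclass correlation (illustrative value)

# Standard relation between SEM, SD and ICC.
sem = sd_steps * math.sqrt(1 - icc)
print(f"SEM ~ {sem:.1f} steps")   # with these inputs, about 1.0 step
```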

  19. Benchmarking with the BLASST Sessional Staff Standards Framework

    ERIC Educational Resources Information Center

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  20. Thermal Performance Benchmarking: Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  1. Integration of a wave rotor to an ultra-micro gas turbine (UmuGT)

    NASA Astrophysics Data System (ADS)

    Iancu, Florin

    2005-12-01

    Wave rotor technology has shown a significant potential for performance improvement of thermodynamic cycles. The wave rotor is an unsteady flow machine that utilizes shock waves to transfer energy from a high energy fluid to a low energy fluid, increasing both the temperature and the pressure of the low energy fluid. Used initially as a high pressure stage for a gas turbine locomotive engine, the wave rotor was commercialized only as a supercharging device for internal combustion engines, but recently there has been a stronger research effort on implementing wave rotors as topping units or pressure gain combustors for gas turbines. At the same time, Ultra Micro Gas Turbines (UmuGT) are expected to be a next generation of power source for applications from propulsion to power generation, from aerospace industry to electronic industry. Starting in 1995, with the MIT "Micro Gas Turbine" project, the mechanical engineering research world has explored more and more the idea of "Power MEMS". Microfabricated turbomachinery like turbines, compressors, pumps, but also electric generators, heat exchangers, internal combustion engines and rocket engines have been on the focus list of researchers for the past 10 years. The reason is simple: the output power is proportional to the mass flow rate of the working fluid through the engine, and hence to the cross-sectional area, while the mass or volume of the engine is proportional to the cube of the characteristic length; thus the power density tends to increase at small scales (Power/Mass ∝ L⁻¹). This is the so-called "cube-square law". This work investigates the possibilities of incorporating a wave rotor into an UmuGT and discusses the advantages of wave rotors as topping units for gas turbines, especially at microscale. Based on documented wave rotor efficiencies at larger scale and supported by both a gasdynamic model that includes wall friction and a CFD model, the wave rotor compression efficiency at microfabrication scale could be estimated
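    A minimal sketch of the cube-square scaling argument quoted above: output power scales with a cross-sectional area (~L²) while mass scales with volume (~L³), so power density scales as ~1/L and grows as the device shrinks. The numbers are purely illustrative.

```python
# Illustrative cube-square law scaling: power density ~ 1/L.
for L in (1.0, 0.1, 0.01):          # characteristic length in metres (example values)
    power = L**2                    # proportional to flow cross-section
    mass = L**3                     # proportional to volume
    print(f"L = {L:5.2f} m -> power/mass ~ {power / mass:8.1f} (arbitrary units)")
```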

  2. Comparison of Origin 2000 and Origin 3000 Using NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Turney, Raymond D.

    2001-01-01

    This report describes results of benchmark tests on the Origin 3000 system currently being installed at the NASA Ames National Advanced Supercomputing facility. This machine will ultimately contain 1024 R14K processors. The first part of the system, installed in November 2000 and named mendel, is an Origin 3000 with 128 R12K processors. For comparison purposes, the tests were also run on lomax, an Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to determine system performance and measure the impact of changes on the machine as it evolves. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN 3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  3. Oman Drilling Project GT3 site survey: dynamics at the roof of an oceanic magma chamber

    NASA Astrophysics Data System (ADS)

    France, L.; Nicollet, C.; Debret, B.; Lombard, M.; Berthod, C.; Ildefonse, B.; Koepke, J.

    2017-12-01

    Oman Drilling Project (OmanDP) aims at bringing new constraints on oceanic crust accretion and evolution by drilling holes in the whole ophiolite section (mantle and crust). Among those, operations at GT3 in the Sumail massif drilled 400 m to sample the dike - gabbro transition that corresponds to the top (gabbros) and roof (dikes) of the axial magma chamber, an interface where the hydrothermal and magmatic systems interact. Previous studies based on oceanic crust formed at present day fast-spreading ridges and preserved in ophiolites have highlighted that this interface is a dynamic horizon where the axial melt lens that tops the main magma chamber can intrude, reheat, and partially assimilate previously hydrothermally altered roof rocks. Here we present the preliminary results obtained in the GT3 area that have allowed the community to choose the drilling site. We provide a geological and structural map of the area, together with new petrographic and chemical constraints on the dynamics of the dike - gabbro transition. Our new results allow us to quantify the dynamic processes, and to propose that 1/ the intrusive contact of the varitextured gabbro within the dikes highlights the intrusion of the melt lens top in the dike rooting zone, 2/ both dikes and previously crystallized gabbros are reheated, and recrystallized by underlying melt lens dynamics (up to 1050°C, largely above the hydrous solidus temperature of altered dikes and gabbros), 3/ the reheating range can be > 200°C, 4/ the melt lens depth variations for a given ridge position are > 200 m, 5/ the reheating stage and associated recrystallization within the dikes occurred under hydrous conditions, 6/ the reheating stage is recorded at the root zone of the sheeted dike complex by one of the highest stable conductive thermal gradients ever recorded on Earth (~3°C/m), 7/ local chemical variations in recrystallized dikes and gabbros are highlighted and used to quantify crystallization and anatectic processes, and the

  4. OWL2 benchmarking for the evaluation of knowledge based systems.

    PubMed

    Khan, Sher Afgun; Qadir, Muhammad Abdul; Abbas, Muhammad Azeem; Afzal, Muhammad Tanvir

    2017-01-01

    OWL2 semantics are becoming increasingly popular for real-domain applications such as gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this identified research gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. The proposed benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, end users (i.e. domain experts) would be able to select a KBS appropriate for their domain.

  5. Benchmarking and beyond. Information trends in home care.

    PubMed

    Twiss, Amanda; Rooney, Heather; Lang, Christine

    2002-11-01

    With today's benchmarking concepts and tools, agencies have the unprecedented opportunity to use information as a strategic advantage. Because agencies are demanding more and better information, benchmark functionality has grown increasingly sophisticated. Agencies now require a new type of analysis, focused on high-level executive summaries while reducing the current "data overload."

  6. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    PubMed

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principle debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  7. Colletotrichine B, a new sesquiterpenoid from Colletotrichum gloeosporioides GT-7, a fungal endophyte of Uncaria rhynchophylla.

    PubMed

    Chen, Xiao-Wei; Yang, Zhong-Duo; Li, Xiao-Fei; Sun, Jian-Hui; Yang, Li-Jun; Zhang, Xin-Guo

    2018-02-08

    One new compound, colletotrichine B (1), was produced by the fungus Colletotrichum gloeosporioides GT-7. The structure of 1 was elucidated on the basis of spectroscopic analysis and X-ray crystallographic analysis. Monoamine oxidase (MAO), acetylcholinesterase (AChE) and phosphoinositide 3-kinase (PI3Kα) inhibitory activity of 1 was also evaluated. Compound 1 showed only AChE-inhibiting activity, with an IC50 value of 38.0 ± 2.67 μg/mL.

  8. Benchmark Dataset for Whole Genome Sequence Compression.

    PubMed

    C L, Biji; S Nair, Achuthsankar

    2017-01-01

    Research in DNA data compression lacks a standard dataset for testing compression tools specific to DNA. This paper argues that the current state of achievement in DNA compression cannot be benchmarked in the absence of such a scientifically compiled whole-genome sequence dataset and proposes a benchmark dataset using a multistage sampling procedure. Considering the genome sequences of organisms available in the National Center for Biotechnology Information (NCBI) as the universe, the proposed dataset selects 1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes. This paper reports the results of using three established tools on the newly compiled dataset and shows that their strengths and weaknesses become evident only with a comparison based on the scientifically compiled benchmark dataset. The sample dataset and the respective links are available @ https://sourceforge.net/projects/benchmarkdnacompressiondataset/.

  9. Scalable randomized benchmarking of non-Clifford gates

    NASA Astrophysics Data System (ADS)

    Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay

    Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
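    As context for the abstract above, randomized-benchmarking experiments typically fit the average sequence fidelity to an exponential decay F(m) = A·p^m + B and convert the decay parameter to an average error per group element, r = (1 − p)(d − 1)/d with d = 2^n. The sketch below fits synthetic data to that standard model; it is a generic illustration, not the authors' protocol for non-Clifford gates.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    # Standard randomized-benchmarking decay model: F(m) = A * p**m + B.
    return A * p**m + B

rng = np.random.default_rng(0)
m = np.arange(1, 101, 5)                      # sequence lengths
true_A, true_p, true_B = 0.5, 0.985, 0.5
F = decay(m, true_A, true_p, true_B) + rng.normal(0, 0.005, m.size)  # synthetic fidelities

(A, p, B), _ = curve_fit(decay, m, F, p0=(0.5, 0.98, 0.5))
n = 1                                         # number of qubits (illustrative)
d = 2**n
r = (1 - p) * (d - 1) / d                     # average error per group element
print(f"fitted p = {p:.4f}, average error per element r = {r:.4f}")
```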

  10. The impact of the International Atomic Energy Agency (IAEA) program on radiation and tissue banking in Argentina.

    PubMed

    Kairiyama, Eulogia; Morales Pedraza, Jorge

    2009-05-01

    Tissue banking activities in Argentina started in 1993. The national regulatory and controlling authority for organ, tissue and cell transplantation is the National Unique Coordinating Central Institute for Ablation and Implant (INCUCAI). Three tissue banks were established under the IAEA program and nine other banks participated actively in the implementation of this program. As a result of the implementation of the IAEA program in Argentina and the work done by the established tissue banks, more and more hospitals are now using, in a routine manner, radiation sterilised tissues processed by these banks. During the period 1992-2005, more than 21 016 tissues were produced and irradiated in the tissue banks participating in the IAEA program. Within the framework of the training component of the IAEA program, Argentina has been selected to host the Regional Training Centre for Latin America. In this centre, tissue bank operators and medical personnel from Latin American countries were trained. Since 1999, Argentina has organised four regular regional training courses and two virtual regional training courses. More than twenty (20) tissue bank operators and medical personnel from Argentina were trained under the IAEA program in the six courses organised in the country. In total, ninety-six (96) tissue bank operators and medical personnel from eight Latin-American countries were trained in the Buenos Aires regional training centre. Sixteen students from Argentina graduated from these courses.

  11. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  12. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  13. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  14. Data Race Benchmark Collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Chunhua; Lin, Pei-Hung; Asplund, Joshua

    2017-03-21

    This project is a benchmark suite of OpenMP parallel codes that have been checked for data races. The programs are marked to show which do and do not have races. This allows them to be leveraged while testing and developing race detection tools.

  15. The Medical Library Association Benchmarking Network: development and implementation.

    PubMed

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program.

  16. Training in Tbilisi nuclear facility provides new sampling perspectives for IAEA inspectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brim, Cornelia P.

    2016-06-08

    Office of Nonproliferation and Arms Control (NPAC)-sponsored training in a “cold” nuclear facility in Tbilisi, Georgia provides International Atomic Energy Agency (IAEA) inspectors with a new perspective on environmental sampling strategies. Sponsored by the Nuclear Safeguards program under the NPAC, Pacific Northwest National Laboratory (PNNL) experts have been conducting an annual weeklong class for IAEA inspectors in a closed nuclear facility since 2011. The Andronikashvili Institute of Physics and the Republic of Georgia collaborate with PNNL to provide the training, and the U.S. Department of State, the U.S. Embassy in Tbilisi and the U.S. Mission to International Organizations in Vienna provide logistical support.

  17. Implementation and verification of global optimization benchmark problems

    NASA Astrophysics Data System (ADS)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, using a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that the literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
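    One kind of automatic check such a verification suite can run is a comparison of a benchmark's hand-coded gradient against a central finite difference at random points. The sketch below does this for the Rosenbrock function, which is used here only as a convenient stand-in and is not necessarily one of the 150 benchmarks in the suite.

```python
import numpy as np

def f(x):
    # Rosenbrock function in two variables (stand-in benchmark).
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad_f(x):
    # Hand-coded analytic gradient of the Rosenbrock function.
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def numeric_grad(fun, x, h=1e-6):
    # Central finite-difference approximation of the gradient.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=2)
assert np.allclose(grad_f(x), numeric_grad(f, x), atol=1e-4)
print("analytic and numerical gradients agree at", x)
```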

  18. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operation.

  19. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  20. IAEA activities on atomic, molecular and plasma-material interaction data for fusion

    NASA Astrophysics Data System (ADS)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2013-09-01

    The IAEA Atomic and Molecular Data Unit (http://www-amdis.iaea.org/) aims to provide internationally evaluated and recommended data for atomic, molecular and plasma-material interaction (A+M+PMI) processes in fusion research. The Unit organizes technical meetings and coordinates an A+M Data Centre Network (DCN) and a Code Centre Network (CCN). In addition the Unit organizes Coordinated Research Projects (CRPs), for which the objectives are mixed between development of new data and evaluation and recommendation of existing data. In the area of A+M data we are placing new emphasis in our meeting schedule on data evaluation and especially on uncertainties in calculated cross section data and the propagation of uncertainties through structure data and fundamental cross sections to effective rate coefficients. Following a recent meeting of the CCN it is intended to use electron scattering on Be, Ne and N2 as exemplars for study of uncertainties and uncertainty propagation in calculated data; this will be discussed further at the presentation. Please see http://www-amdis.iaea.org/CRP/ for more on our active and planned CRPs, which are concerned with atomic processes in core and edge plasma and with plasma interaction with beryllium-based surfaces and with irradiated tungsten.

  1. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    USGS Publications Warehouse

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical
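    The benchmark-quotient calculation described above can be illustrated with a few lines of arithmetic: each detected pesticide concentration is divided by its benchmark and the quotients are summed for the mixture. The concentrations and benchmark values below are invented placeholders, not the published TEBs or LEBs.

```python
# Hypothetical detected concentrations and benchmarks (same units), for illustration only.
sample = {"bifenthrin": 4.0, "chlorpyrifos": 2.5}
teb = {"bifenthrin": 3.0, "chlorpyrifos": 5.0}

# Summed quotient: each concentration divided by its Threshold Effect Benchmark.
summed_quotient = sum(conc / teb[pesticide] for pesticide, conc in sample.items())
print(f"summed TEB quotient: {summed_quotient:.2f}")

# Illustrative reading: a summed quotient above 1 suggests the mixture reaches a
# benchmark-equivalent exposure and toxicity becomes more likely.
if summed_quotient > 1:
    print("mixture exceeds a benchmark-equivalent exposure")
```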

  2. Toward the improvement in fetal monitoring during labor with the inclusion of maternal heart rate analysis.

    PubMed

    Gonçalves, Hernâni; Pinto, Paula; Silva, Manuela; Ayres-de-Campos, Diogo; Bernardes, João

    2016-04-01

    Fetal heart rate (FHR) monitoring is used routinely in labor, but conventional methods have a limited capacity to detect fetal hypoxia/acidosis. An exploratory study was performed on the simultaneous assessment of maternal heart rate (MHR) and FHR variability, to evaluate their evolution during labor and their capacity to detect newborn acidemia. MHR and FHR were simultaneously recorded in 51 singleton term pregnancies during the last two hours of labor and compared with newborn umbilical artery blood (UAB) pH. Linear/nonlinear indices were computed separately for MHR and FHR. Interaction between MHR and FHR was quantified through the same indices on FHR-MHR and through their correlation and cross-entropy. Univariate and bivariate statistical analyses included nonparametric confidence intervals and statistical tests, receiver operating characteristic curves and linear discriminant analysis. Progression of labor was associated with a significant increase in most MHR and FHR linear indices, whereas entropy indices decreased. FHR alone and in combination with MHR as FHR-MHR evidenced the highest auROC values for prediction of fetal acidemia, with 0.76 and 0.88 for the UAB pH thresholds 7.20 and 7.15, respectively. The inclusion of MHR in the bivariate analysis achieved sensitivity and specificity values of nearly 100% and 89.1%, respectively. These results suggest that simultaneous analysis of MHR and FHR may improve the identification of fetal acidemia compared with FHR alone, namely during the last hour of labor.
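    The auROC values quoted above can be retraced conceptually with the rank-based (Mann-Whitney) formulation of the area under the ROC curve. The sketch below applies it to synthetic scores and labels, which are placeholders rather than the study's recordings or its actual indices.

```python
import numpy as np

def auc(scores, labels):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Probability that a random positive case scores above a random negative case,
    # counting ties as one half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Synthetic index values and acidemia labels (1 = UAB pH below threshold).
index = [0.9, 0.7, 0.8, 0.3, 0.4, 0.2, 0.6, 0.1]
acidemia = [1, 1, 0, 0, 1, 0, 1, 0]
print(f"auROC = {auc(index, acidemia):.2f}")
```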

  3. Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.

    ERIC Educational Resources Information Center

    Inger, Morton

    Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…

  4. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    PubMed Central

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principle debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  5. MIPS bacterial genomes functional annotation benchmark dataset.

    PubMed

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as benchmark) as well as tedious preparatory work to generate sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  6. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate... planning services and supplies and other appropriate preventive services, as designated by the Secretary... State for purposes of comparison in establishing the aggregate actuarial value of the benchmark...

  7. Benchmarking in pathology: development of a benchmarking complexity unit and associated key performance indicators.

    PubMed

    Neil, Amanda; Pfeffer, Sally; Burnett, Leslie

    2013-01-01

    This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
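    A minimal sketch of the construction described above, under the assumption that per-test, per-site complexity is a wage-weighted combination of medical and technical staff time and that the cross-site median becomes the test's benchmarking complexity unit (BCU). The weighting factor, sites, tests and times are all invented for illustration and are not the programme's calibrated values.

```python
from statistics import median

WAGE_WEIGHT = 3.0   # assumed relative cost of medical vs technical staff time

# Minutes of (medical, technical) staff time per test, by site (invented numbers).
site_times = {
    "site_A": {"HbA1c": (1.0, 6.0), "histopathology": (20.0, 30.0)},
    "site_B": {"HbA1c": (0.5, 7.0), "histopathology": (25.0, 28.0)},
    "site_C": {"HbA1c": (1.5, 5.0), "histopathology": (18.0, 35.0)},
}

def site_complexity(medical_min, technical_min):
    # Wage-weighted combination of staff time for one test at one site.
    return WAGE_WEIGHT * medical_min + technical_min

bcu = {}
for test in ("HbA1c", "histopathology"):
    values = [site_complexity(*times[test]) for times in site_times.values()]
    bcu[test] = median(values)   # cross-site median becomes the test's BCU

print(bcu)   # median complexity value per test across the three sites
```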

  8. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    NASA Astrophysics Data System (ADS)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  9. Trip report on IAEA Training Workshop on Implementation of Integrated Management Systems for Research Reactors (T3-TR-45496).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, Richard J.

    2013-11-01

    From 17-21 June 2013, Sandia National Laboratories, Technical Area-V (SNL TA-V) represented the United States Department of Energy/National Nuclear Security Administration (DOE/NNSA) at the International Atomic Energy Agency (IAEA) Training Workshop (T3-TR-45486). This report gives a breakdown of the IAEA regulatory structure for those unfamiliar, and the lessons learned and observations that apply to SNL TA-V that were obtained from the workshop. The Safety Report Series, IAEA workshop final report, and SNL TA-V presentation are included as attachments.

  10. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    ERIC Educational Resources Information Center

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy when comparing fractions with fifth-graders in Taiwan. Twenty-six fifth graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made much progress in the use of the benchmark strategy when comparing fractions…

  11. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  12. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  13. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  14. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  15. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the most...

  16. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    USDA-ARS?s Scientific Manuscript database

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  17. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  18. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  19. The Medical Library Association Benchmarking Network: development and implementation*

    PubMed Central

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program. PMID:16636702

  20. Modification and benchmarking of MCNP for low-energy tungsten spectra.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-12-01

    The MCNP Monte Carlo radiation transport code was modified for diagnostic medical physics applications. In particular, the modified code was thoroughly benchmarked for the production of polychromatic tungsten x-ray spectra in the 30-150 kV range. Validating the modified code for coupled electron-photon transport with benchmark spectra was supplemented with independent electron-only and photon-only transport benchmarks. Major revisions to the code included the proper treatment of characteristic K x-ray production and scoring, new impact ionization cross sections, and new bremsstrahlung cross sections. Minor revisions included updated photon cross sections, electron-electron bremsstrahlung production, and K x-ray yield. The modified MCNP code is benchmarked to electron backscatter factors, x-ray spectra production, and primary and scatter photon transport.

  1. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  2. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  3. Benchmarking FEniCS for mantle convection simulations

    NASA Astrophysics Data System (ADS)

    Vynnytska, L.; Rognes, M. E.; Clark, S. R.

    2013-01-01

    This paper evaluates the usability of the FEniCS Project for mantle convection simulations by numerical comparison to three established benchmarks. The benchmark problems all concern convection processes in an incompressible fluid induced by temperature or composition variations, and cover three cases: (i) steady-state convection with depth- and temperature-dependent viscosity, (ii) time-dependent convection with constant viscosity and internal heating, and (iii) a Rayleigh-Taylor instability. These problems are modeled by the Stokes equations for the fluid and advection-diffusion equations for the temperature and composition. The FEniCS Project provides a novel platform for the automated solution of differential equations by finite element methods. In particular, it offers a significant flexibility with regard to modeling and numerical discretization choices; we have here used a discontinuous Galerkin method for the numerical solution of the advection-diffusion equations. Our numerical results are in agreement with the benchmarks, and demonstrate the applicability of both the discontinuous Galerkin method and FEniCS for such applications.

  4. A One-group, One-dimensional Transport Benchmark in Cylindrical Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barry Ganapol; Abderrafi M. Ougouag

    A 1-D, 1-group computational benchmark in cylindrical geometry is described. This neutron transport benchmark is useful for evaluating reactor concepts that possess azimuthal symmetry, such as a pebble-bed reactor.

  5. Key performance indicators to benchmark hospital information systems - a delphi study.

    PubMed

    Hübner-Bloder, G; Ammenwerth, E

    2009-01-01

    To identify the key performance indicators for hospital information systems (HIS) that can be used for HIS benchmarking. A Delphi survey with one qualitative and two quantitative rounds. Forty-four HIS experts from health care IT practice and academia participated in all three rounds. Seventy-seven performance indicators were identified and organized into eight categories: technical quality, software quality, architecture and interface quality, IT vendor quality, IT support and IT department quality, workflow support quality, IT outcome quality, and IT costs. The highest-ranked indicators are related to clinical workflow support and user satisfaction. Isolated technical indicators or cost indicators were not seen as useful. The experts favored an interdisciplinary group of all stakeholders, led by hospital management, to conduct the HIS benchmarking. They proposed benchmarking activities both at regular (annual) intervals and at defined events (for example, after IT introduction). Most of the experts stated that no HIS benchmarking activities are currently being performed in their institutions. In the context of IT governance, IT benchmarking is gaining importance in the healthcare area. The indicators identified reflect the view of health care IT professionals and researchers. Research is needed to further validate and operationalize key performance indicators, to provide an IT benchmarking framework, and to provide open repositories for a comparison of the HIS benchmarks of different hospitals.

  6. Association of two Common Single Nucleotide Polymorphisms (+45T/G and +276G/T) of ADIPOQ Gene with Coronary Artery Disease in Type 2 Diabetic Patients

    PubMed Central

    Mohammadzadeh, Ghorban; Ghaffari, Mohammad-Ali; Heibar, Habib; Bazyar, Mohammad

    2016-01-01

    Background: Adiponectin, an adipocyte-secreted hormone, is known to have anti-atherogenic, anti-inflammatory, and anti-diabetic properties. In the present study, the association between two common single nucleotide polymorphisms (SNPs) (+45T/G and +276G/T) of the ADIPOQ gene and coronary artery disease (CAD) was assessed in subjects with type 2 diabetes (T2DM). Methods: Genotypes of the two SNPs were determined by polymerase chain reaction-restriction fragment length polymorphism in 200 subjects with T2DM (100 subjects with CAD and 100 without CAD). Results: The frequency of the TT genotype of +276G/T was significantly elevated in CAD compared to controls (χ2=7.967, P=0.019). A similar difference was found in the allele frequency of +276G/T between the two groups (χ2=3.895, P=0.048). An increased risk of CAD was associated with the +276 TT genotype when compared to the reference GG genotype (OR=5.158; 95% CI=1.016-26.182, P=0.048). However, no similar difference was found in the genotype and allele frequencies of SNP +45T/G between the two groups. There was a CAD-protective haplotype combination of the +276 wild-type and +45 mutant-type alleles (276G-45G) (OR=0.37, 95% CI=0.16-0.86, P=0.022) in the subject population. Conclusion: Our findings indicated that the T allele of SNP +276G/T is associated with an increased risk of CAD in subjects with T2DM. Also, the +45G/+276G haplotype combination of these two SNPs has a protective effect on the risk of CAD. PMID:26781170
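
    To make the reported statistics concrete, the sketch below shows one common way to compute a genotype association test and an odds ratio with a Wald-type 95% confidence interval for such a case-control comparison. The genotype counts are invented for illustration and are not the study's data.

```python
# Hedged sketch: genotype association test and odds ratio for a case-control
# SNP study. The genotype counts below are invented for illustration only;
# they are NOT the counts reported in the paper.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: CAD cases, controls; columns: GG, GT, TT genotypes at +276G/T
table = np.array([[40, 45, 15],
                  [50, 45,  5]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"genotype association: chi2={chi2:.3f}, dof={dof}, p={p:.3f}")

# Odds ratio of CAD for TT versus the GG reference genotype,
# with a Wald-type 95% confidence interval on the log scale.
a, b = table[0, 2], table[0, 0]   # cases: TT, GG
c, d = table[1, 2], table[1, 0]   # controls: TT, GG
or_tt = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low, ci_high = np.exp(np.log(or_tt) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR(TT vs GG) = {or_tt:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```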

  7. A proposed benchmark problem for cargo nuclear threat monitoring

    NASA Astrophysics Data System (ADS)

    Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring, with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions in a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form, while the third is a cube. The entire system rests on a sufficiently thick lead base to reduce undesired scattering events. The configuration is arranged such that, as a gamma ray moves outward from the source, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
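
    The benchmark itself is defined for full Monte Carlo transport, but a crude narrow-beam attenuation estimate is sometimes useful for sanity-checking the uncollided component through the three nested shield layers. In the sketch below the attenuation coefficients, thicknesses, and source rate are placeholder values, not those of the benchmark specification.

```python
# Hedged sketch: crude narrow-beam (uncollided) attenuation estimate through
# three nested shield layers. This is NOT the Monte Carlo benchmark itself;
# the linear attenuation coefficients and thicknesses are placeholder values
# that must be replaced with data appropriate to the source energy.
import math

layers = [
    # (material, thickness_cm, mu_per_cm) -- mu values are placeholders
    ("lead",     2.0, 1.2),
    ("aluminum", 5.0, 0.2),
    ("plywood", 10.0, 0.05),
]

source_rate = 1.0e6   # assumed emission rate, photons/s
distance_cm = 100.0   # detector at 1 m from the point source

# Uncollided flux: inverse-square spreading times exponential attenuation
attenuation = math.exp(-sum(mu * t for _, t, mu in layers))
flux = source_rate * attenuation / (4.0 * math.pi * distance_cm**2)
print(f"estimated uncollided flux at detector: {flux:.3e} photons/cm^2/s")
```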

  8. Validating dose rate calibration of radiotherapy photon beams through IAEA/WHO postal audit dosimetry service.

    PubMed

    Jangda, Abdul Qadir; Hussein, Sherali

    2012-05-01

    In external beam radiation therapy (EBRT), the quality assurance (QA) of the radiation beam is crucial to the accurate delivery of the prescribed dose to the patient. One of the dosimetric parameters that require monitoring is the beam output, specified as the dose rate on the central axis under reference conditions. The aim of this project was to validate dose rate calibration of megavoltage photon beams using the International Atomic Energy Agency (IAEA)/World Health Organisation (WHO) postal audit dosimetry service. Three photon beams were audited: a 6 MV beam from the low-energy linac and 6 and 18 MV beams from a dual high-energy linac. The agreement between our stated doses and the IAEA results was within 1% for the two 6 MV beams and within 2% for the 18 MV beam. The IAEA/WHO postal audit dosimetry service provides an independent verification of dose rate calibration protocol by an international facility.

  9. Technical Report: Installed Cost Benchmarks and Deployment Barriers for

    Science.gov Websites

    Researchers from NREL published a report, Installed Cost Benchmarks and Deployment Barriers for Residential Solar Photovoltaics with Energy Storage: Q1 2016, that provides detailed component- and system-level cost breakdowns for residential solar photovoltaic systems with energy storage.

  10. What Are the ACT College Readiness Benchmarks? Information Brief

    ERIC Educational Resources Information Center

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  11. Apples to Oranges: Benchmarking Vocational Education and Training Programmes

    ERIC Educational Resources Information Center

    Bogetoft, Peter; Wittrup, Jesper

    2017-01-01

    This paper discusses methods for benchmarking vocational education and training colleges and presents results from a number of models. It is conceptually difficult to benchmark vocational colleges. The colleges typically offer a wide range of course programmes, and the students come from different socioeconomic backgrounds. We solve the…

  12. Effect of Beta-Blocker Therapy, Maximal Heart Rate, and Exercise Capacity During Stress Testing on Long-Term Survival (from The Henry Ford Exercise Testing Project).

    PubMed

    Hung, Rupert K; Al-Mallah, Mouaz H; Whelton, Seamus P; Michos, Erin D; Blumenthal, Roger S; Ehrman, Jonathan K; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J

    2016-12-01

    Whether lower heart rate thresholds (defined as the percentage of age-predicted maximal heart rate achieved, or ppMHR) should be used to determine chronotropic incompetence in patients on beta-blocker therapy (BBT) remains unclear. In this retrospective cohort study, we analyzed 64,549 adults without congestive heart failure or atrial fibrillation (54 ± 13 years old, 46% women, 29% black) who underwent clinician-referred exercise stress testing at a single health care system in Detroit, Michigan from 1991 to 2009, with median follow-up of 10.6 years for all-cause mortality (interquartile range 7.7 to 14.7 years). Using Cox regression models, we assessed the effect of BBT, ppMHR, and estimated exercise capacity on mortality, with adjustment for demographic data, medical history, pertinent medications, and propensity to be on BBT. There were 9,259 deaths during follow-up. BBT was associated with an 8% lower adjusted achieved ppMHR (91% in no BBT vs 83% in BBT). ppMHR was inversely associated with all-cause mortality but with significant attenuation by BBT (per 10% ppMHR HR: no BBT: 0.80 [0.78 to 0.82] vs BBT: 0.89 [0.87 to 0.92]). Patients on BBT who achieved 65% ppMHR had a similar adjusted mortality rate as those not on BBT who achieved 85% ppMHR (p >0.05). Estimated exercise capacity further attenuated the prognostic value of ppMHR (per-10%-ppMHR HR: no BBT: 0.88 [0.86 to 0.90] vs BBT: 0.95 [0.93 to 0.98]). In conclusion, the prognostic value of ppMHR was significantly attenuated by BBT. For patients on BBT, a lower threshold of 65% ppMHR may be considered for determining worsened prognosis. Estimated exercise capacity further diminished the prognostic value of ppMHR particularly in patients on BBT. Copyright © 2016 Elsevier Inc. All rights reserved.
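
    As a sketch of how such adjusted hazard ratios are typically estimated, the example below fits a Cox proportional-hazards model with the lifelines package on a synthetic data frame. The variable names and toy data are assumptions for illustration; the study's actual adjustment set and propensity adjustment are not reproduced.

```python
# Hedged sketch: Cox proportional-hazards model for all-cause mortality with
# percentage of age-predicted maximal heart rate (ppMHR), beta-blocker therapy
# (BBT), and exercise capacity as covariates. The data are synthetic and the
# covariate set is far smaller than the adjustments described in the paper.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ppmhr_per10": rng.normal(8.7, 1.2, n),        # ppMHR in units of 10%
    "bbt": rng.integers(0, 2, n),                  # on beta-blocker therapy
    "mets": rng.normal(8.0, 2.5, n),               # estimated exercise capacity
    "follow_up_years": rng.uniform(1.0, 15.0, n),  # observed follow-up time
    "died": rng.integers(0, 2, n),                 # all-cause mortality event
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="died")
cph.print_summary()  # hazard ratios per unit covariate, e.g. per 10% ppMHR
```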

  13. Implementation and validation of a conceptual benchmarking framework for patient blood management.

    PubMed

    Kastner, Peter; Breznik, Nada; Gombotz, Hans; Hofmann, Axel; Schreier, Günter

    2015-01-01

    Public health authorities and healthcare professionals are obliged to ensure high-quality health services. Because of the high variability in the utilisation of blood and blood components, benchmarking is indicated in transfusion medicine. The objective was the implementation and validation of a benchmarking framework for Patient Blood Management (PBM) based on the report from the second Austrian benchmark trial. Core modules for automatic report generation have been implemented with KNIME (Konstanz Information Miner) and validated by comparing the output with the results of the second Austrian benchmark trial. Delta analysis shows a deviation of less than 0.1% for 95% of the values (maximum 1.4%). The framework provides a reliable tool for PBM benchmarking. The next step is technical integration with hospital information systems.

  14. Benchmarking multimedia performance

    NASA Astrophysics Data System (ADS)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured, and the system is classified accordingly. At the next step the performance of the system is measured. In many multimedia applications, such as DVD playback, the application needs to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of these problems are presented and analyzed.

  15. Benchmarks Momentum on Increase

    ERIC Educational Resources Information Center

    McNeil, Michele

    2008-01-01

    No longer content with the patchwork quilt of assessments used to measure states' K-12 performance, top policy groups are pushing states toward international benchmarking as a way to better prepare students for a competitive global economy. The National Governors Association, the Council of Chief State School Officers, and the standards-advocacy…

  16. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  17. A review on the benchmarking concept in Malaysian construction safety performance

    NASA Astrophysics Data System (ADS)

    Ishak, Nurfadzillah; Azizan, Muhammad Azizi

    2018-02-01

    The construction industry is one of the major industries propelling Malaysia's economy and contributes substantially to the nation's GDP growth, yet high fatality rates on construction sites have caused concern among safety practitioners and stakeholders. Hence, there is a need for benchmarking the performance of Malaysia's construction industry, especially in terms of safety. This concept can create a fertile ground for ideas, but only in a receptive environment; organizations that share good practices and compare their safety performance against others benefit most in establishing improvements in safety culture. This research was conducted to study awareness of its importance, to evaluate current practice and improvement, and to identify the constraints on implementing benchmarking of safety performance in the industry. Additionally, interviews with construction professionals brought out different views on this concept. A comparison has been made to show the different understandings of the benchmarking approach and of how safety performance can be benchmarked, but these are viewed as one mission: to evaluate objectives identified through benchmarking that will improve the organization's safety performance. Finally, the expected result from this research is to help Malaysia's construction industry implement best practice in safety performance management through the concept of benchmarking.

  18. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    PubMed Central

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642

  19. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    PubMed

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.
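
    As a minimal illustration of the enrichment metrics mentioned (ROC curves and AUCs), the sketch below scores a toy set of actives and decoys with scikit-learn. The labels and scores are invented, and the property-matching step of the authors' algorithm is not reproduced here.

```python
# Hedged sketch: ROC-AUC evaluation of a virtual-screening run on a toy
# benchmarking set. Labels (1 = active ligand, 0 = decoy) and docking-style
# scores are invented for illustration; no property matching is performed.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
n_actives, n_decoys = 50, 950
labels = np.concatenate([np.ones(n_actives), np.zeros(n_decoys)])
# Pretend actives tend to receive better (higher) scores than decoys.
scores = np.concatenate([rng.normal(1.0, 1.0, n_actives),
                         rng.normal(0.0, 1.0, n_decoys)])

auc = roc_auc_score(labels, scores)
fpr, tpr, _ = roc_curve(labels, scores)
print(f"ROC AUC on the toy set: {auc:.3f}")
```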

  20. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    PubMed Central

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478

  1. Characterization of Homozygous Hb Setif (HBA2: c.283G>T) in the Iranian Population.

    PubMed

    Farashi, Samaneh; Garous, Negin F; Vakili, Shadi; Ashki, Mehri; Imanian, Hashem; Azarkeivan, Azita; Najmabadi, Hossein

    2016-01-01

    Hemoglobin (Hb) variants are abnormalities resulting from point mutations in either of the two α-globin genes (HBA2 or HBA1) or the β-globin gene (HBB). Various reports of Hb variants have been described in Iran and other countries around the world. Hb Setif (or HBA2: c.283G>T) is one of these variants, with a mutation at codon 94 of the α2-globin gene, and is found in clinically normal heterozygous individuals. We report here the clinical and hematological findings in two cases of Iranian origin homozygous for this unstable Hb variant.

  2. Requirements for benchmarking personal image retrieval systems

    NASA Astrophysics Data System (ADS)

    Bouguet, Jean-Yves; Dulong, Carole; Kozintsev, Igor; Wu, Yi

    2006-01-01

    It is now common to have accumulated tens of thousands of personal pictures. Efficient access to that many pictures can only be achieved with a robust image retrieval system. This application is of high interest to Intel processor architects. It is highly compute intensive, and could motivate end users to upgrade their personal computers to the next generations of processors. A key question is how to assess the robustness of a personal image retrieval system. Personal image databases are very different from the digital libraries that have been used by many content-based image retrieval systems [1]. For example, a personal image database has a lot of pictures of people, but a small set of different people, typically family, relatives, and friends. Pictures are taken in a limited set of places like home, work, school, and vacation destinations. The most frequent queries are searches for people and for places. These attributes, and many others, affect how a personal image retrieval system should be benchmarked, and benchmarks need to differ from existing ones based on, for example, art images or medical images. The attributes of the data set do not change the list of components needed for benchmarking such systems, as specified in [2]: data sets, query tasks, ground truth, evaluation measures, and benchmarking events. This paper proposes a way to build these components to be representative of personal image databases and of the corresponding usage models.

  3. Expression, purification, crystallization and preliminary X-ray characterization of a putative glycosyltransferase of the GT-A fold found in mycobacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, Zara; Australian Research Council Centre of Excellence in Structural and Functional Microbial Genomics, Monash University, Clayton, Victoria 3800; Crellin, Paul K.

    2008-05-01

    MAP2569c from M. avium subsp. paratuberculosis, a putative glycosyltransferase implicated in mycobacterial cell-wall biosynthesis, was cloned, expressed, purified and crystallized. X-ray diffraction data were collected to 1.8 Å resolution. Glycosidic bond formation is a ubiquitous enzyme-catalysed reaction. This glycosyltransferase-mediated process is responsible for the biosynthesis of innumerable oligosaccharides and glycoconjugates and is often organism- or cell-specific. However, despite the abundance of genomic information on glycosyltransferases (GTs), there is a lack of structural data for this versatile class of enzymes. Here, the cloning, expression, purification and crystallization of an essential 329-amino-acid (34.8 kDa) putative GT of the classic GT-A fold implicated in mycobacterial cell-wall biosynthesis are reported. Crystals of MAP2569c from Mycobacterium avium subsp. paratuberculosis were grown in 1.6 M monoammonium dihydrogen phosphate and 0.1 M sodium citrate pH 5.5. A complete data set was collected to 1.8 Å resolution using synchrotron radiation from a crystal belonging to space group P4₁2₁2.

  4. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for the protection of aquatic life from contaminants in water. Because there is no guidance on which screening benchmarks to use, a set of alternative benchmarks is presented herein. The report presents these alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation, along with the data used to calculate the benchmarks and the sources of those data. It compares the benchmarks and discusses their relative conservatism and utility. Also included are updates of benchmark values where appropriate, new benchmark values, replacement of secondary sources with primary sources, and more complete documentation of the sources and derivation of all values.

  5. Implementation of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  6. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio with concrete examples in nuclear engineering with the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation is essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.) and there is a lack of relevant multiphysics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the

  7. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    PubMed

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against the ENT.UK guidelines to mitigate any gaps. The ENT guidelines 2010 were downloaded from the ENT.UK website. Our guidelines were compared against them to determine whether our performance meets or falls short of the ENT.UK guidelines. Immediate corrective actions will take place if there is a quality chasm between the two guidelines. The ENT.UK guidelines are evidence-based and updated, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. Although often not given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended that benchmarking be included in the list of quality improvement methods for healthcare services.

  8. Issues in benchmarking human reliability analysis methods : a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  9. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  10. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  11. NAS Parallel Benchmark Results 11-96. 1.0

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  12. Deterring Nuclear Proliferation: The Importance of IAEA Safeguards: A TEXTBOOK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, M.D.; Fishbone, L.G.; Gallini, L.

    2012-03-13

    Nuclear terrorism and nuclear proliferation are among the most pressing challenges to international peace and security that we face today. Iran and Syria remain in non-compliance with the safeguards requirements of the NPT, and the nuclear ambitions of North Korea remain unchecked. Despite these challenges, the NPT remains a cornerstone of the nuclear non-proliferation regime, and the safeguards implemented by the International Atomic Energy Agency (IAEA) under the NPT play a critical role in deterring nuclear proliferation. How do they work? Where did they come from? And what is their future? This book answers these questions. Anyone studying the field of nuclear non-proliferation will benefit from reading this book, and for anyone entering the field, the book will enable them to get a running start. Part I describes the foundations of the international safeguards system: its origins in the 1930s - when new discoveries in physics made it clear immediately that nuclear energy held both peril and promise - through the entry into force in 1970 of the NPT, which codified the role of IAEA safeguards as a means to verify states' NPT commitments not to acquire nuclear weapons. Part II describes the NPT safeguards system, which is based on a model safeguards agreement developed specifically for the NPT, The Structure and Content of Agreements between the Agency and States Required in Connection with the Treaty on the Non-Proliferation of Nuclear Weapons, which has been published by the IAEA as INFCIRC/153. Part III describes events, especially in South Africa, the DPRK, and Iraq in the early 1990s, that triggered a transformation in the way in which safeguards were conceptualized and implemented.

  13. Engine Benchmarking - Final CRADA Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallner, Thomas

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  14. [Benchmarking of university trauma centers in Germany. Research and teaching].

    PubMed

    Gebhard, F; Raschke, M; Ruchholtz, S; Meffert, R; Marzi, I; Pohlemann, T; Südkamp, N; Josten, C; Zwipp, H

    2011-07-01

    Benchmarking is a very popular business process and is now used in research as well. The aim of the present study is to elucidate key figures of German university trauma departments regarding research and teaching. The data set is based upon the monthly reports provided by the administration of each university. The study shows that only well-known parameters such as fund-raising and impact factors can be used to benchmark university-based trauma centers. The German federal system does not allow a nationwide benchmarking.

  15. Simultaneous monitoring of maternal and fetal heart rate variability during labor in relation with fetal gender.

    PubMed

    Gonçalves, Hernâni; Fernandes, Diana; Pinto, Paula; Ayres-de-Campos, Diogo; Bernardes, João

    2017-11-01

    Male gender is considered a risk factor for several adverse perinatal outcomes. The effect of fetal gender on fetal heart rate (FHR) has been the subject of several studies with contradictory results. The importance of maternal heart rate (MHR) monitoring during labor has also been investigated, but less is known about the effect of fetal gender on MHR. The aim of this study is to simultaneously assess maternal and fetal heart rate variability during labor in relation to fetal gender. Simultaneous MHR and FHR recordings were obtained from 44 singleton term pregnancies during the last 2 hr of labor (H1, H2). Heart rate tracings were analyzed using linear (time- and frequency-domain) and nonlinear indices. Both linear and nonlinear components were considered in assessing the FHR-MHR interaction, including cross-sample entropy (cross-SampEn). Mothers carrying male fetuses (n = 22) had significantly higher values for linear indices related to MHR average, variability, and sympatho-vagal balance, while the opposite occurred for the high-frequency component and most nonlinear indices. Significant differences in FHR were only observed in H1, with higher entropy values in female fetuses. Assessing the differences between FHR and MHR, statistically significant differences were obtained for most nonlinear indices between genders. A significantly higher cross-SampEn was observed in mothers carrying female fetuses (n = 22), denoting lower synchrony or similarity between MHR and FHR. The variability of MHR and the synchrony/similarity between MHR and FHR vary with respect to fetal gender during labor. These findings suggest that fetal gender needs to be taken into account when simultaneously monitoring MHR and FHR. © 2017 Wiley Periodicals, Inc.
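
    Cross-sample entropy (cross-SampEn) quantifies the asynchrony between two series: higher values indicate lower synchrony or similarity. The sketch below is a plain NumPy implementation of the usual definition on z-scored signals; the embedding dimension and tolerance are typical illustrative choices, not necessarily the settings used in the study.

```python
# Hedged sketch: cross-sample entropy (cross-SampEn) between two equal-length
# series. Embedding dimension m and tolerance r are illustrative defaults,
# not the paper's choices, and the heart rate traces below are synthetic.
import numpy as np

def cross_sampen(u, v, m=2, r=0.2):
    """Higher values indicate lower synchrony/similarity between u and v."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    n = min(len(u), len(v))

    def match_count(length):
        # Templates of the given length from u, compared against templates from v
        templates_u = np.array([u[i:i + length] for i in range(n - m)])
        templates_v = np.array([v[i:i + length] for i in range(n - m)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates_u[:, None, :] - templates_v[None, :, :]), axis=2)
        return np.sum(dist <= r)

    b = match_count(m)       # matches of length m
    a = match_count(m + 1)   # matches of length m + 1
    return -np.log(a / b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mhr = np.cumsum(rng.normal(0, 1, 600))               # toy maternal trace
    fhr = 0.5 * mhr + np.cumsum(rng.normal(0, 1, 600))   # partly coupled fetal trace
    print("cross-SampEn:", cross_sampen(mhr, fhr))
```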

  16. Benchmarking for maximum value.

    PubMed

    Baldwin, Ed

    2009-03-01

    Speaking at the most recent Healthcare Estates conference, Ed Baldwin, of international built asset consultancy EC Harris LLP, examined the role of benchmarking and market-testing--two of the key methods used to evaluate the quality and cost-effectiveness of hard and soft FM services provided under PFI healthcare schemes to ensure they are offering maximum value for money.

  17. MARC calculations for the second WIPP structural benchmark problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.

    1981-05-01

    This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.

  18. Local implementation of the Essence of Care benchmarks.

    PubMed

    Jones, Sue

    To understand clinical practice benchmarking from the perspective of nurses working in a large acute NHS trust and to determine whether the nurses perceived that their commitment to Essence of Care led to improvements in care, the factors that influenced their role in the process and the organisational factors that influenced benchmarking. An ethnographic case study approach was adopted. Six themes emerged from the data. Two organisational issues emerged: leadership and the values and/or culture of the organisation. The findings suggested that the leadership ability of the Essence of Care link nurses and the value placed on this work by the organisation were key to the success of benchmarking. A model for successful implementation of the Essence of Care is proposed based on the findings of this study, which lends itself to testing by other organisations.

  19. Radionuclide transfer to fruit in the IAEA TRS No. 472

    NASA Astrophysics Data System (ADS)

    Carini, F.; Pellizzoni, M.; Giosuè, S.

    2012-04-01

    This paper describes the approach taken to present the information on fruits in the IAEA report TRS No. 472, supported by IAEA-TECDOC-1616, which describes the key transfer processes, concepts and conceptual models regarded as important for dose assessment, as well as relevant parameters for modelling radionuclide transfer in fruits. The information relates to fruit plants grown in agricultural ecosystems of temperate regions. The relative significance of each pathway after a release of radionuclides depends upon the radionuclide, the kind of crop, the stage of plant development and the season at the time of deposition. Fruit intended as a component of the human diet is borne by plants that are heterogeneous in habit and in morphological and physiological traits. Information on radionuclides in fruit systems has therefore been rationalised by characterising plants in three groups: woody trees, shrubs, and herbaceous plants. Parameter values have been collected from the open literature, conference proceedings, institutional reports, books and international databases. Data on root uptake are reported as transfer factor values related to fresh weight, since consumption data for fruits are usually given in fresh weight.

  20. Benchmarking for On-Scalp MEG Sensors.

    PubMed

    Xie, Minshu; Schneiderman, Justin F; Chukharkin, Maxim L; Kalabukhov, Alexei; Riaz, Bushra; Lundqvist, Daniel; Whitmarsh, Stephen; Hamalainen, Matti; Jousmaki, Veikko; Oostenveld, Robert; Winkler, Dag

    2017-06-01

    We present a benchmarking protocol for quantitatively comparing emerging on-scalp magnetoencephalography (MEG) sensor technologies to their counterparts in state-of-the-art MEG systems. As a means of validation, we compare a high-critical-temperature superconducting quantum interference device (high-Tc SQUID) with the low-Tc SQUIDs of an Elekta Neuromag TRIUX system in MEG recordings of auditory and somatosensory evoked fields (SEFs) on one human subject. We measure the expected signal gain for the auditory-evoked fields (deeper sources) and notice some unfamiliar features in the on-scalp sensor-based recordings of SEFs (shallower sources). The experimental results serve as a proof of principle for the benchmarking protocol. This approach is straightforward, general to various on-scalp MEG sensors, and convenient to use on human subjects. The unexpected features in the SEFs suggest on-scalp MEG sensors may reveal information about neuromagnetic sources that is otherwise difficult to extract from state-of-the-art MEG recordings. As the first systematically established on-scalp MEG benchmarking protocol, magnetic sensor developers can employ this method to prove the utility of their technology in MEG recordings. Further exploration of the SEFs with on-scalp MEG sensors may reveal unique information about their sources.

  1. I-NERI Quarterly Technical Report (April 1 to June 30, 2005)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang Oh; Prof. Hee Cheon NO; Prof. John Lee

    2005-06-01

    The objective of this Korean/United States/laboratory/university collaboration is to develop new advanced computational methods for safety analysis codes for very-high-temperature gas-cooled reactors (VHTGRs) and to provide numerical and experimental validation of these computer codes. This study consists of five tasks for FY-03: (1) development of computational methods for the VHTGR, (2) theoretical modification of the aforementioned computer codes for molecular diffusion (RELAP5/ATHENA) and modeling of CO and CO2 equilibrium (MELCOR), (3) development of a state-of-the-art methodology for VHTGR neutronic analysis and calculation of accurate power distributions and decay heat deposition rates, (4) a reactor cavity cooling system experiment, and (5) a graphite oxidation experiment. Second quarter of Year 3: (A) Prof. NO and Kim continued Task 1. As a further plant application of the GAMMA code, we conducted two analyses: the IAEA GT-MHR benchmark calculation for LPCC and an air ingress analysis for the 600 MWt PMR. The GAMMA code shows a peak fuel temperature trend comparable to those of the codes from other countries. The analysis results for air ingress show a much different trend from that of the previous PBR analysis: a later onset of natural circulation and a less significant rise in graphite temperature. (B) Prof. Park continued Task 2. We have designed a new separate-effects test device having the same heat transfer area but a different diameter and total number of U-bends in the air cooling pipe. The new design has a smaller pressure drop in the air cooling pipe than the previous one, as it was designed with a larger diameter and fewer U-bends. With this device, additional experiments have been performed to obtain temperature distributions in the water tank and at the surface and center of the cooling pipe along its axis. The results will be used to optimize the design of the SNU-RCCS. (C) Prof. NO continued Task 3. The experimental work on air ingress is going on without any concern: With nuclear graphite IG-110, various kinetic parameters and

  2. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.
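
    The screening rule described above (flag a chemical only when its measured soil concentration exceeds both the phytotoxicity benchmark and the background concentration) is simple enough to state directly in code; the concentrations in the sketch below are invented for illustration.

```python
# Hedged sketch: the contaminant-screening rule described above -- a chemical
# is flagged as a contaminant of potential concern only when its measured soil
# concentration exceeds BOTH the phytotoxicity benchmark AND the background
# concentration. The example concentrations (mg/kg) are invented.
def contaminants_of_potential_concern(measured, benchmarks, background):
    return [
        chem for chem, conc in measured.items()
        if chem in benchmarks
        and conc > benchmarks[chem]
        and conc > background.get(chem, 0.0)
    ]

measured   = {"zinc": 180.0, "arsenic": 9.0, "copper": 40.0}
benchmarks = {"zinc": 50.0, "arsenic": 10.0, "copper": 100.0}
background = {"zinc": 60.0, "arsenic": 8.0, "copper": 30.0}

print(contaminants_of_potential_concern(measured, benchmarks, background))
# -> ['zinc']
```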

  3. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. As a result, it also discusses opportunities and challenges for future developments in these fields.

  4. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE PAGES

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.; ...

    2016-03-07

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. As a result, it also discusses opportunities and challenges for future developments in these fields.

  5. NOTE: Monte Carlo simulation of correction factors for IAEA TLD holders

    NASA Astrophysics Data System (ADS)

    Hultqvist, Martha; Fernández-Varea, José M.; Izewska, Joanna

    2010-03-01

    The IAEA standard thermoluminescent dosimeter (TLD) holder has been developed for the IAEA/WHO TLD postal dose program for audits of high-energy photon beams, and it is also employed by the ESTRO-QUALity assurance network (EQUAL) and several national TLD audit networks. Factors correcting for the influence of the holder on the TL signal under reference conditions have been calculated in the present work from Monte Carlo simulations with the PENELOPE code for 60Co γ-rays and 4, 6, 10, 15, 18 and 25 MV photon beams. The simulation results are around 0.2% smaller than measured factors reported in the literature, but well within the combined standard uncertainties. The present study supports the use of the experimentally obtained holder correction factors in the determination of the absorbed dose to water from the TL readings; the factors calculated by means of Monte Carlo simulations may be adopted for the cases where there are no measured data.

  6. Benchmarking routine psychological services: a discussion of challenges and methods.

    PubMed

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
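
    The benchmarking comparison described above amounts to computing a pre-post effect size for a service and placing it against the published high/average/poor values. The sketch below shows one common convention (standardizing the mean change by the baseline standard deviation) on invented PHQ-9 scores; it is not necessarily the exact estimator used in the report.

```python
# Hedged sketch: pre-post effect size for a routine service, compared against
# the depression (PHQ-9) benchmarks quoted above (0.91 high, 0.73 average,
# 0.46 poor). The scores are invented, and standardizing the mean change by
# the baseline SD is one common convention, not necessarily the paper's exact one.
import numpy as np

rng = np.random.default_rng(7)
pre = rng.normal(16.0, 5.0, 300)         # toy baseline PHQ-9 scores
post = pre - rng.normal(5.0, 4.0, 300)   # toy post-treatment scores

effect_size = (pre.mean() - post.mean()) / pre.std(ddof=1)

benchmarks = {"high": 0.91, "average": 0.73, "poor": 0.46}
band = ("high" if effect_size >= benchmarks["high"]
        else "average" if effect_size >= benchmarks["average"]
        else "poor or below average")
print(f"pre-post ES = {effect_size:.2f} -> {band} relative to IAPT benchmarks")
```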

  7. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolutions and high signal-to-noise ratios. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  8. Benchmarking Ada tasking on tightly coupled multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Collard, Philippe; Goforth, Andre; Marquardt, Matthew

    1989-01-01

    The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.

  9. Marking Closely or on the Bench?: An Australian's Benchmark Statement.

    ERIC Educational Resources Information Center

    Jones, Roy

    2000-01-01

    Reviews the benchmark statements of the Quality Assurance Agency for Higher Education in the United Kingdom. Examines the various sections within the benchmark. States that in terms of emphasizing the positive attributes of the geography discipline the statements have wide utility and applicability. (CMK)

  10. 40 CFR 141.543 - How is the disinfection benchmark calculated?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Filtration and Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection... 40 Protection of Environment 22 2010-07-01 2010-07-01 false How is the disinfection benchmark...

  11. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Developing the disinfection profile... Cryptosporidium Disinfection Profiling and Benchmarking Requirements § 141.709 Developing the disinfection profile and benchmark. (a) Systems required to develop disinfection profiles under § 141.708 must follow the...

  12. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Developing the disinfection profile... Cryptosporidium Disinfection Profiling and Benchmarking Requirements § 141.709 Developing the disinfection profile and benchmark. (a) Systems required to develop disinfection profiles under § 141.708 must follow the...

  13. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Developing the disinfection profile... Cryptosporidium Disinfection Profiling and Benchmarking Requirements § 141.709 Developing the disinfection profile and benchmark. (a) Systems required to develop disinfection profiles under § 141.708 must follow the...

  14. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Developing the disinfection profile... Cryptosporidium Disinfection Profiling and Benchmarking Requirements § 141.709 Developing the disinfection profile and benchmark. (a) Systems required to develop disinfection profiles under § 141.708 must follow the...

  15. Benchmarking of HEU Metal Annuli Critical Assemblies with Internally Reflected Graphite Cylinder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiaobo, Liu; Bess, John D.; Marshall, Margaret A.

    Three critical assembly configurations, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled from HEU metal annuli of three different diameters (15-9 inches, 15-7 inches and 13-7 inches) with an internally reflecting graphite cylinder, are evaluated and benchmarked. The experimental uncertainties (0.00055 for each configuration) and the biases of the detailed benchmark models (-0.00179, -0.00189 and -0.00114, respectively) were determined, and experimental benchmark keff results were obtained for both the detailed and simplified models. The calculated results for both the detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree with the benchmark experimental results to within 0.2%. These are acceptable benchmark experiments for inclusion in the ICSBEP Handbook.

  16. Rethinking the reference collection: exploring benchmarks and e-book availability.

    PubMed

    Husted, Jeffrey T; Czechowski, Leslie J

    2012-01-01

    Librarians in the Health Sciences Library System at the University of Pittsburgh explored the possibility of developing an electronic reference collection that would replace the print reference collection, thus providing access to these valuable materials to a widely dispersed user population. The librarians evaluated the print reference collection and standard collection development lists as potential benchmarks for the electronic collection, and they determined which books were available in electronic format. They decided that the low availability of electronic versions of titles in each benchmark group rendered the creation of an electronic reference collection using either benchmark impractical.

  17. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods used to calculate performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  18. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    PubMed

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a widely used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day-to-day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
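
    The three benchmarked quantities can be illustrated with a short sketch. The snippet below is a generic illustration, not the authors' published protocol: it assumes the detection threshold is taken as the mean dark signal plus three standard deviations, and the linear range is taken as the span of signals that stay within a tolerance of a straight-line fit to the low-exposure points; the function names and tolerance are hypothetical.

```python
import numpy as np

def detection_threshold(dark_frames):
    """Detection threshold as mean dark signal plus three standard deviations."""
    dark = np.asarray(dark_frames, float)
    return dark.mean() + 3.0 * dark.std(ddof=1)

def linear_dynamic_range(exposure, signal, max_dev=0.05):
    """(low, high) signal bounds over which the response stays within max_dev
    of a straight line fitted through the low-exposure points."""
    exposure, signal = np.asarray(exposure, float), np.asarray(signal, float)
    n = max(2, len(exposure) // 3)                       # fit the low-signal third
    slope, intercept = np.polyfit(exposure[:n], signal[:n], 1)
    predicted = slope * exposure + intercept
    in_range = np.abs(signal - predicted) <= max_dev * predicted
    return signal[in_range].min(), signal[in_range].max()
```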

  19. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
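
    The screening logic described above can be sketched in a few lines. The snippet below is only an illustration of that decision rule; the benchmark values in the dictionary are placeholders, not the report's actual numbers, and the function name is hypothetical.

```python
# Placeholder benchmark set for one chemical (ug/L); not the report's values.
benchmarks_ug_per_l = {
    "acute_NAWQC": 120.0,
    "chronic_NAWQC": 12.0,
    "SAV": 90.0,
    "SCV": 9.0,
    "lowest_chronic_value_fish": 15.0,
}

def screen_chemical(ambient_ug_per_l, benchmarks):
    """Compare an ambient concentration against all benchmarks and flag COCs."""
    exceeded = [name for name, value in benchmarks.items()
                if ambient_ug_per_l > value]
    # NAWQC are ARARs: exceeding either one makes the chemical a contaminant of concern.
    is_coc = any("NAWQC" in name for name in exceeded)
    return {"exceeded": exceeded,
            "contaminant_of_concern": is_coc,
            "n_benchmarks_exceeded": len(exceeded)}

print(screen_chemical(14.0, benchmarks_ug_per_l))
```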

  20. Sensitivity of an Elekta iView GT a-Si EPID model to delivery errors for pre-treatment verification of IMRT fields.

    PubMed

    Herwiningsih, Sri; Hanlon, Peta; Fielding, Andrew

    2014-12-01

    A Monte Carlo model of an Elekta iViewGT amorphous silicon electronic portal imaging device (a-Si EPID) has been validated for pre-treatment verification of clinical IMRT treatment plans. The simulations involved the use of the BEAMnrc and DOSXYZnrc Monte Carlo codes to predict the response of the iViewGT a-Si EPID model. The predicted EPID images were compared to the measured images obtained from the experiment. The measured EPID images were obtained by delivering a photon beam from an Elekta Synergy linac to the Elekta iViewGT a-Si EPID. The a-Si EPID was used with no additional build-up material. Frame averaged EPID images were acquired and processed using in-house software. The agreement between the predicted and measured images was analyzed using the gamma analysis technique with acceptance criteria of 3 %/3 mm. The results show that the predicted EPID images for four clinical IMRT treatment plans have a good agreement with the measured EPID signal. Three prostate IMRT plans were found to have an average gamma pass rate of more than 95.0 % and a spinal IMRT plan has the average gamma pass rate of 94.3 %. During the period of performing this work a routine MLC calibration was performed and one of the IMRT treatments re-measured with the EPID. A change in the gamma pass rate for one field was observed. This was the motivation for a series of experiments to investigate the sensitivity of the method by introducing delivery errors, MLC position and dosimetric overshoot, into the simulated EPID images. The method was found to be sensitive to 1 mm leaf position errors and 10 % overshoot errors.
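
    The gamma analysis mentioned above combines a dose-difference and a distance-to-agreement criterion into a single index. The sketch below is a simplified one-dimensional global gamma index under the stated 3%/3 mm criteria, intended only to illustrate the idea; it is not the 2D clinical implementation used in the study, and the function names are assumptions.

```python
import numpy as np

def gamma_index_1d(ref_pos_mm, ref_dose, eval_pos_mm, eval_dose,
                   dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1D global gamma: dose_tol is a fraction of the reference maximum."""
    ref_pos_mm, ref_dose = np.asarray(ref_pos_mm, float), np.asarray(ref_dose, float)
    eval_pos_mm, eval_dose = np.asarray(eval_pos_mm, float), np.asarray(eval_dose, float)
    dd = dose_tol * ref_dose.max()
    gamma = np.empty_like(ref_dose)
    for i, (x, d) in enumerate(zip(ref_pos_mm, ref_dose)):
        dist_term = ((eval_pos_mm - x) / dist_tol_mm) ** 2
        dose_term = ((eval_dose - d) / dd) ** 2
        gamma[i] = np.sqrt(dist_term + dose_term).min()
    return gamma

def pass_rate(gamma):
    """Percentage of reference points with gamma <= 1."""
    return 100.0 * np.mean(gamma <= 1.0)
```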

  1. Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.

    PubMed

    Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S

    2015-02-03

    It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach, and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence intervals of the half-life for transformation in the lake system ranged from 780-5700 days for carbamazepine to <1-2 days for ketoprofen. The persistence estimates obtained using the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow of chemicals is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
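
    One simple way the benchmarking idea can be operationalized is sketched below, under strong assumptions that are not taken from the paper: a single well-mixed lake at steady state with a known hydraulic residence time, and a benchmark chemical (here acesulfame K) that is not transformed at all, so that any extra attenuation of the target chemical relative to the benchmark reflects transformation. The numbers and function name are illustrative only.

```python
import math

def transformation_half_life_days(ratio_inlet, ratio_outlet, residence_time_days):
    """Half-life of a target chemical from the change in its concentration ratio
    to a persistent benchmark chemical between lake inlet and outlet, assuming a
    single well-mixed water body at steady state (C_out/C_in = 1/(1 + k*tau))."""
    k = (ratio_inlet / ratio_outlet - 1.0) / residence_time_days  # first-order rate, 1/day
    if k <= 0:
        return float("inf")  # no detectable transformation relative to the benchmark
    return math.log(2.0) / k

# Illustrative numbers only.
print(transformation_half_life_days(ratio_inlet=1.0, ratio_outlet=0.8,
                                    residence_time_days=60.0))
```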

  2. The art and science of using routine outcome measurement in mental health benchmarking.

    PubMed

    McKay, Roderick; Coombs, Tim; Duerden, David

    2014-02-01

    This paper reports and critiques the application of routine outcome measurement data when benchmarking Australian mental health services. The experience of the authors as participants and facilitators of benchmarking activities is augmented by a review of the literature regarding mental health benchmarking in Australia. Although the published literature is limited, in practice, routine outcome measures, in particular the Health of the Nation Outcome Scales (HoNOS) family of measures, are used in a variety of benchmarking activities. Their use in exploring similarities and differences in consumers between services, and in comparing the outcomes of care, is illustrated. This requires the rigour of science in data management and interpretation, supplemented by the art that comes from clinical experience, a desire to reflect on clinical practice and the flexibility to use incomplete data to explore clinical practice. Routine outcome measurement data can be used in a variety of ways to support mental health benchmarking. With the increasing sophistication of information development in mental health, the opportunity to become involved in benchmarking will continue to increase. The techniques used during benchmarking and the insights gathered may prove useful to support reflection on practice by psychiatrists and other senior mental health clinicians.

  3. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  4. Benchmarking facilities providing care: An international overview of initiatives

    PubMed Central

    Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti

    2015-01-01

    We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800

  5. Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.

    PubMed

    Martin, Brian S; Arbore, Mark

    2016-04-01

    Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engagement in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measureable improvement in both organizational process and culture. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. COMPETITIVE BIDDING IN MEDICARE ADVANTAGE: EFFECT OF BENCHMARK CHANGES ON PLAN BIDS

    PubMed Central

    Song, Zirui; Landrum, Mary Beth; Chernew, Michael E.

    2013-01-01

    Bidding has been proposed to replace or complement the administered prices that Medicare pays to hospitals and health plans. In 2006, the Medicare Advantage program implemented a competitive bidding system to determine plan payments. In perfectly competitive models, plans bid their costs and thus bids are insensitive to the benchmark. Under many other models of competition, bids respond to changes in the benchmark. We conceptualize the bidding system and use an instrumental variable approach to study the effect of benchmark changes on bids. We use 2006–2010 plan payment data from the Centers for Medicare and Medicaid Services, published county benchmarks, actual realized fee-for-service costs, and Medicare Advantage enrollment. We find that a $1 increase in the benchmark leads to about a $0.53 increase in bids, suggesting that plans in the Medicare Advantage market have meaningful market power. PMID:24308881

  7. Competitive bidding in Medicare Advantage: effect of benchmark changes on plan bids.

    PubMed

    Song, Zirui; Landrum, Mary Beth; Chernew, Michael E

    2013-12-01

    Bidding has been proposed to replace or complement the administered prices that Medicare pays to hospitals and health plans. In 2006, the Medicare Advantage program implemented a competitive bidding system to determine plan payments. In perfectly competitive models, plans bid their costs and thus bids are insensitive to the benchmark. Under many other models of competition, bids respond to changes in the benchmark. We conceptualize the bidding system and use an instrumental variable approach to study the effect of benchmark changes on bids. We use 2006-2010 plan payment data from the Centers for Medicare and Medicaid Services, published county benchmarks, actual realized fee-for-service costs, and Medicare Advantage enrollment. We find that a $1 increase in the benchmark leads to about a $0.53 increase in bids, suggesting that plans in the Medicare Advantage market have meaningful market power. Copyright © 2013 Elsevier B.V. All rights reserved.
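
    The headline pass-through estimate comes from an instrumental-variable regression of plan bids on county benchmarks. Below is a minimal two-stage least-squares sketch on synthetic data, showing only the estimator; it makes no attempt to reproduce the paper's actual instrument, controls, or data, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic data: instrument z shifts the benchmark; u is an unobserved shock
# that affects both the benchmark and the bid (the source of endogeneity).
z = rng.normal(size=n)
u = rng.normal(size=n)
benchmark = 800 + 25 * z + 10 * u + rng.normal(scale=5, size=n)
bid = 600 + 0.53 * benchmark + 15 * u + rng.normal(scale=5, size=n)

def two_stage_least_squares(y, x, instrument):
    """2SLS with a single endogenous regressor and a constant."""
    Z = np.column_stack([np.ones_like(instrument), instrument])
    # First stage: project the endogenous regressor on the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    # Second stage: regress the outcome on the fitted values.
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

intercept, pass_through = two_stage_least_squares(bid, benchmark, z)
print(f"estimated pass-through of a $1 benchmark increase: ${pass_through:.2f}")
```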

  8. Influences of MxA gene -88 G/T and IFN-gamma +874 A/T on the natural history of hepatitis B virus infection in an endemic area.

    PubMed

    Peng, X M; Lei, R X; Gu, L; Ma, H H; Xie, Q F; Gao, Z L

    2007-10-01

    The influence of human genetics on the natural history of hepatitis B virus (HBV) infection may be diminished in endemic areas because infection at a young age predisposes to chronic HBV infection. The present study aimed to address this issue through the determination of the influences of single nucleotide polymorphisms (SNPs) of myxovirus resistance-1 (MxA) -88 G/T and interferon (IFN)-gamma +874 A/T on the natural history of HBV infection in endemic regions. One hundred adult patients with self-limiting HBV infection (positive for both anti-HBs and anti-HBc) and 340 adult patients with persistent HBV infection were recruited from southern China, an endemic area with an HBsAg carrier rate of 17.8%. SNPs of MxA -88 G/T and interferon (IFN)-gamma +874 A/T were typed using a protocol based on competitively differentiated polymerase chain reaction. A highly significant difference in the distribution of MxA -88 G/T was observed between those with persistent and self-limiting HBV infections. The latter displayed a lower frequency of the GG genotype (41.0% vs. 52.9%, P = 0.036) and a higher frequency of the TT genotype (16.0% vs. 2.4%, P < 0.001), compared to patients with persistent infection. These differences were not gender- or age-specific. However, no significant difference in the distribution of IFN-gamma +874 A/T was observed between the two groups: the frequencies of the AA genotype (65.0% vs. 72.8%, P = 0.139) and the TT genotype (2.0% vs. 1.2%, P = 0.894) were comparable. These results suggest that MxA gene -88 G/T and IFN-gamma +874 A/T behave differently in endemic HBV infections. Further study is necessary to clarify the influences of human genetics on endemic HBV infections.

  9. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can be best used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and the use of these produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all
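
    The comparison against benchmarks rests on the continuous ranked probability score (CRPS) and the associated skill score. Below is a minimal sketch of the standard empirical ensemble CRPS and a CRPS-based skill score relative to a benchmark; the discharge values are illustrative only and the function names are assumptions, not the EFAS implementation.

```python
import numpy as np

def crps_ensemble(members, observation):
    """Empirical CRPS for one forecast: E|X - y| - 0.5 * E|X - X'|."""
    x = np.asarray(members, float)
    term1 = np.abs(x - observation).mean()
    term2 = 0.5 * np.abs(x[:, None] - x[None, :]).mean()
    return term1 - term2

def crpss(forecast_crps, benchmark_crps):
    """Skill score relative to a benchmark; positive means the forecast beats it."""
    return 1.0 - forecast_crps / benchmark_crps

# Illustrative single time step: ensemble discharge forecast vs. a climatology benchmark.
obs = 105.0
ensemble = np.array([98, 102, 110, 95, 104, 99, 107, 101], float)
climatology = np.array([60, 80, 100, 120, 140, 160], float)

print(crpss(crps_ensemble(ensemble, obs), crps_ensemble(climatology, obs)))
```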

  10. Using Institutional Survey Data to Jump-Start Your Benchmarking Process

    ERIC Educational Resources Information Center

    Chow, Timothy K. C.

    2012-01-01

    Guided by the missions and visions, higher education institutions utilize benchmarking processes to identify better and more efficient ways to carry out their operations. Aside from the initial planning and organization steps involved in benchmarking, a matching or selection step is crucial for identifying other institutions that have good…

  11. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  12. Benchmarking in pathology: development of an activity-based costing model.

    PubMed

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
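
    The hierarchical tree-structured model can be pictured as a tree whose leaves are tests and whose internal nodes are disciplines, departments or sites: avoidable costs entered at a node are pushed down to its children in proportion to an activity driver such as test volume, until every leaf carries a cost per test. The sketch below is a generic, hypothetical illustration of that allocation idea and does not reproduce the BiP program's actual cost model; all class and field names are assumptions.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class CostNode:
    """A discipline/site (internal node) or a test (leaf) in an ABC hierarchy."""
    name: str
    driver_units: float = 0.0             # activity driver, e.g. number of tests
    direct_cost: float = 0.0              # avoidable cost entered at this node
    children: list[CostNode] = field(default_factory=list)

    def total_driver(self) -> float:
        return self.driver_units + sum(c.total_driver() for c in self.children)

    def allocate(self, inherited=0.0, results=None):
        """Distribute this node's cost, plus anything inherited from above,
        across children in proportion to their total driver units."""
        results = {} if results is None else results
        pool = self.direct_cost + inherited
        if not self.children:                                  # leaf: a test
            results[self.name] = pool / max(self.driver_units, 1.0)  # cost per test
            return results
        total = sum(c.total_driver() for c in self.children)
        for c in self.children:
            c.allocate(pool * c.total_driver() / total, results)
        return results

lab = CostNode("laboratory", direct_cost=50_000, children=[
    CostNode("chemistry", direct_cost=20_000, children=[
        CostNode("glucose", driver_units=8_000),
        CostNode("lipid_panel", driver_units=2_000)]),
    CostNode("haematology", direct_cost=10_000, children=[
        CostNode("full_blood_count", driver_units=5_000)]),
])
print(lab.allocate())   # unit cost per test
```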

  13. A call for benchmarking transposable element annotation methods.

    PubMed

    Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu

    2015-01-01

    DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks-that is, no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.

  14. Interactive visual optimization and analysis for RFID benchmarking.

    PubMed

    Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C

    2009-01-01

    Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.

  15. GT0 Explosion Sources for IMS Infrasound Calibration: Charge Design and Yield Estimation from Near-source Observations

    NASA Astrophysics Data System (ADS)

    Gitterman, Y.; Hofstetter, R.

    2014-03-01

    Three large-scale on-surface explosions were conducted by the Geophysical Institute of Israel (GII) at the Sayarim Military Range, Negev desert, Israel: about 82 tons of strong high explosives in August 2009, and two explosions of about 10 and 100 tons of ANFO explosives in January 2011. It was a collaborative effort between Israel, CTBTO, USA and several European countries, with the main goal to provide fully controlled ground truth (GT0) infrasound sources, monitored by extensive observations, for calibration of International Monitoring System (IMS) infrasound stations in Europe, Middle East and Asia. In all shots, the explosives were assembled like a pyramid/hemisphere on dry desert alluvium, with a complicated explosion design, different from the ideal homogenous hemisphere used in similar experiments in the past. Strong boosters and an upward charge detonation scheme were applied to provide more energy radiated to the atmosphere. Under these conditions the evaluation of the actual explosion yield, an important source parameter, is crucial for the GT0 calibration experiment. Audio-visual, air-shock and acoustic records were utilized for interpretation of observed unique blast effects, and for determination of blast wave parameters suited for yield estimation and the associated relationships. High-pressure gauges were deployed at 100-600 m to record air-blast properties, evaluate the efficiency of the charge design and energy generation, and provide a reliable estimation of the charge yield. The yield estimators, based on empirical scaled relations for well-known basic air-blast parameters—the peak pressure, impulse and positive phase duration, as well as on the crater dimensions and seismic magnitudes, were analyzed. A novel empirical scaled relationship for the little-known secondary shock delay was developed, consistent for broad ranges of ANFO charges and distances, which facilitates using this stable and reliable air-blast parameter as a new potential
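
    The yield estimators mentioned above rest on Hopkinson-Cranz cube-root scaling, in which an air-blast parameter measured at range R from a charge of mass W depends only on the scaled distance Z = R / W^(1/3). A generic sketch of inverting such a scaled relation for the yield is shown below; `peak_pressure_at_scaled_distance` is a hypothetical placeholder for whichever empirical curve is actually fitted (the paper's specific relations are not reproduced here).

```python
from scipy.optimize import brentq

def estimate_yield_kg(range_m, observed_peak_kpa, peak_pressure_at_scaled_distance,
                      w_low_kg=1.0, w_high_kg=1.0e6):
    """Invert an empirical scaled relation P = f(Z), with Z = R / W**(1/3), for the
    charge mass W that reproduces the observed peak overpressure at a given range."""
    def mismatch(w_kg):
        z = range_m / w_kg ** (1.0 / 3.0)
        return peak_pressure_at_scaled_distance(z) - observed_peak_kpa
    return brentq(mismatch, w_low_kg, w_high_kg)

# In practice one such estimate per gauge (and per parameter: pressure, impulse,
# positive phase duration) would be averaged to obtain the final yield.
```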

  16. Metamorphism Near the Dike-Gabbro Transition in the Ocean Crust Based on Preliminary Results from Oman Drilling Project Hole GT3A

    NASA Astrophysics Data System (ADS)

    Manning, C. E.; Nozaka, T.; Harris, M.; Michibayashi, K.; de Obeso, J. C.; D'Andres, J.; Lefay, R.; Leong, J. A. M.; Zeko, D.; Kelemen, P. B.; Teagle, D. A. H.

    2017-12-01

    Oman Drilling Project Hole GT3A intersected 400 m of altered basaltic dikes, gabbros, and diorites. The 100% recovery affords an unprecedented opportunity to study metamorphism and hydrothermal alteration near the dike-gabbro transition in the ocean crust. Hydrothermal alteration is ubiquitous; all rocks are at least moderately altered, and mean alteration intensity is 54%. The earliest alteration in all rock types is background replacement of igneous minerals, some of which occurred at clinopyroxene amphibolite facies, as indicated by brown-green hornblende, calcic plagioclase, and secondary cpx. In addition, background alteration includes greenschist, subgreenschist, and zeolite facies minerals. More extensive alteration is locally observed in halos around veins, patches, and zones related to deformation. Dense networks of hydrothermal veins record a complex history of fluid-rock alteration. During core description, 10,727 individual veins and 371 vein networks were logged in the 400 m of Hole GT3A. The veins displayed a range of textures and connectivities. The total density of veins in Hole GT3A is 26.8 veins m-1. Vein density shows no correlation with depth, but may be higher near dike margins and faults. Vein minerals include amphibole, epidote, quartz, chlorite, prehnite, zeolite (chiefly laumontite) and calcite in a range of combinations. Analysis of crosscutting relations leads to classification of 4 main vein types. In order of generally oldest to youngest these are: amphibole, quartz-epidote-chlorite (QEC), zeolite-prehnite (ZP), and calcite. QEC and ZP vein types may contain any combination of minerals except quartz alone; veins filled only by quartz may occur at any relative time. Macroscopic amphibole veins are rare and show no variation with depth. QEC vein densities appear to be higher (>9.3 veins m-1) in the upper 300 m of GT3A, where dikes predominate. In contrast, there are 5.5 veins m-1 at 300-400 m, where gabbros and diorites are abundant. ZP

  17. Measuring How Benchmark Assessments Affect Student Achievement. Issues & Answers. REL 2007-No. 039

    ERIC Educational Resources Information Center

    Henderson, Susan; Petrosino, Anthony; Guckenburg, Sarah; Hamilton, Stephen

    2007-01-01

    This report examines a Massachusetts pilot program for quarterly benchmark exams in middle-school mathematics, finding that program schools do not show greater gains in student achievement after a year. But that finding might reflect limited data rather than ineffective benchmark assessments. Benchmark assessments are used in many districts…

  18. Colletotrilactam A-D, novel lactams from Colletotrichum gloeosporioides GT-7, a fungal endophyte of Uncaria rhynchophylla.

    PubMed

    Wei, Bo; Yang, Zhong-Duo; Chen, Xiao-Wei; Zhou, Shuang-Yan; Yu, Hai-Tao; Sun, Jing-Yun; Yao, Xiao-Jun; Wang, Yong-Gang; Xue, Hong-Yan

    2016-09-01

    Four novel lactams, colletotrilactams A-D (1-4), along with six known compounds (5-10), were isolated from the culture broth of Colletotrichum gloeosporioides GT-7, a fungal endophyte of Uncaria rhynchophylla. The structures of these compounds were elucidated by comprehensive NMR spectroscopy. The isolates were tested for monoamine oxidase (MAO) inhibitory activity, and compound 9 showed potent MAO inhibitory activity with an IC50 value of 8.93±0.34 μg/mL, compared with 1.80±0.5 μg/mL for the iproniazid standard. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Ultracool dwarf benchmarks with Gaia primaries

    NASA Astrophysics Data System (ADS)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10-4, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ˜500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  20. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  1. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Kruger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  2. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    NASA Astrophysics Data System (ADS)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, compared with hazards from other tsunamigenic sources. This fact motivated NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models with seismic sources. To perform this validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmarking problems dealing with laboratory experiments proposed at the workshop organized by NTHMP at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and the University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  3. Benchmarking high performance computing architectures with CMS’ skeleton framework

    NASA Astrophysics Data System (ADS)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  4. Benchmarking: measuring the outcomes of evidence-based practice.

    PubMed

    DeLise, D C; Leasure, A R

    2001-01-01

    Measurement of the outcomes associated with implementation of evidence-based practice changes is becoming increasingly emphasized by multiple health care disciplines. A final step to the process of implementing and sustaining evidence-supported practice changes is that of outcomes evaluation and monitoring. The comparison of outcomes to internal and external measures is known as benchmarking. This article discusses evidence-based practice, provides an overview of outcomes evaluation, and describes the process of benchmarking to improve practice. A case study is used to illustrate this concept.

  5. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

    The sea-level load in glacial isostatic adjustment (GIA) is described by the so called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solve the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent to the various algorithms. Literature G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132 doi:10.1111/j.1365-
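
    For orientation, the simplest fixed-coastline (Farrell-Clark) form of the sea-level equation, without migrating coastlines or rotational feedback, can be written as below; this is a standard textbook form given here only as a reference point, and the notation varies among the solvers compared in the benchmark.

```latex
% Delta S = relative sea-level change, O = ocean function, Delta Phi = perturbation
% of the gravity potential, Delta U = radial displacement of the solid surface,
% c(t) = spatially uniform term enforcing conservation of water mass.
\[
  \Delta S(\theta,\varphi,t) \;=\;
    O(\theta,\varphi)\!\left[\frac{\Delta\Phi(\theta,\varphi,t)}{g}
      - \Delta U(\theta,\varphi,t) + c(t)\right],
\]
\[
  c(t) \;=\; -\,\frac{\Delta M_{\mathrm{ice}}(t)}{\rho_{\mathrm{w}}\,A_{\mathrm{o}}}
    \;-\; \frac{1}{A_{\mathrm{o}}}
    \int_{\mathrm{oceans}}\!\left[\frac{\Delta\Phi}{g}-\Delta U\right]\mathrm{d}A .
\]
```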

  6. Validation of the BUGJEFF311.BOLIB, BUGENDF70.BOLIB and BUGLE-B7 broad-group libraries on the PCA-Replica (H2O/Fe) neutron shielding benchmark experiment

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-03-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the TORT-3.2 3D SN code. PCA-Replica reproduces a PWR ex-core radial geometry with alternating layers of water and steel including a pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternatively used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-B7 (ENDF/B-VII.0) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.

  7. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). Mathematical and numerical complexity for both the tool itself or of the specific conceptual model can increase rapidly. Therefore, numerical verification of such type of models is a prerequisite for guaranteeing reliability and confidence and qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench -Subsurface Environmental Simulation Benchmarking- workshop was held in Berkeley (USA) followed by four other ones. The objective is to benchmark subsurface environmental simulation models and methods with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for pure mathematical reasons. Another important feature is the tiered approach within a benchmark with the definition of a single principle problem and different sub problems. The latter typically benchmarked individual or simplified processes (e.g. inert solute transport, simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved into a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of those type of models for different

  8. The Army Pollution Prevention Program: Improving Performance Through Benchmarking.

    DTIC Science & Technology

    1995-06-01

    This report investigates the feasibility of using benchmarking as a method for improving performance in the Army pollution prevention program, and examines to what degree the Army should integrate benchmarking with other quality management tools to support the pollution prevention program.

  9. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  10. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement, which have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A huge number of changes in operational practice and also in achieved annual savings can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on one hand, on a well conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  11. Benchmarking of Neutron Flux Parameters at the USGS TRIGA Reactor in Lakewood, Colorado

    NASA Astrophysics Data System (ADS)

    Alzaabi, Osama E.

    The USGS TRIGA Reactor (GSTR) located at the Denver Federal Center in Lakewood, Colorado provides opportunities for Colorado School of Mines students to do experimental research in the field of neutron activation analysis. The scope of this thesis is to obtain precise knowledge of the neutron flux parameters at the GSTR. The Colorado School of Mines Nuclear Physics group intends to develop several research projects at the GSTR, which require precise knowledge of the neutron fluxes and energy distributions in several irradiation locations. The fuel burn-up of the new GSTR fuel configuration and the thermal neutron flux of the core were recalculated since the GSTR core configuration had been changed with the addition of two new fuel elements. Therefore, an MCNP software package was used to incorporate the burn-up of reactor fuel and to determine the neutron flux at different irradiation locations and at the flux monitoring bores. These simulation results were compared with neutron activation analysis results using activated diluted gold wires. A well calibrated and stable germanium detector setup as well as fourteen samplers were designed and built to achieve accuracy in the measurement of the neutron flux. Furthermore, the flux monitoring bores of the GSTR core were used for the first time to measure the neutron flux experimentally for comparison with the MCNP simulations. In addition, International Atomic Energy Agency (IAEA) standard materials were used along with USGS national standard materials in a previously well calibrated irradiation location to benchmark the simulations, germanium detector calibration and sample measurements against international standards.
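
    Flux determination from an activated dilute gold wire follows the standard one-group activation relation: the activity at the end of irradiation, per Au-197 nucleus, gives the flux once the capture cross section and the saturation factor are known. The sketch below assumes that textbook relation with no self-shielding or epithermal correction and nominal Au-197/Au-198 nuclear data; the function name and the example numbers are illustrative, not taken from the thesis.

```python
import math

def thermal_flux_from_gold(activity_eoi_bq, wire_mass_g, irradiation_time_s,
                           sigma_barn=98.65, half_life_s=2.695 * 24 * 3600,
                           gold_molar_mass=196.97):
    """One-group thermal flux (n/cm^2/s) from the Au-198 activity measured
    at the end of irradiation of a dilute gold wire."""
    avogadro = 6.022e23
    n_atoms = wire_mass_g / gold_molar_mass * avogadro     # Au-197 target nuclei
    sigma_cm2 = sigma_barn * 1e-24                          # barn -> cm^2
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * irradiation_time_s)
    return activity_eoi_bq / (n_atoms * sigma_cm2 * saturation)

# Illustrative numbers only: 1 mg wire, 1 h irradiation, 5 kBq at end of irradiation.
print(f"{thermal_flux_from_gold(5e3, 1e-3, 3600):.3e} n/cm^2/s")
```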

  12. Numerical modeling of the radionuclide water pathway with HYDRUS and comparison with the IAEA model of SR 44.

    PubMed

    Merk, Rainer

    2012-02-01

    This study depicts a theoretical experiment in which the radionuclide transport through the porous material of a landfill consisting of concrete rubble (e.g., from the decommissioning of nuclear power plants) and the subsequent migration through the vadose zone and aquifer to a model well is calculated by means of the software HYDRUS-1D (Simunek et al., 2008). The radionuclides originally contained within the rubble become dissolved due to leaching caused by infiltrated rainwater. The resulting well-water contamination (in Bq/L) is calculated numerically as a function of time and location and compared with the outcome of a simplified analytic model for the groundwater pathway published by the IAEA (2005). Identical model parameters are considered. The main objective of the present work is to evaluate the predictive capacity of the more simple IAEA model using HYDRUS-1D as a reference. For most of the radionuclides considered (e.g., ¹²⁹I, and ²³⁹Pu), results from applying the IAEA model were found to be comparable to results from the more elaborate HYDRUS modeling, provided the underlying parameter values are comparable. However, the IAEA model appears to underestimate the effects resulting from, for example, high nuclide mobility, short half-life, or short-term variations in the water infiltration. The present results indicate that the IAEA model is suited for screening calculations and general recommendation purposes. However, the analysis of a specific site should be accompanied by detailed HYDRUS computer simulations. In all models considered, the calculation outcome largely depends on the choice of the sorption parameter K(d). Copyright © 2011 Elsevier Ltd. All rights reserved.
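
    The sensitivity of both models to the sorption parameter K(d) enters through the usual linear-isotherm retardation factor, which scales the contaminant velocity relative to the water velocity. A minimal sketch of that standard relationship, with purely illustrative parameter values rather than values from the study, is given below.

```python
def retardation_factor(bulk_density_g_cm3, kd_cm3_g, water_content):
    """Linear-isotherm retardation factor R = 1 + rho_b * Kd / theta."""
    return 1.0 + bulk_density_g_cm3 * kd_cm3_g / water_content

def nuclide_velocity(water_velocity_m_yr, retardation):
    """Advective velocity of a sorbing radionuclide."""
    return water_velocity_m_yr / retardation

# Illustrative values only: sandy aquifer, Kd = 5 cm^3/g, porosity 0.3.
R = retardation_factor(bulk_density_g_cm3=1.6, kd_cm3_g=5.0, water_content=0.3)
print(R, nuclide_velocity(10.0, R))
```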

  13. 7 CFR 1469.7 - Benchmark condition inventory and conservation stewardship plan.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Benchmark condition inventory and conservation...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS CONSERVATION SECURITY PROGRAM General Provisions § 1469.7 Benchmark condition inventory and conservation stewardship...

  14. Simulation-based comprehensive benchmarking of RNA-seq aligners

    PubMed Central

    Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R

    2018-01-01

    Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783

  15. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    NASA Astrophysics Data System (ADS)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no way to measure the quality of such optimizations fairly. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today’s benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark takes the temporal dependency of user interaction into account. The main focus is to measure the adaptability of a database management system under shifting workloads. We give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
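
    A minimal sketch of the idea of a shifting workload follows: the mix of request classes drifts smoothly over the day instead of forming a homogeneous stream. The query classes and diurnal weights are invented for illustration; the benchmark described above derives its patterns from mined access logs of the real eLearning system.

```python
# Sketch: generating a shifting request mix for a web-information-system workload.
# The query classes and the smooth diurnal weighting are assumptions for illustration;
# the benchmark described above derives its patterns from real eLearning access logs.
import math
import random

QUERY_CLASSES = ["browse_course", "submit_quiz", "download_material", "admin_report"]

def class_weights(hour_of_day):
    """Smoothly shift the request mix over a 24-hour cycle."""
    phase = 0.5 * (1 + math.sin((hour_of_day - 6) / 24 * 2 * math.pi))  # ranges 0..1
    return [2.0 + 3.0 * phase,      # interactive browsing grows with the cycle
            0.5 + 1.5 * phase,      # quiz submissions follow browsing
            1.0 + 1.0 * phase,      # downloads follow browsing
            1.5 - 1.0 * phase]      # batch/admin reporting moves the other way

def generate(hours=24, requests_per_hour=100, seed=7):
    """Return a list of request types whose mix drifts hour by hour."""
    random.seed(seed)
    workload = []
    for h in range(hours):
        workload += random.choices(QUERY_CLASSES, weights=class_weights(h), k=requests_per_hour)
    return workload

print(generate(hours=2, requests_per_hour=5))
```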

  16. Evaluating the Effect of Labeled Benchmarks on Children's Number Line Estimation Performance and Strategy Use.

    PubMed

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefited from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that

  17. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
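
    The sketch below shows a typical overlap-based success measure of the kind used in tracking benchmarks: per-frame intersection-over-union between predicted and ground-truth boxes, thresholded into a success rate. The box format and the 0.5 threshold are assumptions for illustration, not necessarily the exact protocol of the benchmark above.

```python
# Sketch: overlap-based success scoring as commonly used in tracking benchmarks.
# A per-frame intersection-over-union is thresholded to get a success rate;
# the box format (x, y, w, h) and the 0.5 threshold are assumptions.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(predicted, ground_truth, threshold=0.5):
    hits = sum(iou(p, g) >= threshold for p, g in zip(predicted, ground_truth))
    return hits / len(ground_truth)

gt =   [(10, 10, 40, 40), (12, 12, 40, 40), (14, 14, 40, 40)]
pred = [(12, 10, 40, 40), (30, 30, 40, 40), (15, 15, 40, 40)]
print(success_rate(pred, gt))   # 2 of 3 frames above the 0.5 overlap threshold
```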

  18. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-11-23

    In 2012, CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures, machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  19. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
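
    A minimal sketch of a nonparametric BMD calculation in this spirit follows: a weighted pool-adjacent-violators fit gives a monotone dose-response estimate, which is then inverted at the dose producing a 10% extra risk. The dose-response data and the linear interpolation step are illustrative assumptions, not the estimator analyzed in the paper.

```python
# Sketch: a nonparametric BMD estimate via isotonic regression (pool-adjacent-violators).
# Doses, counts, and the 10% benchmark response are made-up illustration values;
# extra risk is defined as (P(d) - P(0)) / (1 - P(0)).
def pava(props, weights):
    """Weighted pool-adjacent-violators: returns a nondecreasing fit to props."""
    vals, wts, sizes = [], [], []
    for p, w in zip(props, weights):
        vals.append(p); wts.append(w); sizes.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w_new = wts[-2] + wts[-1]
            v_new = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w_new
            vals[-2:] = [v_new]; wts[-2:] = [w_new]; sizes[-2:] = [sizes[-2] + sizes[-1]]
    fit = []
    for v, s in zip(vals, sizes):
        fit += [v] * s
    return fit

def bmd(doses, affected, n, bmr=0.10):
    """Dose at which the monotone fit reaches extra risk = bmr (linear interpolation)."""
    p = pava([a / m for a, m in zip(affected, n)], n)
    target = p[0] + bmr * (1 - p[0])
    for (d0, p0), (d1, p1) in zip(zip(doses, p), zip(doses[1:], p[1:])):
        if p0 <= target <= p1 and p1 > p0:
            return d0 + (target - p0) / (p1 - p0) * (d1 - d0)
    return None

print(bmd([0, 10, 50, 100, 200], [1, 2, 5, 12, 20], [50, 50, 50, 50, 50]))
```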

  1. The evolution and impact of the International Atomic Energy Agency (IAEA) program on radiation and tissue banking in Asia and the Pacific region.

    PubMed

    Morales Pedraza, Jorge; Phillips, Glyn O

    2009-05-01

    The Asia and the Pacific region was the most active region within the IAEA program on radiation and tissue banking. Most of the tissue banks in the Asia and the Pacific region were developed during the late 1980s and 1990s. The initial number of tissue banks established or supported by the IAEA program in the framework of the RCA Agreement for Asia and the Pacific region was 18. At the end of 2006, the number of tissue banks participating, in one way or another, in the IAEA program was 59. Since the beginning of the implementation of the IAEA program in the Asia and the Pacific region, 63,537 amnion and 44,282 bone allografts were produced and 57,683 amnion and 36,388 bone allografts were used. The main impact of the IAEA program in the region was the following: the establishment or consolidation of at least 59 tissue banks in 15 countries in the region (the IAEA directly supported 16 of these banks); the improvement in the quality and safety of tissues procured and produced in the region, reaching international standards; the implementation of eight national projects, two regional projects, and two interregional projects; the elaboration of International Standards, a Code of Practice, and Public Awareness Strategies; and the application of quality control and quality assurance programs in all participating tissue banks.

  2. Evaluation of Monocyte to High-Density Lipoprotein Cholesterol Ratio in the Presence and Severity of Metabolic Syndrome.

    PubMed

    Uslu, Ali Ugur; Sekin, Yahya; Tarhan, Gulten; Canakcı, Nuray; Gunduz, Mehmet; Karagulle, Mustafa

    2017-01-01

    The monocyte to high-density lipoprotein cholesterol ratio (MHR) is a systemic inflammatory marker that has recently been used quite commonly for the assessment of inflammation in cardiovascular disorders. The aim of the present study is to investigate the relevance of MHR as a marker to assess metabolic syndrome (MetS) and MetS severity in clinical practice. A total of 147 patients with MetS who were diagnosed according to National Cholesterol Education Program Adult Treatment Panel III criteria and 134 healthy controls, matched for age and gender, were included in our retrospective study. MHR values were 13.15 ± 6.07 for patients with MetS and 9.74 ± 5.24 for the control group. MHR values of the patients were found to be statistically significantly higher than those of the control group (P < .0001). MHR showed a significantly positive correlation with the severity of MetS (r = .429; P < .0001). When patients with MetS were assessed with MHR in the study population, receiver-operating characteristic curve analysis yielded a cutoff value of 9.36 with a sensitivity of 72%, a specificity of 61%, and a P value <.0001. In logistic regression analyses of MetS with several variables, MHR remained an independent predictor of MetS (95% CI: 0.721-0.945, P = .005). MHR might be a readily available and useful inflammatory marker for evaluating patients with MetS and disease severity.
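
    For readers unfamiliar with how such a cutoff is obtained, the sketch below runs a simple ROC-style scan and picks the cutoff maximizing Youden's J (sensitivity + specificity - 1). The MHR values are synthetic illustration data, not the study's patient data.

```python
# Sketch: deriving an ROC cutoff via Youden's index, as is typical for results like the
# 9.36 cutoff reported above. The MHR values below are synthetic illustration data.
def roc_best_cutoff(values_cases, values_controls):
    """Scan candidate cutoffs; return (cutoff, sensitivity, specificity) maximizing
    Youden's J = sensitivity + specificity - 1, treating value >= cutoff as positive."""
    best = (None, 0.0, 0.0)
    best_j = -1.0
    for c in sorted(set(values_cases + values_controls)):
        sens = sum(v >= c for v in values_cases) / len(values_cases)
        spec = sum(v < c for v in values_controls) / len(values_controls)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best = j, (c, sens, spec)
    return best

cases = [8.2, 9.5, 10.1, 12.4, 13.8, 15.0, 16.3, 18.9]   # hypothetical MetS patients
controls = [5.1, 6.0, 6.8, 7.5, 8.0, 8.8, 9.4, 11.2]      # hypothetical controls
print(roc_best_cutoff(cases, controls))
```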

  3. Co-ordination of the International Network of Nuclear Structure and Decay Data Evaluators; Summary Report of an IAEA Technical Meeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abriola, D.; Tuli, J.

    The IAEA Nuclear Data Section convened the 18th meeting of the International Network of Nuclear Structure and Decay Data Evaluators at the IAEA Headquarters, Vienna, 23 to 27 March 2009. This meeting was attended by 22 scientists from 14 Member States, plus IAEA staff, concerned with the compilation, evaluation and dissemination of nuclear structure and decay data. A summary of the meeting, recommendations/conclusions, data centre reports, and various proposals considered, modified and agreed by the participants are contained within this document. The International Network of Nuclear Structure and Decay Data (NSDD) Evaluators holds biennial meetings under the auspices of the IAEA, and consists of evaluation groups and data service centres in several countries. This network has the objective of providing up-to-date nuclear structure and decay data for all known nuclides by evaluating all existing experimental data. Data resulting from this international evaluation collaboration is included in the Evaluated Nuclear Structure Data File (ENSDF) and published in the journals Nuclear Physics A and Nuclear Data Sheets (NDS).

  4. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  5. A review of the International Atomic Energy Agency (IAEA) international standards for tissue banks.

    PubMed

    Morales Pedraza, Jorge; Lobo Gajiwala, Astrid; Martinez Pardo, María Esther

    2012-03-01

    The IAEA International Standards for Tissue Banks published in 2003 were based on the Standards then currently in use in the USA and the European Union, among others, and reflect the best practices associated with the operation of a tissue bank. They cover legal, ethical and regulatory controls as well as requirements and procedures from donor selection and tissue retrieval to processing and distribution of finished tissue for clinical use. The application of these standards allows tissue banks to operate with the current good tissue practice, thereby providing grafts of high quality that satisfy the national and international demand for safe and biologically useful grafts. The objective of this article is to review the IAEA Standards and recommend new topics that could improve the current version.

  6. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-12-01

    Some problems exist in the current carbon emissions benchmark setting systems. The primary consideration for industrial carbon emissions standards relates mainly to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in the current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method attempts to link direct carbon emissions with inter-industrial economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each responsibility in a practical way by measuring complex production and supply chains, so that carbon emissions can be reduced at their original sources. This method is expected to be further developed under uncertain internal and external contexts and to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
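
    The core of such an input-output method is the standard identity for embodied intensities, epsilon = f (I - A)^-1, where f holds direct emission intensities and A is the technical-coefficient matrix. A toy three-sector sketch follows; the numbers are invented for illustration and are not Beijing data.

```python
# Sketch: embodied (direct + indirect) emission intensities from input-output analysis,
# as the benchmark-setting method above proposes. The 3-sector table is a toy example.
import numpy as np

# Technical coefficients A[i, j]: input from sector i per unit of output of sector j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.20],
              [0.05, 0.10, 0.10]])
direct_intensity = np.array([2.0, 0.5, 0.2])   # direct emissions per unit output (tCO2/unit)

# Embodied intensity epsilon = f (I - A)^-1 captures the whole supply chain.
leontief_inverse = np.linalg.inv(np.eye(3) - A)
embodied_intensity = direct_intensity @ leontief_inverse
print(embodied_intensity)   # each value >= the corresponding direct intensity
```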

  7. Benchmarking health IT among OECD countries: better data for better policy

    PubMed Central

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    Objective: To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. Materials and methods: The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. Results: The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Discussion: Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. Conclusions: As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this. PMID:23721983

  8. Benchmarking health IT among OECD countries: better data for better policy.

    PubMed

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this.

  9. Performance Against WELCOA's Worksite Health Promotion Benchmarks Across Years Among Selected US Organizations.

    PubMed

    Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L

    2018-05-01

    The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed a tool to assess WHP with its 100-item WWC, which represents WELCOA's 7 performance benchmarks. The study setting was workplaces. This study includes a convenience sample of organizations that completed the checklist from 2008 to 2015. The sample size was 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies, mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent year to year. Across all years, the benchmarks on which organizations performed lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest-scoring benchmarks. In an era marked by economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.

  10. Staffing benchmarks for histology laboratories.

    PubMed

    Buesa, René J

    2010-06-01

    This article summarizes annual workloads for staff positions and work flow productivity (WFP) values from 247 human pathology, 31 veterinary, and 35 forensic histology laboratories (histolabs). There are single summaries for veterinary and forensic histolabs, but the data from human pathology are divided into 2 groups because of statistically significant differences between those from Spain and 6 Hispano American countries (SpHA) and the rest from the United States and 17 other countries. The differences reflect the way the work is organized, but the histotechnicians and histotechnologists (histotechs) from SpHA have the same task productivity levels as those from any other country (Buesa RJ. Productivity standards for histology laboratories. [YADPA 50,552]). The information is also segregated by groups of histolabs with increasing workloads; this aspect also showed statistical differences. The information from human pathology histolabs other than those from SpHA were used to calculate staffing annual benchmarks for pathologists (from 3700 to 6500 cases depending on the histolab annual workload), pathology assistants (20,000 cases), staff histotechs (9900 blocks), cutting histotechs (15,000 blocks), histotechs doing special procedures (9500 slides if done manually or 15,000 slides with autostainers), dieners (100 autopsies), laboratory aides and transcriptionists (15,000 cases each), and secretaries (20,000 cases). There are also recommendations about workload limits for supervisory staff (lead techs and supervisors) and when neither is required. Each benchmark was related with the productivity of the different tasks they include (Buesa RJ. Productivity standards for histology laboratories. [YADPA 50,552]) to calculate the hours per year required to complete them. The relationship between workload and benchmarks allows the director of pathology to determine the staff needed for the efficient operation of the histolab.
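
    As a worked example of using such benchmarks, the sketch below converts a hypothetical annual workload into full-time-equivalent estimates by dividing workload by the benchmark for each role and rounding up. The 5,000-case pathologist figure is an assumed midpoint of the 3,700-6,500 range quoted above; all workloads are invented.

```python
# Sketch: turning the annual staffing benchmarks listed above into head-count estimates
# for a hypothetical histolab workload. FTEs are simply workload / benchmark, rounded up.
import math

BENCHMARKS = {                      # annual workload per full-time position (from the summary above)
    "pathologist": 5000,            # cases (assumed midpoint of the 3700-6500 range)
    "pathology_assistant": 20000,   # cases
    "staff_histotech": 9900,        # blocks
    "cutting_histotech": 15000,     # blocks
    "laboratory_aide": 15000,       # cases
    "secretary": 20000,             # cases
}

def required_fte(annual_cases, annual_blocks):
    workload = {
        "pathologist": annual_cases,
        "pathology_assistant": annual_cases,
        "staff_histotech": annual_blocks,
        "cutting_histotech": annual_blocks,
        "laboratory_aide": annual_cases,
        "secretary": annual_cases,
    }
    return {role: math.ceil(workload[role] / BENCHMARKS[role]) for role in BENCHMARKS}

print(required_fte(annual_cases=30000, annual_blocks=90000))
```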

  11. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
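
    The correlation part of such a sampling-based study can be sketched as below: each sampled input is correlated with a response across the sample set. The toy centerline-temperature model, parameter names, and distributions are assumptions standing in for the BISON outputs and Dakota sample design.

```python
# Sketch: the correlation step of a sampling-based sensitivity study like the one above.
# Pearson and Spearman coefficients are computed between each sampled input and a response;
# the synthetic model below stands in for the BISON fuel-performance output.
import numpy as np

rng = np.random.default_rng(0)
n = 300
inputs = {
    "gap_thickness": rng.normal(1.0, 0.05, n),
    "fuel_conductivity": rng.normal(1.0, 0.10, n),
    "linear_power": rng.normal(1.0, 0.03, n),
}
# Toy response: centerline temperature rises with power and gap, falls with conductivity.
response = (800 * inputs["linear_power"] * (1 + 0.3 * (inputs["gap_thickness"] - 1))
            / inputs["fuel_conductivity"] + rng.normal(0, 5, n))

def spearman(x, y):
    """Spearman coefficient computed as the Pearson correlation of the ranks."""
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

for name, x in inputs.items():
    pearson = np.corrcoef(x, response)[0, 1]
    print(f"{name:18s}  Pearson {pearson:+.2f}   Spearman {spearman(x, response):+.2f}")
```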

  12. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  13. Processor Emulator with Benchmark Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  14. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    EPA Pesticide Factsheets

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
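
    The arithmetic behind an EqP-style benchmark can be sketched as follows: the interstitial-water concentration of a nonionic organic is estimated from the organic-carbon-normalized sediment concentration and Koc, then compared to a water-only effect concentration as toxic units. All chemical properties and concentrations below are illustrative assumptions, not values from the ESB documents.

```python
# Sketch: the equilibrium-partitioning arithmetic behind such sediment benchmarks.
# All numbers below are illustrative assumptions, not values from the ESB documents.
def interstitial_water_conc(c_sed_ug_g, f_oc, log_koc):
    """C_iw [ug/L] ~= C_sed / (f_oc * Koc), with C_sed in ug/g dry wt and Koc in L/kg OC."""
    koc = 10 ** log_koc
    return c_sed_ug_g * 1000.0 / (f_oc * koc)   # *1000 converts ug/g to ug/kg

def toxic_units(c_sed_ug_g, f_oc, log_koc, effect_conc_ug_L):
    """Estimated interstitial-water concentration expressed against a water-only effect level."""
    return interstitial_water_conc(c_sed_ug_g, f_oc, log_koc) / effect_conc_ug_L

# Hypothetical sediment: 12 ug/g of a chemical with log Koc = 5.0, 2% organic carbon,
# and a water-only chronic effect concentration of 10 ug/L.
tu = toxic_units(12.0, 0.02, 5.0, 10.0)
print(f"toxic units = {tu:.2f} -> {'likely' if tu > 1 else 'unlikely'} to cause adverse effects")
```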

  15. Benchmarking in the Two-Year Public Postsecondary Sector: A Learning Process

    ERIC Educational Resources Information Center

    Mitchell, Jennevieve

    2015-01-01

    The recession prompted reflection on how resource allocation decisions contribute to the performance of community colleges in the United States. Private benchmarking initiatives, most notably those established by the National Higher Education Benchmarking Institute, can only partially begin to address this question. Empirical and financial…

  16. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Mark; Brown, Jed; Shalf, John

    2014-05-05

    This document provides an overview of the benchmark HPGMG for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background of the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  17. Do Medicare Advantage Plans Minimize Costs? Investigating the Relationship Between Benchmarks, Costs, and Rebates.

    PubMed

    Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart

    2017-12-01

    Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. To examine how well the current system encourages MA plans to bid their lowest cost by examining the relationship between costs and bonuses (rebates) and the benchmarks Medicare uses in determining plan payments. Regression analysis using 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.
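
    A simplified sketch of the kind of regression behind the reported 32-cent cost response follows: plan costs are regressed on benchmarks with a market control via ordinary least squares. The data are simulated for illustration; the study itself uses 2015 HMO and local PPO plan data with richer controls.

```python
# Sketch: a plan-level regression of costs on benchmarks with a control variable,
# in the spirit of the finding above. Data here are simulated; the real analysis uses
# 2015 MA plan data and additional market and plan controls.
import numpy as np

rng = np.random.default_rng(1)
n = 500
benchmark = rng.normal(850, 60, n)          # hypothetical $/member-month benchmarks
market_controls = rng.normal(0, 1, n)       # stand-in for input prices and other controls
cost = 300 + 0.32 * benchmark + 15 * market_controls + rng.normal(0, 20, n)

# Ordinary least squares: cost ~ intercept + benchmark + control.
X = np.column_stack([np.ones(n), benchmark, market_controls])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(f"estimated cost response to a $1 benchmark increase: ${coef[1]:.2f}")
```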

  18. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  19. Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration

    PubMed Central

    Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.

    2012-01-01

    Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center perfusion-focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus to report quality indicators (QI) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37–96% of procedures, with a benchmark value of 90%; the arterial outlet temperature QI occurred in 16–98% of procedures, with a benchmark of 94%; and the arterial pCO2 QI occurred in 21–91%, with a benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
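
    The Achievable Benchmark of Care calculation can be sketched as below: centers are ranked by an adjusted performance fraction, and the best-performing centers covering at least 10% of all cases define the benchmark rate. The adjusted fraction (x + 1)/(n + 2) and the center data are assumptions for illustration rather than the collaboration's exact implementation.

```python
# Sketch: an Achievable Benchmark of Care (ABC)-style calculation of the kind used above.
# Centers are ranked by an adjusted performance fraction and the best performers covering
# at least 10% of all cases define the benchmark. Center data are made up for illustration.
def abc_benchmark(centers, min_fraction=0.10):
    """centers: list of (cases_meeting_QI, total_cases) per center."""
    total_cases = sum(n for _, n in centers)
    # Adjusted performance fraction (x + 1) / (n + 2) avoids rewarding tiny denominators.
    ranked = sorted(centers, key=lambda c: (c[0] + 1) / (c[1] + 2), reverse=True)
    top_x = top_n = 0
    for x, n in ranked:
        top_x += x
        top_n += n
        if top_n >= min_fraction * total_cases:
            break
    return top_x / top_n

centers = [(950, 1000), (860, 900), (420, 600), (700, 1200), (300, 800)]
print(f"ABC benchmark: {abc_benchmark(centers):.1%}")
```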

  20. Benchmarking in health care: using the Internet to identify resources.

    PubMed

    Lingle, V A

    1996-01-01

    Benchmarking is a quality improvement tool that is increasingly being applied to the health care field and to the libraries within that field. Using mostly resources accessible at no charge through the Internet, a collection of information was compiled on benchmarking and its applications. Sources could be identified in several formats, including books, journals and articles, multi-media materials, and organizations.