Science.gov

Sample records for integral benchmark archive

  1. Shielding Integral Benchmark Archive and Database (SINBAD)

    SciTech Connect

    Kirk, Bernadette Lugue; Grove, Robert E; Kodeli, I.; Sartori, Enrico; Gulliford, J.

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  2. Shielding integral benchmark archive and database (SINBAD)

    SciTech Connect

    Kirk, B.L.; Grove, R.E.; Kodeli, I.; Gulliford, J.; Sartori, E.

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  3. Recent accelerator experiments updates in Shielding INtegral Benchmark Archive Database (SINBAD)

    NASA Astrophysics Data System (ADS)

    Kodeli, I.; Sartori, E.; Kirk, B.

    2006-06-01

    SINBAD is an internationally established set of radiation shielding and dosimetry data related to experiments relevant to reactor shielding, fusion blanket neutronics and accelerator shielding. In addition to the characterization of the radiation source, it describes shielding materials and instrumentation and the relevant detectors. The experimental results, be they dose, reaction rates or unfolded spectra, are presented in tabular ASCII form that can easily be exported to different computer environments for further use. Most sets in SINBAD also contain the computer model used for the interpretation of the experiment and, where available, results from uncertainty analysis. The set of primary documents used for the benchmark compilation and evaluation is provided in computer-readable form. SINBAD is available free of charge from RSICC and from the NEA Data Bank.

  4. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  5. The philosophy of benchmark testing a standards-based picture archiving and communications system.

    PubMed

    Richardson, N E; Thomas, J A; Lyche, D K; Romlein, J; Norton, G S; Dolecek, Q E

    1999-05-01

    The Department of Defense issued its requirements for a Digital Imaging Network-Picture Archiving and Communications System (DIN-PACS) in a Request for Proposals (RFP) to industry in January 1997, with subsequent contracts being awarded in November 1997 to the Agfa Division of Bayer and IBM Global Government Industry. The Government's technical evaluation process consisted of evaluating a written technical proposal as well as conducting a benchmark test of each proposed system at the vendor's test facility. The purpose of benchmark testing was to evaluate the performance of the fully integrated system in a simulated operational environment. The benchmark test procedures and test equipment were developed through a joint effort between the Government, academic institutions, and private consultants. Herein the authors discuss the resources required and the methods used to benchmark test a standards-based PACS. PMID:10342251

  6. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  7. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally-measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283
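    As a rough illustration of how such summary metrics are typically computed (not code from the paper), the sketch below evaluates a top-10 docking success rate and a Pearson correlation on made-up arrays standing in for the benchmark's model ranks and binding affinities.

```python
# Minimal sketch (made-up data, not from the paper): a top-N docking success
# rate and a Pearson correlation between predicted scores and experimental
# binding energies, mimicking the kind of metrics quoted in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical docking results: for each benchmark case, the ranks of its
# acceptable-quality models among the predictions (empty array = no hit).
acceptable_ranks = [rng.integers(1, 200, size=rng.integers(0, 4)) for _ in range(55)]

top_n = 10
success = sum(1 for ranks in acceptable_ranks if len(ranks) and ranks.min() <= top_n)
print(f"Top-{top_n} success rate: {success / len(acceptable_ranks):.0%}")

# Hypothetical affinity prediction: correlate predicted scores with experiment.
dg_exp = rng.normal(-10.0, 3.0, size=35)           # "experimental" binding energies
dg_pred = dg_exp + rng.normal(0.0, 3.0, size=35)   # noisy "predicted" scores
r = np.corrcoef(dg_pred, dg_exp)[0, 1]
print(f"Pearson r: {r:.2f}")
```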

  8. Benchmark integration test for the Advanced Integration Matrix (AIM)

    NASA Astrophysics Data System (ADS)

    Paul, H.; Labuda, L.

    The Advanced Integration Matrix (AIM) studies and solves systems-level integration issues for exploration missions beyond Low Earth Orbit (LEO) through the design and development of a ground-based facility for developing revolutionary integrated systems for joint human-robotic missions. This systems integration approach to addressing human capability barriers will yield validation of advanced concepts and technologies, establish baselines for further development, and help identify opportunities for system-level breakthroughs. Early ground-based testing of mission capability will identify successful system implementations and operations, hidden risks and hazards, unexpected system and operations interactions, mission mass and operational savings, and can evaluate solutions to requirements-driving questions; all of which will enable NASA to develop more effective, lower risk systems and more reliable cost estimates for future missions. This paper describes the first in the series of integration tests proposed for AIM (the Benchmark Test) which will bring in partners and technology, evaluate the study processes of the project, and develop metrics for success.

  9. Melcor benchmarking against integral severe fuel damage tests

    SciTech Connect

    Madni, I.K.

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and predictions of that data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2. This paper summarizes these analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  10. RECENT ADDITIONS OF CRITICALITY SAFETY RELATED INTEGRAL BENCHMARK DATA TO THE ICSBEP AND IRPHEP HANDBOOKS

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2009-09-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  11. Integrating distributed data archives in seismology: the European Integrated waveform Data Archives (EIDA)

    NASA Astrophysics Data System (ADS)

    Sleeman, Reinoud; Hanka, Winfried; Clinton, John; van Eck, Torild; Trani, Luca

    2013-04-01

    ORFEUS is the non-profit foundation that coordinates and promotes digital broadband seismology in Europe. Since 1987 the ORFEUS Data Center (ODC) has been its jointly funded data center. However, within the last decade we have seen an exponential growth of high-quality digital waveform data relevant for seismological and general geoscience research. In addition to the rapid expansion in number and density of broadband seismic networks, this growth is fuelled by data collected from other sensor types (strong motion, short period) and deployment types (aftershock arrays, temporary field campaigns, OBS). As a consequence, ORFEUS revised its data archiving infrastructure and organization; a major component of this is the formal establishment of the European Integrated waveform Data Archives (EIDA). Within the NERIES and NERA EC projects, GFZ has taken the lead in developing ArcLink as a tool to provide uniform access to distributed seismological waveform data archives. The new suite of software and services provides the technical basis of EIDA. To ensure that those developments will become sustainable, an EIDA group has been formed within ORFEUS. This founding group of EIDA nodes, formed in 2013, will be responsible for steering and maintaining the technical developments and organization of an effective operational distributed waveform data archive for seismology in Europe. The EIDA Founding nodes are: ODC/ORFEUS, GEOFON/GFZ/Germany, SED/Switzerland, RESIF/CNRS-INSU/France, INGV/Italy and BGR/Germany. These represent EIDA nodes that have committed themselves within ORFEUS to manage EIDA, that is, to maintain and develop EIDA into a stable, sustainable research infrastructure. This task involves a number of challenges with regard to quality and metadata maintenance, but also with regard to providing efficient and uncomplicated data access for users. This also includes effective global archive synchronization with developments within the International Federation of Digital Seismograph Networks (FDSN).

  12. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted. The future of the two projects is also outlined and discussed.

  13. Montreal Archive of Sleep Studies: an open-access resource for instrument benchmarking and exploratory research.

    PubMed

    O'Reilly, Christian; Gosselin, Nadia; Carrier, Julie; Nielsen, Tore

    2014-12-01

    Manual processing of sleep recordings is extremely time-consuming. Efforts to automate this process have shown promising results, but automatic systems are generally evaluated on private databases, not allowing accurate cross-validation with other systems. In lacking a common benchmark, the relative performances of different systems are not compared easily and advances are compromised. To address this fundamental methodological impediment to sleep study, we propose an open-access database of polysomnographic biosignals. To build this database, whole-night recordings from 200 participants [97 males (aged 42.9 ± 19.8 years) and 103 females (aged 38.3 ± 18.9 years); age range: 18-76 years] were pooled from eight different research protocols performed in three different hospital-based sleep laboratories. All recordings feature a sampling frequency of 256 Hz and an electroencephalography (EEG) montage of 4-20 channels plus standard electro-oculography (EOG), electromyography (EMG), electrocardiography (ECG) and respiratory signals. Access to the database can be obtained through the Montreal Archive of Sleep Studies (MASS) website (http://www.ceams-carsm.ca/en/MASS), and requires only affiliation with a research institution and prior approval by the applicant's local ethical review board. Providing the research community with access to this free and open sleep database is expected to facilitate the development and cross-validation of sleep analysis automation systems. It is also expected that such a shared resource will be a catalyst for cross-centre collaborations on difficult topics such as improving inter-rater agreement on sleep stage scoring. PMID:24909981

  14. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill

    2013-10-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  15. Dynamic Data Management Based on Archival Process Integration at the Centre for Environmental Data Archival

    NASA Astrophysics Data System (ADS)

    Conway, Esther; Waterfall, Alison; Pepler, Sam; Newey, Charles

    2015-04-01

    In this paper we describe a business process modelling approach to the integration of existing archival activities. We provide a high-level overview of existing practice and discuss how procedures can be extended and supported through the description of preservation state, the aim of which is to facilitate the dynamic, controlled management of scientific data through its lifecycle. The main types of archival processes considered are: • Management processes that govern the operation of an archive. These management processes include archival governance (preservation state management, selection of archival candidates and strategic management). • Operational processes that constitute the core activities of the archive which maintain the value of research assets. These operational processes are the acquisition, ingestion, deletion, generation of metadata and preservation activities. • Supporting processes, which include planning, risk analysis and monitoring of the community/preservation environment. We then proceed by describing the feasibility testing of extended risk management and planning procedures which integrate current practices. This was done through the CEDA Archival Format Audit, which inspected British Atmospheric Data Centre and National Earth Observation Data Centre archival holdings. These holdings are extensive, comprising around 2 PB of data and 137 million individual files, which were analysed and characterised in terms of format-based risk. We are then able to present an overview of the risk burden faced by a large-scale archive attempting to maintain the usability of heterogeneous environmental data sets. We conclude by presenting a dynamic data management information model that is capable of describing the preservation state of archival holdings throughout the data lifecycle. We provide discussion of the following core model entities and their relationships: • Aspirational entities, which include Data Entity definitions and their associated

  16. AITAS : Assembly Integration Test data Archiving System

    NASA Astrophysics Data System (ADS)

    Meunier, J.-C.; Madec, F.; Vigan, A.; Nowak, M.; Irdis Team

    2012-09-01

    The aim of AITAS is to automatically archive and index data acquired from an instrument during the test and validation phase. The AITAS product was built initially to fill the needs of the IRDIS-SPHERE (ESO-VLT) project to archive and organise data during the test phase. We have developed robust and secure tools to retrieve data from the acquisition workstation, build an archive and index data by keywords to provide search functionality among large amounts of data. This import of data is done automatically after setting some configuration files. In addition, APIs and a GUI client have been developed in order to retrieve data from a generic interface and use them in the test processing phase. The end user is able to select and retrieve data using any criteria listed in the metadata of the files. One main advantage of this system is that it is intrinsically generic, so that it can be used in instrument projects in astrophysical laboratories without any further modifications.

  17. Beyond Conventional Benchmarking: Integrating Ideal Visions, Strategic Planning, Reengineering, and Quality Management.

    ERIC Educational Resources Information Center

    Kaufman, Roger; Swart, William

    1995-01-01

    Discussion of quality management and approaches to organizational success focuses on benchmarking and the integration of other approaches including strategic planning, ideal visions, and reengineering. Topics include performance improvement; decision making; internal benchmarking; and quality targets for the organization, clients, and societal…

  18. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported to the nuclear data community at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm / shielding and fundamental physics benchmarks in addition to the traditional critical / subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements in addition to the critical configuration. The status of these two projects is discussed, and selected benchmarks are highlighted in this paper.

  19. Study on Integrated Pest Management for Libraries and Archives.

    ERIC Educational Resources Information Center

    Parker, Thomas A.

    This study addresses the problems caused by the major insect and rodent pests and molds and mildews in libraries and archives; the damage they do to collections; and techniques for their prevention and control. Guidelines are also provided for the development and initiation of an Integrated Pest Management program for facilities housing library…

  20. An integrated data envelopment analysis-artificial neural network approach for benchmarking of bank branches

    NASA Astrophysics Data System (ADS)

    Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa

    2016-02-01

    Efficiency and quality of services are crucial to today's banking industries. Competition in this sector has become increasingly intense as a result of rapid improvements in technology. Therefore, performance analysis of the banking sector attracts more attention these days. Even though data envelopment analysis (DEA) is a pioneering approach in the literature as an efficiency measurement tool and a means of finding benchmarks, it is unable to suggest possible future benchmarks. Its drawback is that the benchmarks it provides may still be less efficient than more advanced future benchmarks. To address this weakness, an artificial neural network is integrated with DEA in this paper to calculate the relative efficiency and more reliable benchmarks of the branches of an Iranian commercial bank. Each branch could then adopt a strategy to improve its efficiency and eliminate the causes of inefficiency based on a 5-year forecast.
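    For readers unfamiliar with DEA, the following is a minimal sketch of the classic input-oriented CCR multiplier model solved as a linear program with scipy; the branch inputs and outputs are invented, and the paper's neural-network forecasting stage is not reproduced here.

```python
# Minimal sketch (assumptions, not the paper's model): input-oriented CCR DEA
# efficiency scores via linear programming, for made-up bank branches.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 300], [30, 200], [40, 100], [20, 200], [10, 400]])  # inputs: staff, cost
Y = np.array([[100.0, 90], [150, 50], [160, 55], [80, 70], [100, 100]])  # outputs: loans, deposits
n, m = X.shape
_, s = Y.shape

def ccr_efficiency(o: int) -> float:
    """Efficiency of branch o: max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0."""
    c = np.concatenate([-Y[o], np.zeros(m)])           # minimize -u.y_o over z = [u, v]
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0 for every branch j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)   # v.x_o = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    return -res.fun

for j in range(n):
    print(f"branch {j}: efficiency = {ccr_efficiency(j):.3f}")
```

    Branches scoring 1.0 define the current efficient frontier; the forecasting stage described in the abstract would then try to anticipate how that frontier might shift.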

  1. Conflation and integration of archived geologic maps and associated uncertainties

    USGS Publications Warehouse

    Shoberg, Thomas G.

    2016-01-01

    Old, archived geologic maps are often available with little or no associated metadata. This creates special problems in terms of extracting their data to use with a modern database. This research focuses on some problems and uncertainties associated with conflating older geologic maps in regions where modern geologic maps are, as yet, non-existent as well as vertically integrating the conflated maps with layers of modern GIS data (in this case, The National Map of the U.S. Geological Survey). Ste. Genevieve County, Missouri was chosen as the test area. It is covered by six archived geologic maps constructed in the years between 1928 and 1994. Conflating these maps results in a map that is internally consistent with these six maps, is digitally integrated with hydrography, elevation and orthoimagery data, and has a 95% confidence interval useful for further data set integration.

  2. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    PubMed

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-01

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes. PMID:26339862
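    The sketch below illustrates the general idea of harvesting benchmark values from a machine-readable XML record using only the Python standard library; the element and attribute names are illustrative placeholders and do not follow the actual ThermoML schema.

```python
# Minimal sketch (illustrative only): extracting numeric property values from an
# XML record with the standard library. The tag names are placeholders and do
# NOT reflect the real ThermoML schema.
import xml.etree.ElementTree as ET

record = """
<dataset>
  <measurement compound="hexane" property="density" units="kg/m3"
               temperature_K="298.15" pressure_kPa="101.325">654.8</measurement>
  <measurement compound="ethanol" property="static_dielectric_constant"
               temperature_K="298.15" pressure_kPa="101.325">24.3</measurement>
</dataset>
"""

root = ET.fromstring(record)
benchmark_rows = []
for meas in root.iter("measurement"):
    benchmark_rows.append({
        "compound": meas.get("compound"),
        "property": meas.get("property"),
        "T_K": float(meas.get("temperature_K")),
        "value": float(meas.text),
    })

for row in benchmark_rows:
    print(row)
```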

  3. Reactor benchmarks and integral data testing and feedback into ENDF/B-VI

    SciTech Connect

    McKnight, R.D.; Williams, M.L.

    1992-11-01

    The role of integral data testing and its feedback into the ENDF/B evaluated nuclear data files are reviewed. The use of the CSEWG reactor benchmarks in the data testing process is discussed and selected results based on ENDF/B Version VI data are presented. Finally, recommendations are given to improve the implementation in future integral data testing of ENDF/B.

  4. Reactor benchmarks and integral data testing and feedback into ENDF/B-VI

    SciTech Connect

    McKnight, R.D.; Williams, M.L.

    1992-01-01

    The role of integral data testing and its feedback into the ENDF/B evaluated nuclear data files are reviewed. The use of the CSEWG reactor benchmarks in the data testing process is discussed and selected results based on ENDF/B Version VI data are presented. Finally, recommendations are given to improve the implementation in future integral data testing of ENDF/B.

  5. GOLIA: An INTEGRAL archive at INAF-IASF Milano

    NASA Astrophysics Data System (ADS)

    Paizis, A.; Mereghetti, S.; Götz, D.; Fiorini, M.; Gaber, M.; Regni Ponzeveroni, R.; Sidoli, L.; Vercellone, S.

    2013-02-01

    We present the archive of the INTEGRAL data developed and maintained at INAF-IASF Milano. The archive comprises all the public data currently available (revolutions 0026-1079, i.e., December 2002-August 2011). INTEGRAL data are downloaded from the ISDC Data Centre for Astrophysics, Geneva, on a regular basis as they become public and a customized analysis using the OSA 9.0 software package is routinely performed on the IBIS/ISGRI data. The scientific products include individual pointing images and the associated detected source lists in the 17-30, 30-50, 17-50 and 50-100 keV energy bands, as well as light-curves binned over 100 s in the 17-30 keV band for sources of interest. Dedicated scripts to handle such vast datasets and results have been developed. We make the analysis tools to build such an archive publicly available. The whole database (raw data and products) enables an easy access to the hard X-ray long-term behaviour of a large sample of sources.
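    The following is a minimal sketch of the kind of fixed-width light-curve binning described above (100 s bins), using made-up event times in place of IBIS/ISGRI data; it is not part of the OSA pipeline.

```python
# Minimal sketch (made-up data): rebinning event arrival times onto a fixed
# 100-second grid, loosely mimicking the 17-30 keV light curves described above.
import numpy as np

rng = np.random.default_rng(1)
event_times = np.sort(rng.uniform(0.0, 3000.0, size=5000))   # seconds since pointing start

bin_width = 100.0                                             # seconds
edges = np.arange(0.0, event_times.max() + bin_width, bin_width)
counts, _ = np.histogram(event_times, bins=edges)
rates = counts / bin_width                                    # counts per second

for t0, rate in zip(edges[:-1], rates):
    print(f"{t0:7.0f}-{t0 + bin_width:7.0f} s : {rate:6.2f} cts/s")
```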

  6. Improving Federal Education Programs through an Integrated Performance and Benchmarking System.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Office of the Under Secretary.

    This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…

  7. Integrated manufacturing approach to attain benchmark team performance

    NASA Astrophysics Data System (ADS)

    Chen, Shau-Ron; Nguyen, Andrew; Naguib, Hussein

    1994-09-01

    A Self-Directed Work Team (SDWT) was developed to transfer a polyimide process module from the research laboratory to our wafer fab facility for applications in IC specialty devices. The SDWT implemented processes and tools based on the integration of five manufacturing strategies for continuous improvement. These were: Leadership Through Quality (LTQ), Total Productive Maintenance (TPM), Cycle Time Management (CTM), Activity-Based Costing (ABC), and Total Employee Involvement (TEI). Utilizing these management techniques simultaneously, the team achieved six sigma control of all critical parameters, increased Overall Equipment Effectiveness (OEE) from 20% to 90%, reduced cycle time by 95%, cut polyimide manufacturing cost by 70%, and improved its overall team member skill level by 33%.

  8. Benchmarking the Integration of WAVEWATCH III Results into HAZUS-MH: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Berglund, Judith; Holland, Donald; McKellip, Rodney; Sciaudone, Jeff; Vickery, Peter; Wang, Zhanxian; Ying, Ken

    2005-01-01

    The report summarizes the results from the preliminary benchmarking activities associated with the use of WAVEWATCH III (WW3) results in the HAZUS-MH MR1 flood module. Project partner Applied Research Associates (ARA) is integrating the WW3 model into HAZUS. The current version of HAZUS-MH predicts loss estimates from hurricane-related coastal flooding by using values of surge only. Using WW3, wave setup can be included with surge. Loss estimates resulting from the use of surge-only and surge-plus-wave-setup were compared. This benchmarking study is preliminary because the HAZUS-MH MR1 flood module was under development at the time of the study. In addition, WW3 is not scheduled to be fully integrated with HAZUS-MH and available for public release until 2008.

  9. Integral Data Benchmark of HENDL2.0/MG Compared with Neutronics Shielding Experiments

    NASA Astrophysics Data System (ADS)

    Jiang, Jieqiong; Xu, Dezheng; Zheng, Shanliang; He, Zhaozhong; Hu, Yanglin; Li, Jingjing; Zou, Jun; Zeng, Qin; Chen, Mingliang; Wang, Minghuang

    2009-10-01

    HENDL2.0, the latest version of the hybrid evaluated nuclear data library, was developed based upon some evaluated data from FENDL2.1 and ENDF/B-VII. To qualify and validate the working library, an integral test for the neutron production data of HENDL2.0 was performed with a series of existing spherical shell benchmark experiments (such as V, Be, Fe, Pb, Cr, Mn, Cu, Al, Si, Co, Zr, Nb, Mo, W and Ti). These experiments were simulated numerically using HENDL2.0/MG and a home-developed code VisualBUS. Calculations were conducted with both FENDL2.1/MG and FENDL2.1/MC, which are based on a continuous-energy Monte Carlo Code MCNP/4C. By comparison and analysis of the neutron leakage spectra and the integral test, benchmark results of neutron production data are presented in this paper.

  10. PH5 for integrating and archiving different data types

    NASA Astrophysics Data System (ADS)

    Azevedo, Steve; Hess, Derick; Beaudoin, Bruce

    2016-04-01

    PH5 is IRIS PASSCAL's file organization of HDF5 used for seismic data. The extensibility and portability of HDF5 allow the PH5 format to evolve and operate on a variety of platforms and interfaces. To make PH5 even more flexible, the seismic metadata is separated from the time series data in order to achieve gains in performance as well as ease of use and to simplify user interaction. This separation affords easy updates to metadata after the data are archived without having to access waveform data. To date, PH5 has been used for integrating and archiving active source, passive source, and onshore-offshore seismic data sets with the IRIS Data Management Center (DMC). Active development to make PH5 fully compatible with FDSN web services and deliver StationXML is near completion. We are also exploring the feasibility of utilizing QuakeML for active seismic source representation. The PH5 software suite, PIC KITCHEN, comprises in-field tools that include data ingestion (e.g. RefTek format, SEG-Y, and SEG-D), meta-data management tools including QC, and a waveform review tool. These tools enable building archive-ready data in-field during active source experiments, greatly decreasing the time to produce research-ready data sets. Once archived, our online request page generates a unique web form and pre-populates much of it based on the metadata provided to it from the PH5 file. The data requester can then intuitively select the extraction parameters as well as the data subsets they wish to receive (current output formats include SEG-Y, SAC, mseed). The web interface then passes this on to the PH5 processing tools to generate the requested seismic data and e-mail the requester a link to the data set automatically as soon as the data are ready. PH5 file organization was originally designed to hold seismic time series data and meta-data from controlled source experiments using RefTek data loggers. The flexibility of HDF5 has enabled us to extend the use of PH5 in several
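    The sketch below illustrates, in a hypothetical layout that is not the real PH5 structure, how HDF5 lets lightweight metadata live alongside but separate from bulky waveform arrays, so metadata can be read or updated without touching the time series.

```python
# Minimal sketch (assumed layout, NOT the real PH5 structure): small metadata
# kept apart from bulky waveform arrays inside one HDF5 container.
import h5py
import numpy as np

with h5py.File("sketch_ph5.h5", "w") as f:
    meta = f.create_group("metadata")
    meta.attrs["experiment"] = "demo active-source survey"
    meta.create_dataset("station_codes", data=np.array([b"STA01", b"STA02"]))

    waveforms = f.create_group("waveforms")
    trace = waveforms.create_dataset(
        "STA01/DAS0001", data=np.random.randn(500_000).astype("float32"))
    trace.attrs["sample_rate_hz"] = 250.0

# Later, metadata can be inspected cheaply without loading any waveform data.
with h5py.File("sketch_ph5.h5", "r") as f:
    print(dict(f["metadata"].attrs))
    print(list(f["metadata/station_codes"][...]))
```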

  11. Solution of the WFNDEC 2015 eddy current benchmark with surface integral equation method

    NASA Astrophysics Data System (ADS)

    Demaldent, Edouard; Miorelli, Roberto; Reboud, Christophe; Theodoulidis, Theodoros

    2016-02-01

    In this paper, a numerical solution of the WFNDEC 2015 eddy current benchmark is presented. In particular, the Surface Integral Equation (SIE) method has been employed to numerically solve the benchmark problem. The SIE method represents an effective and efficient alternative to standard numerical solvers such as the Finite Element Method (FEM) when electromagnetic fields need to be calculated in problems involving homogeneous media. The formulation of the SIE method allows the electromagnetic problem to be solved by meshing only the surface of the media instead of the complete media volume, as done in FEM. The surface meshing makes it possible to describe the problem with a smaller number of unknowns with respect to FEM. This property translates directly into an obvious gain in terms of CPU time efficiency.
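    A back-of-the-envelope comparison (with crude assumptions about a uniform element size) shows why surface meshing yields far fewer unknowns than volume meshing for a homogeneous part:

```python
# Back-of-the-envelope sketch (crude assumptions): element counts when only the
# surface of a homogeneous sphere is meshed (SIE-like) versus its whole volume
# (FEM-like), for the same target element size h.
import math

radius = 0.05                                # m, e.g. a small conducting part
for h in (5e-3, 2e-3, 1e-3):                 # target element size in metres
    surface_elems = 4.0 * math.pi * radius**2 / h**2
    volume_elems = (4.0 / 3.0) * math.pi * radius**3 / h**3
    print(f"h = {h*1e3:4.1f} mm : ~{surface_elems:10.0f} surface elements "
          f"vs ~{volume_elems:12.0f} volume elements")
```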

  12. Acceptance testing of integrated picture archiving and communications systems.

    PubMed

    Lewis, T E; Horton, M C; Kinsey, T V; Shelton, P D

    1999-05-01

    An integrated picture archiving and communication system (PACS) is a large investment in both money and resources. With all of the components and systems contained in the PACS, a methodical set of protocols and procedures must be developed to test all aspects of the PACS within the short time allocated for contract compliance. For the Department of Defense (DoD), acceptance testing (AT) sets the protocols and procedures. Broken down into modules and test procedures that group like components and systems, the AT protocol maximizes the efficiency and thoroughness of testing all aspects of an integrated PACS. A standardized and methodical protocol reduces the probability of functionality or performance limitations being overlooked. The AT protocol allows complete PACS testing within the 30 days allocated by the digital imaging network (DIN)-PACS contract. Shortcomings identified during the AT phase allow for resolution before complete acceptance of the system. This presentation will describe the evolution of the process, the components of the DoD AT protocol, the benefits of the AT process, and its significance to the successful implementation of a PACS. This is a US government work. There are no restrictions on its use. PMID:10342200

  13. An Integrative Approach to Archival Outreach: A Case Study of Becoming Part of the Constituents' Community

    ERIC Educational Resources Information Center

    Rettig, Patricia J.

    2007-01-01

    Archival outreach, an essential activity for any repository, should focus on what constituents are already doing and capitalize on existing venues related to the repository's subject area. The Water Resources Archive at Colorado State University successfully undertook this integrative approach to outreach. Detailed in the article are outreach…

  14. Progress of Integral Experiments in Benchmark Fission Assemblies for a Blanket of Hybrid Reactor

    NASA Astrophysics Data System (ADS)

    Liu, R.; Zhu, T. H.; Yan, X. S.; Lu, X. X.; Jiang, L.; Wang, M.; Han, Z. J.; Wen, Z. W.; Lin, J. F.; Yang, Y. W.

    2014-04-01

    This article describes recent progress in integral neutronics experiments in benchmark fission assemblies for the blanket design of a hybrid reactor. The spherical assemblies consist of three layers of depleted uranium shells and several layers of polyethylene shells. With the D-T neutron source placed at the centre of the assemblies, the plutonium production rates, uranium fission rates and leakage neutron spectra are measured. The measured results are compared to those calculated with the MCNP-4B code and ENDF/B-VI library data.

  15. Integration experiences and performance studies of A COTS parallel archive systems

    SciTech Connect

    Chen, Hsing-bung; Scott, Cody; Grider, Gary; Torres, Aaron; Turley, Milton; Sanchez, Kathy; Bremer, John

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future

  16. Integration experiments and performance studies of a COTS parallel archive system

    SciTech Connect

    Chen, Hsing-bung; Scott, Cody; Grider, Gary; Torres, Aaron; Turley, Milton; Sanchez, Kathy; Bremer, John

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of
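    The core idea of moving one large file through several streams in parallel can be sketched as follows; this toy example uses a thread pool and ordinary output files standing in for tape streams, and is not the LANL/IBM software described above.

```python
# Minimal sketch (illustration only): split one large file into fixed-size
# chunks and copy the chunks concurrently, with plain output files standing in
# for parallel tape streams.
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 64 * 1024 * 1024   # 64 MiB per "tape stream"

def copy_chunk(src_path: str, dst_path: str, offset: int, length: int) -> str:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        src.seek(offset)
        dst.write(src.read(length))
    return dst_path

def parallel_archive(src_path: str, dst_dir: str, workers: int = 4) -> list[str]:
    size = os.path.getsize(src_path)
    os.makedirs(dst_dir, exist_ok=True)
    jobs = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for i, offset in enumerate(range(0, size, CHUNK_SIZE)):
            dst = os.path.join(dst_dir, f"chunk_{i:05d}")
            length = min(CHUNK_SIZE, size - offset)
            jobs.append(pool.submit(copy_chunk, src_path, dst, offset, length))
    return [j.result() for j in jobs]
```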

  17. Picture Archiving and Communication System (PACS) implementation, integration & benefits in an integrated health system.

    PubMed

    Mansoori, Bahar; Erhard, Karen K; Sunshine, Jeffrey L

    2012-02-01

    The availability of the Picture Archiving and Communication System (PACS) has revolutionized the practice of radiology in the past two decades and has shown to eventually increase productivity in radiology and medicine. PACS implementation and integration may bring along numerous unexpected issues, particularly in a large-scale enterprise. To achieve a successful PACS implementation, identifying the critical success and failure factors is essential. This article provides an overview of the process of implementing and integrating PACS in a comprehensive health system comprising an academic core hospital and numerous community hospitals. Important issues are addressed, touching all stages from planning to operation and training. The impact of an enterprise-wide radiology information system and PACS at the academic medical center (four specialty hospitals), in six additional community hospitals, and in all associated outpatient clinics as well as the implications on the productivity and efficiency of the entire enterprise are presented. PMID:22212425

  18. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling a systematic progress in the understanding and high-fidelity predictions of airframe noise via collaborative investigations that integrate state of the art computational fluid dynamics, computational aeroacoustics, and in depth, holistic, and multifacility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.

  19. Integral Reactor Physics Benchmarks - the International Criticality Safety Benchmark Evaluation Project (icsbep) and the International Reactor Physics Experiment Evaluation Project (irphep)

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair; Nigg, David W.; Sartori, Enrico

    2006-04-01

    Since the beginning of the nuclear industry, thousands of integral experiments related to reactor physics and criticality safety have been performed. Many of these experiments can be used as benchmarks for validation of calculational techniques and improvements to nuclear data. However, many were performed in direct support of operations and thus were not performed with a high degree of quality assurance and were not well documented. For years, common validation practice included the tedious process of researching integral experiment data scattered throughout journals, transactions, reports, and logbooks. Two projects have been established to help streamline the validation process and preserve valuable integral data: the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP). The two projects are closely coordinated to avoid duplication of effort and to leverage limited resources to achieve a common goal. A short history of these two projects and their common purpose are discussed in this paper. Accomplishments of the ICSBEP are highlighted and the future of the two projects outlined.

  20. ‘Wasteaware’ benchmark indicators for integrated sustainable waste management in cities

    SciTech Connect

    Wilson, David C.; Rodic, Ljiljana; Cowing, Michael J.; Velis, Costas A.; Whiteman, Andrew D.; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara

    2015-01-15

    Highlights: • Solid waste management (SWM) is a key utility service, but data is often lacking. • Measuring their SWM performance helps a city establish priorities for action. • The Wasteaware benchmark indicators: measure both technical and governance aspects. • Have been developed over 5 years and tested in more than 50 cities on 6 continents. • Enable consistent comparison between cities and countries and monitoring progress. - Abstract: This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city’s performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat’s solid waste management in the World’s cities. The comprehensive analytical framework of a city’s solid waste management system is divided into two overlapping ‘triangles’ – one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised ‘Wasteaware’ set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both ‘hard’ physical components and ‘soft’ governance aspects; and in prioritising ‘next steps’ in developing a city’s solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators

  1. Fault tolerance techniques to assure data integrity in high-volume PACS image archives

    NASA Astrophysics Data System (ADS)

    He, Yutao; Huang, Lu J.; Valentino, Daniel J.; Wingate, W. Keith; Avizienis, Algirdas

    1995-05-01

    Picture archiving and communication systems (PACS) perform the systematic acquisition, archiving, and presentation of large quantities of radiological image and text data. In the UCLA Radiology PACS, for example, the volume of image data archived currently exceeds 2500 gigabytes. Furthermore, the distributed heterogeneous PACS is expected to have near real-time response, be continuously available, and assure the integrity and privacy of patient data. The off-the-shelf subsystems that compose the current PACS cannot meet these expectations; therefore fault tolerance techniques had to be incorporated into the system. This paper reports our first-step efforts toward this goal and is organized as follows: first, we discuss data integrity and identify fault classes under the PACS operational environment; then we describe auditing and accounting schemes developed for error detection and analyze the operational data collected. Finally, we outline plans for future research.
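    One common building block of such error-detection schemes is a content-checksum audit: record a digest for every archived object at ingest and re-verify it later. The sketch below shows this generic technique only; it does not reproduce the UCLA auditing and accounting design.

```python
# Minimal sketch (generic technique, not the UCLA design): record a checksum for
# each archived file at ingest time, then re-verify the archive later to detect
# silent corruption or missing files.
import hashlib
import json
import os

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def build_manifest(archive_dir: str, manifest_path: str) -> None:
    manifest = {}
    for root, _, files in os.walk(archive_dir):
        for name in files:
            path = os.path.join(root, name)
            manifest[os.path.relpath(path, archive_dir)] = sha256_of(path)
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)

def audit(archive_dir: str, manifest_path: str) -> list[str]:
    with open(manifest_path) as f:
        manifest = json.load(f)
    problems = []
    for rel_path, expected in manifest.items():
        path = os.path.join(archive_dir, rel_path)
        if not os.path.exists(path):
            problems.append(f"MISSING  {rel_path}")
        elif sha256_of(path) != expected:
            problems.append(f"MODIFIED {rel_path}")
    return problems
```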

  2. MAO NAS of Ukraine Plate Archives: Towards the WFPDB Integration

    NASA Astrophysics Data System (ADS)

    Sergeeva, T. P.; Golovnya, V. V.; Yizhakevych, E. M.; Kizyun, L. N.; Pakuliak, L. K.; Shatokhina, S. V.; Tsvetkov, M. K.; Tsvetkova, K. P.; Sergeev, A. V.

    2006-04-01

    The plate archives of the Main Astronomical Observatory (Golosyiv, Kyiv) include about 85 000 plates which were taken for various astronomical projects in the period 1950-2005. Among them are more than 60 000 plates containing spectra of stellar, planetary and active solar formations, and more than 20 000 direct plates of northern sky areas (mostly wide-field). The catalogues of these direct wide-field plates have been prepared in computer-readable form. They have now been reduced to the WFPDB format and included in the database.

  3. Students Teaching Texts to Students: Integrating LdL and Digital Archives

    ERIC Educational Resources Information Center

    Stymeist, David

    2015-01-01

    The arrival of the digital age has not only reshaped and refocused critical research in the humanities, but has provided real opportunities to innovate with pedagogy and classroom structure. This article describes the development of a new pedagogical model that integrates learning by teaching with student access to electronic archival resources.…

  4. Neutron Cross Section Processing Methods for Improved Integral Benchmarking of Unresolved Resonance Region Evaluations

    NASA Astrophysics Data System (ADS)

    Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; Brown, Forrest B.

    2016-03-01

    In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both probability table
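    To make the idea of "a realization of URR resonance structure" concrete, the toy sketch below sums a few randomly sampled Lorentzian (Breit-Wigner-like) resonances; it deliberately omits the ENDF-6 formalism, penetrability factors, Doppler broadening, and the level-level interference discussed above.

```python
# Schematic sketch only: a toy "realization" of resonance structure built from
# randomly sampled resonances with Lorentzian (Breit-Wigner-like) shapes. Not
# the ENDF-6 URR formalism, and no Doppler broadening or interference terms.
import numpy as np

rng = np.random.default_rng(42)

# Toy ladder of resonances: energies (eV), total widths (eV), peak magnitudes (barns).
e0 = np.cumsum(rng.exponential(scale=50.0, size=20)) + 1.0e4
gamma = rng.chisquare(df=1, size=20) * 0.5 + 0.05
peak = rng.lognormal(mean=2.0, sigma=0.5, size=20)

energies = np.linspace(1.0e4, 1.1e4, 5000)      # evaluation grid (eV)
sigma = np.zeros_like(energies)
for E0, G, s0 in zip(e0, gamma, peak):
    sigma += s0 * (G / 2.0) ** 2 / ((energies - E0) ** 2 + (G / 2.0) ** 2)

print(f"toy cross section: min {sigma.min():.3f} b, max {sigma.max():.3f} b")
```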

  5. Water level ingest, archive and processing system - an integral part of NOAA's tsunami database

    NASA Astrophysics Data System (ADS)

    McLean, S. J.; Mungov, G.; Dunbar, P. K.; Price, D. J.; Mccullough, H.

    2013-12-01

    The National Oceanic and Atmospheric Administration (NOAA) National Geophysical Data Center (NGDC) and the collocated World Data Service for Geophysics (WDS) provide long-term archive, data management, and access to national and global tsunami data. Archive responsibilities include the NOAA Global Historical Tsunami event and runup database and damage photos, as well as other related hazards data. Beginning in 2008, NGDC was given the responsibility of archiving, processing and distributing all tsunami and hazards-related water level data collected from NOAA observational networks in a coordinated and consistent manner. These data include the Deep-ocean Assessment and Reporting of Tsunami (DART) data provided by the National Data Buoy Center (NDBC), coastal tide-gauge data from the National Ocean Service (NOS) network, and tide-gauge data from the regional networks of the two National Weather Service (NWS) Tsunami Warning Centers (TWCs). Taken together, this integrated archive supports the tsunami forecast, warning, research, mitigation and education efforts of NOAA and the Nation. Due to the variety of the water level data, the automatic ingest system was redesigned, and the inventory, archive and delivery capabilities were upgraded based on modern digital data archiving practices. The data processing system was also upgraded and redesigned, focusing on data quality assessment in an operational manner. This poster focuses on data availability, highlighting the automation of all steps of data ingest, archive, processing and distribution. Examples are given from recent events such as Hurricane Sandy in October 2012, the February 6, 2013 Solomon Islands tsunami, and the June 13, 2013 meteotsunami along the U.S. East Coast.

  6. Picture archiving and communications systems for integrated healthcare information solutions

    NASA Astrophysics Data System (ADS)

    Goldburgh, Mitchell M.; Glicksman, Robert A.; Wilson, Dennis L.

    1997-05-01

    The rapid and dramatic shifts within the US healthcare industry have created unprecedented needs to implement changes in the delivery systems. These changes must address not only access to healthcare but also the costs of delivery and outcomes reporting. The resulting vision to address these needs has been called the Integrated Healthcare Solution, whose core is the Electronic Patient Record. The integration of information by itself is not the issue, nor will it address the challenges in front of the healthcare providers. The process and business of healthcare delivery must adopt, apply and expand their use of technology, which can assist in re-engineering the tools for healthcare. Imaging is becoming a larger part of the practice of healthcare, both as a recorder of health status and as a defensive record for gatekeepers of healthcare. It is thus imperative that imaging specialists adopt technology which competitively integrates them into the process, reduces the risk, and positively affects the outcome.

  7. An integrated picture archiving and communications system-radiology information system in a radiological department.

    PubMed

    Wiltgen, M; Gell, G; Graif, E; Stubler, S; Kainz, A; Pitzler, R

    1993-02-01

    In this report we present an integrated picture archiving and communication system (PACS) and radiology information system (RIS) which runs as part of the daily routine in the Department of Radiology at the University of Graz. Although the PACS and the RIS have been developed independently, the two systems are interfaced to ensure a unified and consistent long-term archive. The configuration connects four computed tomography scanners (one of them situated at a distance of 1 km), a magnetic resonance imaging scanner, a digital subtraction angiography unit, an evaluation console, a diagnostic console, an image display console, an archive with two optical disk drives, and several RIS terminals. The configuration allows the routine archiving of all examinations on optical disks independent of reporting. The management of the optical disks is performed by the RIS. Images can be selected for retrieval via the RIS by using patient identification or medical criteria. A special software process (PACS-MONITOR) enables the user to survey and manage image communication, archiving, and retrieval, to get information about the status of the system at any time, and to handle the different procedures in the PACS. The system is active 24 hours a day. To make PACS operation as independent as possible from the permanent presence of a system manager (electronic data processing expert), a rule-based expert system (OPERAS; OPERating ASsistant) is in use to localize and eliminate malfunctions that occur during routine work. The PACS-RIS reduces labor and speeds access to images within radiology and clinical departments. PMID:8439578

  8. Impedance spectroscopy for detection of mold in archives with an integrated reference measurement

    NASA Astrophysics Data System (ADS)

    Papireddy Vinayaka, P.; Van Den Driesche, S.; Janssen, S.; Frodl, M.; Blank, R.; Cipriani, F.; Lang, W.; Vellekoop, M. J.

    2015-06-01

    In this work, we present a new miniaturized, culture-medium-based sensor system in which we apply an optical reference within an impedance measurement approach for the detection of mold in archives. The designed sensor comprises a chamber with pre-loaded culture medium which promotes the growth of archive mold species. Growth of mold is detected with integrated electrodes by measuring changes in the impedance of the culture medium caused by an increase in pH (from 5.5 to 8). Integration of the reference measurement helps in determining the sensitivity of the sensor. The colorimetric principle serves as a reference measurement that indicates a pH change, after which further pH shifts can be determined using the impedance measurement. In this context, some of the major archive mold species, Eurotium amstelodami, Aspergillus penicillioides and Aspergillus restrictus, have been successfully analyzed on-chip. Growth of Eurotium amstelodami shows a proportional impedance change of 10% per day (12 chips tested), with a sensitivity of 0.6 kΩ/pH unit.

  9. Land cover data from Landsat single-date archive imagery: an integrated classification approach

    NASA Astrophysics Data System (ADS)

    Bajocco, Sofia; Ceccarelli, Tomaso; Rinaldo, Simone; De Angelis, Antonella; Salvati, Luca; Perini, Luigi

    2012-10-01

    The analysis of land cover dynamics provides insight into many environmental problems. However, there are few data sources which can be used to derive consistent time series, remote sensing being one of the most valuable. Because of the multi-temporal and spatial coverage required, such analyses are usually based on large land cover datasets, which calls for automated, objective and repeatable procedures. The USGS Landsat archives provide free access to multispectral, high-resolution remotely sensed data starting from the mid-1980s; in many cases, however, only single-date images are available. This paper suggests an objective approach for generating land cover information from 30 m resolution, single-date Landsat archive imagery. A procedure was developed integrating pixel-based and object-oriented classifiers, which consists of the following basic steps: i) pre-processing of the satellite image, including radiance and reflectance calibration, texture analysis and derivation of vegetation indices; ii) segmentation of the pre-processed image; iii) its classification integrating both radiometric and textural properties. The integrated procedure was tested for an area in the Sardinia Region, Italy, and compared with a purely pixel-based one. Results demonstrated that a better overall accuracy, evaluated against the available land cover cartography, was obtained with the integrated classification (86%) compared to the pixel-based one (68%) at the first CORINE Land Cover level. The proposed methodology needs to be further tested to evaluate its transferability in time (constructing comparable land cover time series) and space (covering larger areas).
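
    The vegetation-index derivation mentioned in step i) can be illustrated with a short sketch; the reflectance values and class thresholds below are hypothetical, and the paper's full segmentation and integrated classification steps are not reproduced.

        # Sketch of one pre-processing step: NDVI from red and near-infrared
        # reflectance, followed by a crude threshold rule. Values are placeholders.
        import numpy as np

        red = np.array([[0.08, 0.10, 0.30, 0.32],
                        [0.07, 0.09, 0.31, 0.29],
                        [0.06, 0.08, 0.12, 0.11]])
        nir = np.array([[0.45, 0.50, 0.35, 0.33],
                        [0.48, 0.52, 0.34, 0.30],
                        [0.20, 0.22, 0.40, 0.42]])

        ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

        labels = np.full(ndvi.shape, "bare/other", dtype=object)
        labels[ndvi > 0.2] = "sparse vegetation"
        labels[ndvi > 0.5] = "dense vegetation"

        print(np.round(ndvi, 2))
        print(labels)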

  10. An XMM-Newton Science Archive for next decade, and its integration into ESASky

    NASA Astrophysics Data System (ADS)

    Loiseau, N.; Baines, D.; Rodriguez, P.; Salgado, J.; Sarmiento, M.; Colomo, E.; Merin, B.; Giordano, F.; Racero, E.; Migliari, S.

    2016-06-01

    We present a roadmap for the next decade of improvements to the XMM-Newton Science Archive (XSA), planned to provide ever faster and more user-friendly access to all XMM-Newton data. This plan includes the integration of the Upper Limit server, interactive visualization of EPIC and RGS spectra, and on-the-fly data analysis, among other advanced features. Within this philosophy, XSA is also being integrated into ESASky, the science-driven discovery portal for all the ESA Astronomy Missions. A first public beta of the ESASky service was released at the end of 2015. It currently features an interface for exploration of the multi-wavelength sky and for single and/or multiple target searches of science-ready data. The system offers progressive multi-resolution all-sky projections of full mission datasets using a new generation of HEALPix projections called HiPS, developed at the CDS; detailed geometrical footprints to connect the all-sky mosaics to individual observations; and direct access to science-ready data at the underlying mission-specific science archives. New XMM-Newton EPIC and OM all-sky HiPS maps, catalogues and links to the observations are available through ESASky, together with INTEGRAL, HST, Herschel, Planck and other future data.

  11. Three Dimensional, Integrated Characterization and Archival System for Remote Facility Contaminant Characterization

    SciTech Connect

    Barry, R.E.; Gallman, P.; Jarvis, G.; Griffiths, P.

    1999-04-25

    The largest problem facing the Department of Energy's Office of Environmental Management (EM) is the cleanup of the Cold War legacy nuclear production plants that were built and operated from the mid-forties through the late eighties. EM is now responsible for the remediation of no less than 353 projects at 53 sites across the country, at an estimated cost of $147 billion over the next 72 years. One of the keys to accomplishing a thorough cleanup of any site is a rigorous but quick contaminant characterization capability. If the contaminants present in a facility can be mapped accurately, the cleanup can proceed with surgical precision, using appropriate techniques for each contaminant type and location. The three dimensional, integrated characterization and archival system (3D-ICAS) was developed for the purpose of rapid, field-level identification, mapping, and archiving of contaminant data. The system consists of three subsystems: an integrated work and operating station, a 3-D coherent laser radar, and a contaminant analysis unit. Target contaminants that can be identified include chemical (currently organic only), radiological, and base materials (asbestos). In operation, two steps are required. First, the remotely operable 3-D laser radar maps an area of interest in the spatial domain. Second, the remotely operable contaminant analysis unit maps the area of interest in the chemical, radiological, and base material domains. The resultant information is formatted for display and archived using an integrated workstation. A 3-D model of the merged spatial and contaminant domains can be displayed along with a color-coded contaminant tag at each analysis point. In addition, all of the supporting detailed data are archived for subsequent QC checks. The 3D-ICAS system is capable of performing all contaminant characterization in a dwell time of 6 seconds. The radiological and chemical sensors operate at US Environmental Protection Agency regulatory levels. Base

  12. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the nickel and aluminum sphere experiments conducted at the OKTAVIAN facility in Osaka, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few percent of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section library versions are different (MCNP uses ENDF/B-VI 1.1, while COG uses ENDF/B-VI R7), (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.
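
    The kind of code-to-experiment and code-to-code comparison described above can be summarized with calculated-to-experimental (C/E) ratios and percent differences, as in the short sketch below; all numerical values are placeholders, not results from the report.

        # Sketch of a C/E and COG-versus-MCNP comparison. Values are placeholders.
        experiment = {"det_1": 1.02e-4, "det_2": 3.40e-5, "det_3": 8.90e-6}
        cog        = {"det_1": 1.05e-4, "det_2": 3.31e-5, "det_3": 9.10e-6}
        mcnp       = {"det_1": 1.04e-4, "det_2": 3.35e-5, "det_3": 9.05e-6}

        for det in experiment:
            ce_cog = cog[det] / experiment[det]
            ce_mcnp = mcnp[det] / experiment[det]
            code_diff = 100.0 * (cog[det] - mcnp[det]) / mcnp[det]
            print(f"{det}: C/E(COG)={ce_cog:.3f}  C/E(MCNP)={ce_mcnp:.3f}  "
                  f"COG vs MCNP = {code_diff:+.1f}%")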

  13. Supporting users through integrated retrieval, processing, and distribution systems at the land processes distributed active archive center

    USGS Publications Warehouse

    Kalvelage, T.; Willems, Jennifer

    2003-01-01

    The design of the EOS Data and Information System (EOSDIS) to acquire, archive, manage and distribute Earth observation data to the broadest possible user community is discussed. Several integrated retrieval, processing and distribution capabilities are explained, the value of these functions to users is described, and potential future improvements are laid out. Users want the retrieval, processing and archiving systems to be integrated so that they can get the data they want in the format and through the delivery mechanism of their choice.

  14. Evaluation for 4S core nuclear design method through integration of benchmark data

    SciTech Connect

    Nagata, A.; Tsuboi, Y.; Moriki, Y.; Kawashima, M.

    2012-07-01

    The 4S is a small sodium-cooled fast reactor which is reflector-controlled for operation over a core lifetime of about 30 years. The nuclear design method has been selected to treat neutron leakage with high accuracy. It consists of a continuous-energy Monte Carlo code, discrete ordinates transport codes and JENDL-3.3. These two types of neutronic analysis codes are used for the design in a complementary manner. The accuracy of the codes has been evaluated by analysis of benchmark critical experiments and experimental reactor data. The measured data used for the evaluation are critical experiment data from FCA XXIII, a physics mockup assembly of the 4S core, from FCA XVI, FCA XIX and ZPR, and data from the experimental reactor JOYO MK-1. Evaluated characteristics are criticality, reflector reactivity worth, power distribution, absorber reactivity worth, and sodium void worth. A multi-component bias method was applied, especially to improve the accuracy of the sodium void reactivity worth. As a result, it has been confirmed that the 4S core nuclear design method provides good accuracy, and typical bias factors and their uncertainties are determined. (authors)
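
    The bias-factor idea underlying the multi-component bias method can be illustrated in its simplest single-component form: the calculated-to-experimental (C/E) ratio from a mock-up experiment is used to correct the corresponding design prediction. The sketch below uses placeholder numbers and does not reproduce the paper's multi-component treatment.

        # Single-component bias factor sketch; all numbers are placeholders.
        import math

        c_mockup = 1.0052                    # calculated mock-up value (e.g. k-eff)
        e_mockup = 1.0000                    # measured mock-up value
        sigma_c, sigma_e = 0.0010, 0.0015    # 1-sigma uncertainties

        bias = c_mockup / e_mockup
        # Relative uncertainties of a ratio of independent quantities add in quadrature.
        sigma_bias = bias * math.sqrt((sigma_c / c_mockup) ** 2 + (sigma_e / e_mockup) ** 2)

        design_prediction = 1.0030           # raw design calculation (placeholder)
        corrected = design_prediction / bias

        print(f"bias factor = {bias:.4f} +/- {sigma_bias:.4f}")
        print(f"corrected design value = {corrected:.4f}")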

  15. Benchmarking the OLGA lower-hybrid full-wave code for a future integration with ALOHA

    NASA Astrophysics Data System (ADS)

    Preinhaelter, J.; Hillairet, J.; Urban, J.

    2014-02-01

    The ALOHA [1] code is frequently used as a standard to solve the coupling of lower hybrid grills to the plasma. To remove its limitations (the linear density profile, homogeneous magnetic field and the fully decoupled fast and slow waves in the determination of the plasma surface admittance), we exploit the recently developed efficient full-wave code OLGA [2]. There is a simple connection between the two codes: the plasma surface admittances used in ALOHA-2D can be expressed as the slowly varying parts of the coupling element integrands in OLGA, and the ALOHA coupling elements are then linear combinations of OLGA coupling elements. We developed the AOLGA module (a subset of OLGA) for ALOHA. An extensive benchmark has been performed. ALOHA admittances differ from AOLGA results mainly for N∥ in the inaccessible region, but the coupling elements differ only slightly. We compare OLGA and ALOHA for a simple 10-waveguide grill operating at 3.7 GHz and the linear density profile as used in ALOHA. Hence we can detect the pure effects of fast and slow wave coupling on grill efficiency. The effects are weak for parameters near optimum coupling and confirm the validity of the ALOHA results. We also compare the effects of the plasma surface density and the density gradient on the grill coupling as determined by OLGA and ALOHA.

  16. Laser produced plasma sources for nanolithography—Recent integrated simulation and benchmarking

    SciTech Connect

    Hassanein, A.; Sizyuk, T.

    2013-05-15

    Photon sources for extreme ultraviolet lithography (EUVL) still face challenging problems in achieving high-volume manufacturing in the semiconductor industry. The requirements for high EUV power, longer optical system and component lifetime, and efficient mechanisms for target delivery have steered investigators towards the development and optimization of dual-pulse laser sources with a high repetition rate of small liquid tin droplets and the use of a multi-layer mirror optical system for collecting EUV photons. We comprehensively simulated laser-produced plasma sources in full 3D configuration using 10–50 μm tin droplet targets, both as single droplets and, for the first time, as distributed fragmented microdroplets with equivalent mass. The latter examines the effects of droplet fragmentation resulting from the first pulse and prior to the incident second main laser pulse. We studied the dependence of EUV radiation output and of atomic and ionic debris generation on target mass and size, laser parameters, and dual-pulse system configuration. Our modeling and simulation included all phases of laser target evolution: laser/droplet interaction, energy deposition, target vaporization, ionization, plasma hydrodynamic expansion, thermal and radiation energy redistribution, and EUV photon collection, as well as detailed mapping of photon source size and location. We also simulated and predicted the potential damage to the optical mirror collection system from plasma thermal and energetic debris and the requirements for mitigating systems to reduce debris fluence. The debris effect on the mirror collection system is analyzed using our three-dimensional ITMC-DYN Monte Carlo package. Modeling results were benchmarked against our CMUXE laboratory experimental studies of EUV photon production and of debris and ion generation.

  17. Benchmarking HRD.

    ERIC Educational Resources Information Center

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  18. Three-Dimensional Integrated Characterization and Archiving System (3D-ICAS). Phase 1

    SciTech Connect

    1994-07-01

    3D-ICAS is being developed to support Decontamination and Decommissioning operations for DOE addressing Research Area 6 (characterization) of the Program Research and Development Announcement. 3D-ICAS provides in-situ 3-dimensional characterization of contaminated DOE facilities. Its multisensor probe contains a GC/MS (gas chromatography/mass spectrometry using noncontact infrared heating) sensor for organics, a molecular vibrational sensor for base material identification, and a radionuclide sensor for radioactive contaminants. It will provide real-time quantitative measurements of volatile organics and radionuclides on bare materials (concrete, asbestos, transite); it will provide 3-D display of the fusion of all measurements; and it will archive the measurements for regulatory documentation. It consists of two robotic mobile platforms that operate in hazardous environments linked to an integrated workstation in a safe environment.

  19. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  20. THREE DIMENSIONAL INTEGRATED CHARACTERIZATION AND ARCHIVING SYSTEM (3D-ICAS)

    SciTech Connect

    George Jarvis

    2001-06-18

    The overall objective of this project is to develop an integrated system that remotely characterizes, maps, and archives measurement data of hazardous decontamination and decommissioning (D&D) areas. The system will generate a detailed 3-dimensional topography of the area as well as real-time quantitative measurements of volatile organics and radionuclides. The system will analyze substrate materials consisting of concrete, asbestos, and transite. The system will permanently archive the data measurements for regulatory and data integrity documentation. Exposure limits, rest breaks, and the donning and removal of protective garments generate waste in the form of contaminated protective garments and equipment. Survey times are increased, and handling and transporting potentially hazardous materials incurs additional costs. Off-site laboratory analysis is expensive and time-consuming, often necessitating delay of further activities until results are received. The Three Dimensional Integrated Characterization and Archiving System (3D-ICAS) has been developed to alleviate some of these problems. 3D-ICAS provides a flexible system for physical, chemical and nuclear measurements that reduces costs and improves data quality. Operationally, 3D-ICAS performs real-time determinations of hazardous and toxic contamination. A prototype demonstration unit was available for use in early 2000. The tasks in this phase included: (1) Mobility Platforms: integrate hardware onto mobility platforms, upgrade surface sensors, develop unit operations and protocol. (2) System Developments: evaluate metals detection capability using x-ray fluorescence technology. (3) IWOS Upgrades: upgrade the IWOS software and hardware for compatibility with the mobility platform. The system was modified, tested and debugged during 1999 and 2000. The 3D-ICAS was shipped on 11 May 2001 to FIU-HCET for demonstration and validation of the design modifications. These modifications included simplifying the design from a two

  1. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  2. Identification of Integral Benchmarks for Nuclear Data Testing Using DICE (Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments)

    SciTech Connect

    J. Blair Briggs; A. Nichole Ellis; Yolanda Rugama; Nicolas Soppera; Manuel Bossant

    2011-08-01

    Typical users of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook have specific criteria for which they desire to find matching experiments. Depending on the application, those criteria may consist of any combination of physical or chemical characteristics and/or various neutronic parameters. The ICSBEP Handbook has a structured format that helps the user narrow the search for experiments of interest. However, with nearly 4300 different experimental configurations and the ever-increasing addition of experimental data, the necessity to perform multiple-criteria searches has rendered these features insufficient. As a result, a relational database was created with information extracted from the ICSBEP Handbook. A user interface was designed by the OECD and DOE to allow interrogation of this database. The database and the corresponding user interface are referred to as DICE. DICE currently offers the capability to perform multiple-criteria searches that go beyond simple fuel, physical form and spectra, and include expanded general information, fuel form, moderator/coolant, neutron-absorbing material, cladding, reflector, separator, geometry, benchmark results, spectra, and neutron balance parameters. DICE also includes the capability to display graphical representations of neutron spectra, detailed neutron balance, sensitivity coefficients for capture, fission, elastic scattering, inelastic scattering, nu-bar and mu-bar, as well as several other features.
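
    A multiple-criteria search of the kind DICE supports can be sketched against a small in-memory table; the schema, column names and example entries below are invented for illustration and do not reflect the actual DICE database layout.

        # Sketch of a combined-criteria benchmark search using SQLite.
        # Schema and values are hypothetical.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE configurations (
                case_id   TEXT,
                fuel_form TEXT,   -- e.g. 'metal', 'solution', 'compound'
                spectrum  TEXT,   -- e.g. 'fast', 'intermediate', 'thermal'
                reflector TEXT,
                keff      REAL
            )""")
        conn.executemany(
            "INSERT INTO configurations VALUES (?, ?, ?, ?, ?)",
            [("HEU-MET-FAST-001", "metal", "fast", "none", 1.0000),
             ("PU-SOL-THERM-011", "solution", "thermal", "water", 1.0000),
             ("HEU-MET-FAST-028", "metal", "fast", "steel", 0.9998)])

        # Combined criteria: metal fuel, fast spectrum, reflector other than water.
        rows = conn.execute("""
            SELECT case_id, reflector, keff FROM configurations
            WHERE fuel_form = 'metal' AND spectrum = 'fast' AND reflector != 'water'
            ORDER BY case_id""").fetchall()
        for row in rows:
            print(row)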

  3. The cognitive processing of politics and politicians: archival studies of conceptual and integrative complexity.

    PubMed

    Suedfeld, Peter

    2010-12-01

    This article reviews over 30 years of research on the role of integrative complexity (IC) in politics. IC is a measure of the cognitive structure underlying information processing and decision making in a specific situation and time of interest to the researcher or policymaker. As such, it is a state counterpart of conceptual complexity, the trait (transsituationally and transtemporally stable) component of cognitive structure. In the beginning (the first article using the measure was published in 1976), most of the studies were by the author or his students (or both), notably Philip Tetlock; more recently, IC has attracted the attention of a growing number of political and social psychologists. The article traces the theoretical development of IC; describes how the variable is scored in archival or contemporary materials (speeches, interviews, memoirs, etc.); discusses possible influences on IC, such as stress, ideology, and official role; and presents findings on how measures of IC can be used to forecast political decisions (e.g., deciding between war and peace). Research on the role of IC in individual success and failure in military and political leaders is also described. PMID:21039528

  4. Virtual Globes and Glacier Research: Integrating research, collaboration, logistics, data archival, and outreach into a single tool

    NASA Astrophysics Data System (ADS)

    Nolan, M.

    2006-12-01

    Virtual Globes are a paradigm shift in the way earth sciences are conducted. With these tools, nearly all aspects of earth science can be integrated, from field science to remote sensing, remote collaborations, logistical planning, data archival/retrieval, PDF paper retrieval, and education and outreach. Here we present an example of how VGs can be fully exploited for field sciences, using research at McCall Glacier in Arctic Alaska.

  5. Modelling anaerobic co-digestion in Benchmark Simulation Model No. 2: Parameter estimation, substrate characterisation and plant-wide integration.

    PubMed

    Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf

    2016-07-01

    Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable to the Anaerobic Digestion Model No. 1 (ADM1). Long-chain fatty acid inhibition was included in the ADM1 model to allow realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, the protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested using bio-methane potential (BMP) tests on three substrates, each rich in carbohydrates, proteins or lipids, with good predictive capability in all three cases. The model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances. PMID:27088248
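
    The COD fractionation step can be illustrated with simple arithmetic: total co-substrate COD is split into an inert part and biodegradable carbohydrate, protein and lipid fractions that can be fed to ADM1/BSM2. All fractions and concentrations below are placeholders, not the calibrated values from the paper.

        # Sketch of a co-substrate COD fractionation; numbers are placeholders.
        total_cod = 120.0          # kg COD per m3 of co-substrate
        f_degradable = 0.80        # biodegradable fraction of total COD

        # Relative split of the biodegradable COD (must sum to 1).
        split = {"carbohydrate": 0.25, "protein": 0.30, "lipid": 0.45}
        assert abs(sum(split.values()) - 1.0) < 1e-9

        fractions = {"inert": (1.0 - f_degradable) * total_cod}
        for component, share in split.items():
            fractions[component] = f_degradable * total_cod * share

        for name, value in fractions.items():
            print(f"{name:12s}: {value:6.1f} kg COD/m3")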

  6. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  7. DOE Integrated Safeguards and Security (DISS) historical document archival and retrieval analysis, requirements and recommendations

    SciTech Connect

    Guyer, H.B.; McChesney, C.A.

    1994-10-07

    The primary objective of HDAR is to create a repository of historical personnel security documents and provide the functionality needed for archival and retrieval use by other software modules and application users of the DISS/ET system. The software product to be produced from this specification is the Historical Document Archival and Retrieval Subsystem. The product will provide the functionality to capture, retrieve and manage documents currently contained in the personnel security folders in DOE Operations Office vaults at various locations across the United States. The long-term plan for DISS/ET includes the requirement to allow for capture and storage of arbitrary, currently undefined, clearance-related documents that fall outside the scope of the "cradle-to-grave" electronic processing provided by DISS/ET. However, this requirement is not within the scope of the requirements specified in this document.

  8. Integrating nTMS Data into a Radiology Picture Archiving System.

    PubMed

    Mäkelä, Teemu; Vitikainen, Anne-Mari; Laakso, Aki; Mäkelä, Jyrki P

    2015-08-01

    Navigated transcranial magnetic stimulation (nTMS) is employed in eloquent brain area localization prior to intraoperative direct cortical electrical stimulation and neurosurgery. No commercial archiving or file transfer protocol existed for these studies. The aim of our project was to establish a standardized protocol for the transfer of nTMS results and medical assessments to the end users, in order to improve data security and facilitate presurgical planning. The existing infrastructure of the hospital's Radiology Department was used. Hospital information systems and networks were configured to allow communication and archiving of the study results, and in-house software was written for file manipulation and transfer. A graphical user interface with description suggestions and user-defined text legends enabled an easy and straightforward workflow for annotation and archiving of the results. The software and configurations were implemented and have been applied in studies of ten patients. The creation of the study protocol required the involvement of various professionals and interdepartmental cooperation. The introduction of the protocol has ended the previously recurrent involvement of staff in the file transfer phase and improved cost-effectiveness. PMID:25617092

  9. An overview on integrated data system for archiving and sharing marine geology and geophysical data in Korea Institute of Ocean Science & Technology (KIOST)

    NASA Astrophysics Data System (ADS)

    Choi, Sang-Hwa; Kim, Sung Dae; Park, Hyuk Min; Lee, SeungHa

    2016-04-01

    We established and operate an integrated data system for managing, archiving and sharing marine geology and geophysical data around Korea produced by various research projects and programs at the Korea Institute of Ocean Science & Technology (KIOST). To keep the data system consistent through continuous data updates, we set up standard operating procedures (SOPs) for data archiving, data processing and conversion, data quality control, data uploading, DB maintenance, etc. The system comprises two databases, an ARCHIVE DB and a GIS DB. The ARCHIVE DB stores archived data in the original forms and formats received from data providers, and the GIS DB manages all other compiled, processed and derived data and information for data services and GIS application services. A relational database management system, Oracle 11g, was adopted as the DBMS, and open-source GIS technologies were applied for the GIS services: OpenLayers for the user interface, GeoServer as the application server, and PostGIS on PostgreSQL for the GIS database. For convenient use of geophysical data in SEG-Y format, a viewer program was developed and embedded in the system. Users can search data through the GIS user interface and save the results as a report.
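
    As an illustration of handling the SEG-Y data served by the system, the sketch below reads a few fields from a SEG-Y rev. 1 binary file header (a 3200-byte textual header followed by a 400-byte binary header). It assumes standard big-endian encoding and byte positions, and the file name is hypothetical; a production reader (or a library such as segyio) would also handle revisions, extended headers and trace data.

        # Sketch: read sample interval, trace length and format code from a SEG-Y
        # rev. 1 binary header. Assumes big-endian encoding; file name is hypothetical.
        import struct

        def read_segy_binary_header(path):
            with open(path, "rb") as f:
                f.seek(3200)              # skip the 3200-byte EBCDIC textual header
                header = f.read(400)
            # 1-based byte positions per SEG-Y rev. 1: 17-18, 21-22, 25-26.
            sample_interval_us, = struct.unpack(">H", header[16:18])
            samples_per_trace,  = struct.unpack(">H", header[20:22])
            format_code,        = struct.unpack(">H", header[24:26])
            return {"sample_interval_us": sample_interval_us,
                    "samples_per_trace": samples_per_trace,
                    "data_format_code": format_code}   # 1 = IBM float, 5 = IEEE float

        if __name__ == "__main__":
            print(read_segy_binary_header("survey_line_001.sgy"))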

  10. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).
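
    The quadrature step for the interior scalar flux can be sketched as a Gauss-Legendre integration of the angular flux over the direction cosine; the angular flux used below is an arbitrary smooth stand-in, not the benchmark's Green's-function solution, and the loop simply echoes the idea of increasing the quadrature order until the result converges.

        # Sketch: scalar flux as the integral of psi(mu) over mu in [-1, 1],
        # approximated with Gauss-Legendre quadrature. psi is a placeholder.
        import numpy as np

        def angular_flux(mu):
            """Placeholder angular flux psi(mu); smooth and forward-peaked."""
            return np.exp(-0.5 * (1.0 - mu)) / (2.0 - mu)

        def scalar_flux(order):
            nodes, weights = np.polynomial.legendre.leggauss(order)
            return np.sum(weights * angular_flux(nodes))

        previous = None
        for order in (4, 8, 16, 32):
            phi = scalar_flux(order)
            change = None if previous is None else abs(phi - previous)
            print(f"N={order:2d}  phi={phi:.10f}  change={change}")
            previous = phi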

  11. Integrating the IA2 Astronomical Archive in the VO: The VO-Dance Engine

    NASA Astrophysics Data System (ADS)

    Molinaro, M.; Laurino, O.; Smareglia, R.

    2012-09-01

    Virtual Observatory (VO) protocols and standards are maturing, and the astronomical community is asking for astrophysical data to be easily reachable. This means data centers have to intensify their efforts to provide the data they manage not only through proprietary portals and services but also through interoperable resources developed on the basis of the IVOA (International Virtual Observatory Alliance) recommendations. Here we present the work and ideas developed at the IA2 (Italian Astronomical Archive) data center hosted by INAF-OATs (Italian Institute for Astrophysics - Trieste Astronomical Observatory) to reach this goal. The core is an application, VO-Dance (written in Java), that can translate the content of existing DB and archive structures into VO-compliant resources. This application, in turn, relies on a database (potentially DBMS independent) to store the translation-layer information for each resource and auxiliary content (UCDs, field names, authorizations, policies, etc.). The last component is an administrative interface (currently developed using the Django Python framework) that allows data center administrators to set up and maintain resources. Because the deployment is platform independent and the database and administrative interface are highly customizable, the package, once stable and easily distributable, can also be used by individual astronomers or groups to set up their own resources from their public datasets.

  12. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer-aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, the spleen, the aorta and the spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the collection to about 300 CT sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
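
    Two measures commonly used when benchmarking segmentations against ground truth, the Dice similarity coefficient and the relative volume difference, are sketched below on tiny placeholder masks; the platform's own performance-measure protocols may differ.

        # Sketch of Dice coefficient and relative volume difference on binary masks.
        import numpy as np

        def dice_coefficient(seg, ref):
            seg, ref = seg.astype(bool), ref.astype(bool)
            intersection = np.logical_and(seg, ref).sum()
            return 2.0 * intersection / (seg.sum() + ref.sum())

        def relative_volume_difference(seg, ref, voxel_volume_ml=1.0):
            v_seg = seg.sum() * voxel_volume_ml
            v_ref = ref.sum() * voxel_volume_ml
            return (v_seg - v_ref) / v_ref

        reference = np.zeros((5, 5), dtype=bool)
        reference[1:4, 1:4] = True      # "ground truth" region (9 voxels)
        segmented = np.zeros((5, 5), dtype=bool)
        segmented[1:4, 2:5] = True      # algorithm output, shifted by one column

        print(f"Dice = {dice_coefficient(segmented, reference):.3f}")
        print(f"Relative volume difference = "
              f"{relative_volume_difference(segmented, reference):+.1%}")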

  13. Harmonising and linking biomedical and clinical data across disparate data archives to enable integrative cross-biobank research

    PubMed Central

    Spjuth, Ola; Krestyaninova, Maria; Hastings, Janna; Shen, Huei-Yi; Heikkinen, Jani; Waldenberger, Melanie; Langhammer, Arnulf; Ladenvall, Claes; Esko, Tõnu; Persson, Mats-Åke; Heggland, Jon; Dietrich, Joern; Ose, Sandra; Gieger, Christian; Ried, Janina S; Peters, Annette; Fortier, Isabel; de Geus, Eco JC; Klovins, Janis; Zaharenko, Linda; Willemsen, Gonneke; Hottenga, Jouke-Jan; Litton, Jan-Eric; Karvanen, Juha; Boomsma, Dorret I; Groop, Leif; Rung, Johan; Palmgren, Juni; Pedersen, Nancy L; McCarthy, Mark I; van Duijn, Cornelia M; Hveem, Kristian; Metspalu, Andres; Ripatti, Samuli; Prokopenko, Inga; Harris, Jennifer R

    2016-01-01

    A wealth of biospecimen samples are stored in modern globally distributed biobanks. Biomedical researchers worldwide need to be able to combine the available resources to improve the power of large-scale studies. A prerequisite for this effort is to be able to search and access phenotypic, clinical and other information about samples that are currently stored at biobanks in an integrated manner. However, privacy issues together with heterogeneous information systems and the lack of agreed-upon vocabularies have made specimen searching across multiple biobanks extremely challenging. We describe three case studies where we have linked samples and sample descriptions in order to facilitate global searching of available samples for research. The use cases include the ENGAGE (European Network for Genetic and Genomic Epidemiology) consortium comprising at least 39 cohorts, the SUMMIT (surrogate markers for micro- and macro-vascular hard endpoints for innovative diabetes tools) consortium and a pilot for data integration between a Swedish clinical health registry and a biobank. We used the Sample avAILability (SAIL) method for data linking: first, created harmonised variables and then annotated and made searchable information on the number of specimens available in individual biobanks for various phenotypic categories. By operating on this categorised availability data we sidestep many obstacles related to privacy that arise when handling real values and show that harmonised and annotated records about data availability across disparate biomedical archives provide a key methodological advance in pre-analysis exchange of information between biobanks, that is, during the project planning phase. PMID:26306643

  14. Harmonising and linking biomedical and clinical data across disparate data archives to enable integrative cross-biobank research.

    PubMed

    Spjuth, Ola; Krestyaninova, Maria; Hastings, Janna; Shen, Huei-Yi; Heikkinen, Jani; Waldenberger, Melanie; Langhammer, Arnulf; Ladenvall, Claes; Esko, Tõnu; Persson, Mats-Åke; Heggland, Jon; Dietrich, Joern; Ose, Sandra; Gieger, Christian; Ried, Janina S; Peters, Annette; Fortier, Isabel; de Geus, Eco J C; Klovins, Janis; Zaharenko, Linda; Willemsen, Gonneke; Hottenga, Jouke-Jan; Litton, Jan-Eric; Karvanen, Juha; Boomsma, Dorret I; Groop, Leif; Rung, Johan; Palmgren, Juni; Pedersen, Nancy L; McCarthy, Mark I; van Duijn, Cornelia M; Hveem, Kristian; Metspalu, Andres; Ripatti, Samuli; Prokopenko, Inga; Harris, Jennifer R

    2016-04-01

    A wealth of biospecimen samples are stored in modern globally distributed biobanks. Biomedical researchers worldwide need to be able to combine the available resources to improve the power of large-scale studies. A prerequisite for this effort is to be able to search and access phenotypic, clinical and other information about samples that are currently stored at biobanks in an integrated manner. However, privacy issues together with heterogeneous information systems and the lack of agreed-upon vocabularies have made specimen searching across multiple biobanks extremely challenging. We describe three case studies where we have linked samples and sample descriptions in order to facilitate global searching of available samples for research. The use cases include the ENGAGE (European Network for Genetic and Genomic Epidemiology) consortium comprising at least 39 cohorts, the SUMMIT (surrogate markers for micro- and macro-vascular hard endpoints for innovative diabetes tools) consortium and a pilot for data integration between a Swedish clinical health registry and a biobank. We used the Sample avAILability (SAIL) method for data linking: first, created harmonised variables and then annotated and made searchable information on the number of specimens available in individual biobanks for various phenotypic categories. By operating on this categorised availability data we sidestep many obstacles related to privacy that arise when handling real values and show that harmonised and annotated records about data availability across disparate biomedical archives provide a key methodological advance in pre-analysis exchange of information between biobanks, that is, during the project planning phase. PMID:26306643

  15. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  16. CALIPSO Borehole Instrumentation Project at Soufriere Hills Volcano, Montserrat, BWI: Data Acquisition, Telemetry, Integration, and Archival Systems

    NASA Astrophysics Data System (ADS)

    Mattioli, G. S.; Linde, A. T.; Sacks, I. S.; Malin, P. E.; Shalev, E.; Elsworth, D.; Hidayat, D.; Voight, B.; Young, S. R.; Dunkley, P. N.; Herd, R.; Norton, G.

    2003-12-01

    The CALIPSO Project (Caribbean Andesite Lava Island-volcano Precision Seismo-geodetic Observatory) has greatly enhanced the monitoring and scientific infrastructure at the Soufriere Hills Volcano, Montserrat, with the recent installation of an integrated array of borehole and surface geophysical instrumentation at four sites. Each site was designed to be sufficiently hardened to withstand extreme meteorological events (e.g. hurricanes) and only require minimum routine maintenance over an expected observatory lifespan of >30 y. The sensor package at each site includes: a single-component, very broad band, Sacks-Evertson strainmeter, a three-component seismometer (˜Hz to 1 kHz), a Pinnacle Technologies series 5000 tiltmeter, and a surface Ashtech u-Z CGPS station with choke ring antenna, SCIGN mount and radome. This instrument package is similar to that envisioned by the Plate Boundary Observatory for deployment on EarthScope target volcanoes in western North America and thus the CALIPSO Project may be considered a prototype PBO installation with real field testing on a very active and dangerous volcano. Borehole sites were installed in series and data acquisition began immediately after the sensors were grouted into position at 200 m depth, with the first completed at Trants (5.8 km from dome) in December 2002, then Air Studios (5.2 km), Geralds (9.4 km), and Olveston (7.0 km) in March 2003. Analog data from the strainmeter (50 Hz sync) and seismometer (200 Hz) were initially digitized and locally archived using RefTek 72A-07 data acquisition systems (DAS) on loan from the PASSCAL instrument pool. Data were downloaded manually to a laptop approximately every month from initial installation until August 2003, when new systems were installed. Approximately 0.2 Tb of raw data in SEGY format have already been acquired and are currently archived at UARK for analysis by the CALIPSO science team. The July 12th dome collapse and vulcanian explosion events were recorded at 3 of the 4

  17. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  18. Integrated Data-Archive and Distributed Hydrological Modelling System for Optimized Dam Operation

    NASA Astrophysics Data System (ADS)

    Shibuo, Yoshihiro; Jaranilla-Sanchez, Patricia Ann; Koike, Toshio

    2013-04-01

    In 2012, Typhoon Bopha, which passed through the southern part of the Philippines, devastated the nation, leaving hundreds dead and causing significant destruction across the country. Indeed, deadly cyclone-related events occur almost every year in the region. Such extremes are expected to increase both in frequency and magnitude around Southeast Asia during the course of global climate change. Our ability to confront such hazardous events is limited by the best available engineering infrastructure and the performance of weather prediction. One countermeasure strategy is, for instance, early release of reservoir water (lowering the dam water level) during the flood season to protect the downstream region from an impending flood. However, over-release of reservoir water affects the regional economy adversely through the loss of water resources which still have value for power generation and agricultural and industrial water use. Furthermore, accurate precipitation forecasting is itself a difficult task, because the chaotic nature of the atmosphere yields uncertainty in model predictions over time. Under these circumstances we present a novel approach to optimize the conflicting objectives of preventing flood damage via a priori dam release while sustaining sufficient water supply during predicted storm events. By evaluating the forecast performance of the Meso-Scale Model Grid Point Value (GPV) against observed rainfall, uncertainty in the model prediction is probabilistically taken into account and then applied to the next GPV issuance to generate ensemble rainfalls. The ensemble rainfalls drive the coupled land-surface and distributed hydrological model to derive an ensemble flood forecast. With dam status information taken into account, our integrated system estimates the most desirable a priori dam release through the shuffled complex evolution algorithm. The strength of the optimization system is further magnified by the online link to the Data Integration and
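
    One simple way to generate ensemble rainfall from a single deterministic forecast, in the spirit of the probabilistic treatment of forecast uncertainty described above, is to perturb the forecast with multiplicative errors whose spread is estimated from past forecast-versus-observation ratios. The log-normal error model and all numbers in the sketch are assumptions for illustration, not the method or parameters of the paper.

        # Sketch: multiplicative-error ensemble rainfall from one forecast series.
        # Error statistics and forecast values are placeholders.
        import numpy as np

        rng = np.random.default_rng(0)

        # Observed/forecast ratios from earlier events (placeholders).
        past_ratios = np.array([0.7, 1.3, 0.9, 1.6, 0.8, 1.1, 0.6, 1.4])
        log_mu = np.mean(np.log(past_ratios))
        log_sigma = np.std(np.log(past_ratios))

        # Deterministic forecast for the next 6 time steps (mm per step).
        forecast = np.array([2.0, 8.0, 15.0, 22.0, 10.0, 3.0])

        n_members = 20
        errors = rng.lognormal(mean=log_mu, sigma=log_sigma, size=(n_members, 1))
        ensemble = forecast * errors       # each row is one ensemble member

        totals = ensemble.sum(axis=1)
        print(f"forecast total = {forecast.sum():.1f} mm")
        print(f"ensemble total, 10th-90th percentile = "
              f"{np.percentile(totals, 10):.1f}-{np.percentile(totals, 90):.1f} mm")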

  19. Supporting users through integrated retrieval, processing, and distribution systems at the Land Processes Distributed Active Archive Center

    USGS Publications Warehouse

    Kalvelage, Thomas A.; Willems, Jennifer

    2005-01-01

    The LP DAAC is the primary archive for the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data; it is the only facility in the United States that archives, processes, and distributes data from the Advanced Spaceborne Thermal Emission/Reflection Radiometer (ASTER) on NASA's Terra spacecraft; and it is responsible for the archive and distribution of “land products” generated from data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra and Aqua satellites.

  20. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  1. FLOWTRAN-TF code benchmarking

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  2. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    SciTech Connect

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  3. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    SciTech Connect

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation
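
    The reactivity-worth bookkeeping behind the "1$ insertion" limit discussed in these assessments can be sketched by converting a pair of k-eff values into reactivity and expressing it in dollars with an effective delayed neutron fraction; the k-eff values and beta-eff below are placeholders, not ATR-C data.

        # Sketch: reactivity difference between two k-eff states, in pcm and dollars.
        # All values are placeholders.
        k_reference = 1.00000      # k-eff without the sample bars
        k_with_sample = 1.00140    # k-eff with the sample bars inserted
        beta_eff = 0.0072          # effective delayed neutron fraction

        rho = (k_with_sample - k_reference) / (k_with_sample * k_reference)
        rho_dollars = rho / beta_eff

        print(f"reactivity worth = {rho * 1e5:.1f} pcm = {rho_dollars:.2f} $")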

  4. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson and Helen…

  5. Research Reactor Benchmarks

    SciTech Connect

    Ravnik, Matjaz; Jeraj, Robert

    2003-09-15

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given.
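
    The sensitivity analysis mentioned above can be summarized by first-order uncertainty propagation. The sketch below is a generic illustration with made-up sensitivity coefficients and input uncertainties, not values from the TRIGA evaluation.

        # Illustrative first-order propagation of input uncertainties to k-eff.
        # Sensitivities dk/dp and standard deviations sigma_p are made-up placeholders.
        import math

        # (parameter, sensitivity dk/dp, uncertainty sigma_p) -- hypothetical values
        inputs = [
            ("uranium content (wt%)", 2.0e-3, 0.1),
            ("fuel element radius (cm)", 5.0e-3, 0.02),
            ("hydrogen-to-zirconium ratio", 1.5e-3, 0.05),
        ]

        var_k = sum((dk_dp * sigma) ** 2 for _, dk_dp, sigma in inputs)
        print(f"combined k-eff uncertainty ~ {math.sqrt(var_k):.2e}")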

  6. The NASA Exoplanet Archive

    NASA Astrophysics Data System (ADS)

    Ramirez, Solange; Akeson, R. L.; Ciardi, D.; Kane, S. R.; Plavchan, P.; von Braun, K.; NASA Exoplanet Archive Team

    2013-01-01

    The NASA Exoplanet Archive is an online service that compiles and correlates astronomical information on extrasolar planets and their host stars. The data in the archive include exoplanet parameters (such as orbits, masses, and radii), associated data (such as published radial velocity curves, photometric light curves, images, and spectra), and stellar parameters (such as magnitudes, positions, and temperatures). All the archived data are linked to the original literature reference. The archive provides tools to work with these data, including interactive tables (with plotting capabilities), an interactive light curve viewer, a periodogram service, a transit and ephemeris calculator, and an application program interface. The NASA Exoplanet Archive is the U.S. portal to the public CoRoT mission data for both the Exoplanet and Asteroseismology data sets. The NASA Exoplanet Archive also serves data related to Kepler Objects of Interest (Planet Candidates and the Kepler False Positives, KOI) in an integrated and interactive table containing stellar and transit parameters. In support of the Kepler Extended Mission, the NASA Exoplanet Archive will host transit modeling parameters, centroid results, several statistical values, and summary and detailed reports for all transit-like events identified by the Kepler Pipeline. To access this information, visit us at: http://exoplanetarchive.ipac.caltech.edu

  7. Sensor to User - NASA/EOS Data for Coastal Zone Management Applications Developed from Integrated Analyses: Verification, Validation and Benchmark Report

    NASA Technical Reports Server (NTRS)

    Hall, Callie; Arnone, Robert

    2006-01-01

    The NASA Applied Sciences Program seeks to transfer NASA data, models, and knowledge into the hands of end-users by forming links with partner agencies and associated decision support tools (DSTs). Through the NASA REASoN (Research, Education and Applications Solutions Network) Cooperative Agreement, the Oceanography Division of the Naval Research Laboratory (NRLSSC) is developing new products through the integration of data from NASA Earth-Sun System assets with coastal ocean forecast models and other available data to enhance coastal management in the Gulf of Mexico. The recipient federal agency for this research effort is the National Oceanic and Atmospheric Administration (NOAA). The contents of this report detail the effort to further the goals of the NASA Applied Sciences Program by demonstrating the use of NASA satellite products combined with data-assimilating ocean models to provide near real-time information to maritime users and coastal managers of the Gulf of Mexico. This effort provides new and improved capabilities for monitoring, assessing, and predicting the coastal environment. Coastal managers can exploit these capabilities through enhanced DSTs at federal, state and local agencies. The project addresses three major issues facing coastal managers: 1) Harmful Algal Blooms (HABs); 2) hypoxia; and 3) freshwater fluxes to the coastal ocean. A suite of ocean products capable of describing Ocean Weather is assembled on a daily basis as the foundation for this semi-operational multiyear effort. This continuous real-time capability brings decision makers a new ability to monitor both normal and anomalous coastal ocean conditions with a steady flow of satellite and ocean model conditions. Furthermore, as the baseline data sets are used more extensively and the customer list increases, customer feedback is obtained and additional customized products are developed and provided to decision makers. Continual customer feedback and response with new improved

  8. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  9. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on, the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking, and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to Benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional Benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure has been built which allows short duration benchmarking studies yielding results gleaned from world class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  10. The Planetary Archive

    NASA Astrophysics Data System (ADS)

    Penteado, Paulo F.; Trilling, David; Szalay, Alexander; Budavári, Tamás; Fuentes, César

    2014-11-01

    We are building the first system that will allow efficient data mining in the astronomical archives for observations of Solar System Bodies. While the Virtual Observatory has enabled data-intensive research making use of large collections of observations across multiple archives, Planetary Science has largely been denied this opportunity: most astronomical data services are built based on sky positions, and moving objects are often filtered out. To identify serendipitous observations of Solar System objects, we ingest the archive metadata. The coverage of each image in an archive is a volume in a 3D space (RA,Dec,time), which we can represent efficiently through a hierarchical triangular mesh (HTM) for the spatial dimensions, plus a contiguous time interval. In this space, an asteroid occupies a curve, which we determine by integrating its orbit into the past. Thus, when an asteroid trajectory intercepts the volume of an archived image, we have a possible observation of that body. Our pipeline then looks in the archive's catalog for a source with the corresponding coordinates, to retrieve its photometry. All these matches are stored into a database, which can be queried by object identifier. This database consists of archived observations of known Solar System objects. This means that it grows not only from the ingestion of new images, but also from the growth in the number of known objects. As new bodies are discovered, our pipeline can find archived observations where they could have been recorded, providing colors for these newly-found objects. This growth becomes more relevant with the new generation of wide-field surveys, particularly LSST. We also present one use case of our prototype archive: after ingesting the metadata for SDSS, 2MASS and GALEX, we were able to identify serendipitous observations of Solar System bodies in these 3 archives. Cross-matching these occurrences provided us with colors from the UV to the IR, a much wider spectral range than that
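
    A minimal sketch of the matching step described above, under simplifying assumptions: the HTM index is replaced by a circular image footprint, and the ephemeris function is a hypothetical stand-in for a proper orbit integration.

        # Simplified sketch of the matching idea: an archived image is a sky region
        # plus a time interval; a known object "hits" the image if its predicted
        # position at the image epoch falls inside the footprint.  The real pipeline
        # uses a hierarchical triangular mesh (HTM) index; a small circular footprint
        # is used here purely for illustration.
        import math

        def angular_sep_deg(ra1, dec1, ra2, dec2):
            """Great-circle separation in degrees (all inputs in degrees)."""
            ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
            cos_sep = (math.sin(dec1) * math.sin(dec2)
                       + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
            return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

        def hits_image(ephemeris, image):
            """ephemeris(t) -> (ra, dec); image holds center, radius, and time span."""
            t_mid = 0.5 * (image["t_start"] + image["t_end"])
            ra, dec = ephemeris(t_mid)
            return angular_sep_deg(ra, dec, *image["center"]) <= image["radius_deg"]

        # Hypothetical usage: a fake linear ephemeris and a fake image footprint.
        fake_ephemeris = lambda t: (150.0 + 0.01 * t, 2.0 - 0.002 * t)   # deg vs. days
        fake_image = {"center": (150.3, 1.95), "t_start": 29.0, "t_end": 31.0, "radius_deg": 0.5}
        print(hits_image(fake_ephemeris, fake_image))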

  11. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  12. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  13. Taming the "Beast": An Archival Management System Based on EAD

    ERIC Educational Resources Information Center

    Levine, Jennie A.; Evans, Jennifer; Kumar, Amit

    2006-01-01

    In April 2005, the University of Maryland Libraries launched "ArchivesUM" (www.lib.umd.edu/archivesum), an online database of finding aids for manuscript and archival collections using Encoded Archival Description (EAD). "ArchivesUM," however, is only the publicly available end-product of a much larger project-an integrated system that ties…

  14. An integrated multi-medial approach to cultural heritage conservation and documentation: from remotely-sensed lidar imaging to historical archive data

    NASA Astrophysics Data System (ADS)

    Raimondi, Valentina; Palombi, Lorenzo; Morelli, Annalisa; Chimenti, Massimo; Penoni, Sara; Dercks, Ute; Andreotti, Alessia; Bartolozzi, Giovanni; Bini, Marco; Bonaduce, Ilaria; Bracci, Susanna; Cantisani, Emma; Colombini, M. Perla; Cucci, Costanza; Fenelli, Laura; Galeotti, Monica; Malesci, Irene; Malquori, Alessandra; Massa, Emmanuela; Montanelli, Marco; Olmi, Roberto; Picollo, Marcello; Pierelli, Louis D.; Pinna, Daniela; Riminesi, Cristiano; Rutigliano, Sara; Sacchi, Barbara; Stella, Sergio; Tonini, Gabriella

    2015-10-01

    Fluorescence LIDAR imaging has already been proposed in several studies as a valuable technique for the remote diagnostics and documentation of monumental surfaces, with main applications referring to the detection and classification of biodeteriogens, the characterization of lithotypes, and the detection and characterization of protective coatings and also of some types of pigments. However, the conservation and documentation of the cultural heritage is an application field where a highly multi-disciplinary, integrated approach is typically required. In this respect, the fluorescence LIDAR technique can be particularly useful to provide an overall assessment of the whole investigated surface, which can be profitably used to identify those specific areas in which further analytical measurements or sampling for laboratory analysis are needed. This paper presents some representative examples of the research carried out in the frame of the PRIMARTE project, with particular reference to the LIDAR data and their significance in conjunction with the other applied techniques. One of the major objectives of the project, actually, was the development of an integrated methodology for the combined use of data by using diverse techniques: from fluorescence LIDAR remote sensing to UV fluorescence and IR imaging, from IR thermography, georadar, 3D electric tomography to microwave reflectometry, from analytical techniques (FORS, FT-IR, GC-MS) to high resolution photo-documentation and historical archive studies. This method was applied to a 'pilot site', a chapel dating back to the fourteenth century, situated at the 'Le Campora' site in the vicinity of Florence. All data have been integrated in a multi-medial tool for archiving, management, exploitation and dissemination purposes.

  15. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  16. Benchmark calculations from summarized data: an example

    SciTech Connect

    Crump, K. S.; Teeguarden, Justin G.

    2009-03-01

    Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose-response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
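
    A hedged sketch of a hybrid benchmark-dose calculation in the same spirit, not the paper's actual model: the dose-response function, cutoff, and variability below are invented placeholders, and the extra risk is estimated by straightforward Monte Carlo sampling.

        # Illustrative "hybrid" benchmark-dose sketch (not the paper's actual model):
        # scores are assumed normal with a dose-dependent mean, an individual is
        # "adversely affected" if its score falls below a fixed cutoff, and the
        # extra risk at dose d is estimated by Monte Carlo sampling.
        import numpy as np

        rng = np.random.default_rng(0)
        CUTOFF = 85.0          # hypothetical adverse-effect cutoff on the test score
        BMR = 0.10             # benchmark response (10% extra risk)

        def mean_score(dose, a=100.0, b=0.5):
            """Hypothetical linear decline of the mean score with dose."""
            return a - b * dose

        def extra_risk(dose, sd=10.0, n=200_000):
            p = lambda d: np.mean(rng.normal(mean_score(d), sd, n) < CUTOFF)
            p0, pd = p(0.0), p(dose)
            return (pd - p0) / (1.0 - p0)

        # Crude grid search for the benchmark dose where extra risk first exceeds BMR.
        doses = np.linspace(0.0, 40.0, 81)
        bmd = next(d for d in doses if extra_risk(d) >= BMR)
        print(f"approximate BMD ~ {bmd:.1f} (hypothetical dose units)")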

  17. Archiving Derrida

    ERIC Educational Resources Information Center

    Morris, Marla

    2003-01-01

    Derrida's archive, broadly speaking, is brilliantly mad, for he digs exegetically into the most difficult textual material and combines the most unlikely texts--from Socrates to Freud, from postcards to encyclopedias, from madness(es) to the archive, from primal scenes to death. In this paper, the author would like to do a brief study of the…

  18. A performance geodynamo benchmark

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameters regime of the Earth's outer core, we need massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because the local methods needs less data communication than the spherical harmonics expansion, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty treating magnetic boundary conditions based on the local methods. On the other hand, some numerical dynamo models using spherical harmonics expansion has performed successfully with thousands of processes. We perform benchmark tests to asses various numerical methods to asses the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo vacuum magnetic boundary, because the pseudo vacuum boundaries are implemented easier by using the local method than the magnetic insulated boundaries. In the present study, we consider two kinds of benchmarks, so-called accuracy benchmark and performance benchmark. In the present study, we will report the results of the performance benchmark. We perform the participated dynamo models under the same computational environment (XSEDE TACC Stampede), and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with much finer spatial resolutions as possible to investigate computational capability (e

  19. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
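
    The tier-1 screening rule described above reduces to a simple comparison. The sketch below is a toy illustration with made-up concentrations and benchmark values.

        # Minimal sketch of the tier-1 screening rule described above: a contaminant
        # is retained as a COPC when its media concentration exceeds the benchmark.
        # Concentrations and benchmark values are made-up examples, not report values.
        benchmarks = {"cadmium": 0.77, "zinc": 8.6, "selenium": 0.33}   # hypothetical units
        measured   = {"cadmium": 1.20, "zinc": 3.1, "selenium": 0.45}

        copcs = [c for c, conc in measured.items() if conc > benchmarks[c]]
        print("retain for further assessment:", copcs)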

  20. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands with relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  1. Seasonal Distributions and Migrations of Northwest Atlantic Swordfish: Inferences from Integration of Pop-Up Satellite Archival Tagging Studies

    PubMed Central

    Neilson, John D.; Loefer, Josh; Prince, Eric D.; Royer, François; Calmettes, Beatriz; Gaspar, Philippe; Lopez, Rémy; Andrushchenko, Irene

    2014-01-01

    Data sets from three laboratories conducting studies of movements and migrations of Atlantic swordfish (Xiphias gladius) using pop-up satellite archival tags were pooled, and processed using a common methodology. From 78 available deployments, 38 were selected for detailed examination based on deployment duration. The points of deployment ranged from southern Newfoundland to the Straits of Florida. The aggregate data comprise the most comprehensive information describing migrations of swordfish in the Atlantic. Challenges in using data from different tag manufacturers are discussed. The relative utility of geolocations obtained with light is compared with results derived from temperature information for this deep-diving species. The results show that fish tagged off North America remain in the western Atlantic throughout their deployments. This is inconsistent with the model of stock structure used in assessments conducted by the International Commission for the Conservation of Atlantic Tunas, which assumes that fish mix freely throughout the North Atlantic. PMID:25401964

  2. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    The TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against six critical experiments (Jezebel plutonium critical assembly), and the resulting k-effective values have been compared with those of the KENO and MCNP codes.
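
    For orientation only (the k values below are placeholders, not the report's results), eigenvalue differences between codes are commonly quoted in pcm:

        # Small worked example of the comparison implied above: differences between
        # code results are often quoted in pcm (1 pcm = 1e-5 in k).  The eigenvalues
        # here are placeholders, not the benchmark's actual results.
        k_twodant, k_mcnp = 0.99870, 1.00012    # hypothetical eigenvalues
        print(f"difference = {(k_twodant - k_mcnp) * 1e5:+.0f} pcm")   # -142 pcm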

  3. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  4. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  5. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  6. Data archiving

    NASA Technical Reports Server (NTRS)

    Pitts, David

    1991-01-01

    The viewgraphs of a discussion on data archiving presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop is included. The mass storage system at the National Center for Atmospheric Research (NCAR) is described. Topics covered in the presentation include product goals, data library systems (DLS), client system commands, networks, archival devices, DLS features, client application systems, multiple mass storage devices, and system growth.

  7. Integration of temporal subtraction and nodule detection system for digital chest radiographs into picture archiving and communication system (PACS): four-year experience.

    PubMed

    Sakai, Shuji; Yabuuchi, Hidetake; Matsuo, Yoshio; Okafuji, Takashi; Kamitani, Takeshi; Honda, Hiroshi; Yamamoto, Keiji; Fujiwara, Keiichi; Sugiyama, Naoki; Doi, Kunio

    2008-03-01

    Since May 2002, temporal subtraction and nodule detection systems for digital chest radiographs have been integrated into our hospital's picture archiving and communication system (PACS). Image data of digital chest radiographs were stored in PACS with the Digital Imaging and Communications in Medicine (DICOM) protocol. Temporal subtraction and nodule detection images were produced automatically on a dedicated server and delivered with current and previous images to the workstations. The problems that we faced and the solutions that we arrived at were analyzed. We encountered four major problems. The first problem, resulting from the storage of original image data in upside-down, reversed, or lying-down positions on portable chest radiographs, was solved by postponing the original data storage for 30 min. The second problem, the variable matrix sizes of chest radiographs obtained with flat-panel detectors (FPDs), was solved by improving the computer algorithm to produce consistent temporal subtraction images. The third problem, the production of temporal subtraction images of low quality, could not be solved fundamentally when the original images were obtained with different modalities. The fourth problem, an excessive false-positive rate on the nodule detection system, was solved by adjusting this system to chest radiographs obtained in our hospital. Integration of the temporal subtraction and nodule detection system into our hospital's PACS was customized successfully; this experience may be helpful to other hospitals. PMID:17333415

  8. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  9. Strategy of DIN-PACS benchmark testing

    NASA Astrophysics Data System (ADS)

    Norton, Gary S.; Lyche, David K.; Richardson, Nancy E.; Thomas, Jerry A.; Romlein, John R.; Cawthon, Michael A.; Lawrence, David P.; Shelton, Philip D.; Parr, Laurence F.; Richardson, Ronald R., Jr.; Johnson, Steven L.

    1998-07-01

    The Digital Imaging Network -- Picture Archive and Communication System (DIN-PACS) procurement is the Department of Defense's (DoD) effort to bring military medical treatment facilities into the twenty-first century with nearly filmless digital radiology departments. The DIN-PACS procurement differs from most previous PACS acquisitions in that the Request for Proposals (RFP) required extensive benchmark testing prior to contract award. The strategy for benchmark testing was a reflection of the DoD's previous PACS and teleradiology experiences. The DIN-PACS Technical Evaluation Panel (TEP) consisted of DoD and civilian radiology professionals with unique clinical and technical PACS expertise. The TEP considered nine items to be key functional requirements for the DIN-PACS acquisition: (1) DICOM Conformance, (2) System Storage and Archive, (3) Workstation Performance, (4) Network Performance, (5) Radiology Information System (RIS) functionality, (6) Hospital Information System (HIS)/RIS Interface, (7) Teleradiology, (8) Quality Control, and (9) System Reliability. The development of a benchmark test to properly evaluate these key requirements would require the TEP to make technical, operational, and functional decisions that had not been part of a previous PACS acquisition. Developing test procedures and scenarios that simulated inputs from radiology modalities and outputs to soft copy workstations, film processors, and film printers would be a major undertaking. The goals of the TEP were to fairly assess each vendor's proposed system and to provide an accurate evaluation of each system's capabilities to the source selection authority, so the DoD could purchase a PACS that met the requirements in the RFP.

  10. Accelerated randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Cory, D. G.

    2015-01-01

    Quantum information processing offers promising advances for a wide range of fields and applications, provided that we can efficiently assess the performance of the control applied in candidate systems. That is, we must be able to determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking reduces the difficulty of this task by exploiting symmetries in quantum operations. Here, we bound the resources required for benchmarking and show that, with prior information, we can achieve several orders of magnitude better accuracy than in traditional approaches to benchmarking. Moreover, by building on state-of-the-art classical algorithms, we reach these accuracies with near-optimal resources. Our approach requires an order of magnitude less data to achieve the same accuracies and to provide online estimates of the errors in the reported fidelities. We also show that our approach is useful for physical devices by comparing to simulations.
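
    For context, a conventional randomized-benchmarking analysis fits the average sequence fidelity to an exponential decay. The sketch below shows that standard fit (not the accelerated, prior-informed method of the paper) on synthetic data.

        # Illustrative randomized-benchmarking fit (standard textbook form, not the
        # accelerated Bayesian method of the paper): average sequence fidelity is
        # modeled as F(m) = A * p**m + B and p is estimated by least squares.
        import numpy as np
        from scipy.optimize import curve_fit

        def rb_model(m, A, p, B):
            return A * p**m + B

        rng = np.random.default_rng(1)
        m = np.arange(1, 201, 10)
        true_A, true_p, true_B = 0.5, 0.995, 0.5
        data = rb_model(m, true_A, true_p, true_B) + rng.normal(0, 0.005, m.size)

        (A, p, B), _ = curve_fit(rb_model, m, data, p0=[0.5, 0.99, 0.5])
        r = (1 - p) / 2        # average error per gate for a single qubit (d = 2)
        print(f"estimated p = {p:.4f}, average gate error ~ {r:.2e}")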

  11. Benchmarking. It's the future.

    PubMed

    Fazzi, Robert A; Agoglia, Robert V; Harlow, Lynn

    2002-11-01

    You can't go to a state conference, read a home care publication or log on to an Internet listserv ... without hearing or reading someone ... talk about benchmarking. What are your average case mix weights? How many visits are your nurses averaging per day? What is your average caseload for full time nurses in the field? What is your profit or loss per episode? The benchmark systems now available in home care potentially can serve as an early warning and partial protection for agencies. Agencies can collect data, analyze the outcomes, and through comparative benchmarking, determine where they are competitive and where they need to improve. These systems clearly provide agencies with the opportunity to be more proactive. PMID:12436898

  12. The FTIO Benchmark

    NASA Technical Reports Server (NTRS)

    Fagerstrom, Frederick C.; Kuszmaul, Christopher L.; Woo, Alex C. (Technical Monitor)

    1999-01-01

    We introduce a new benchmark for measuring the performance of parallel input/output. This benchmark has flexible initialization, size, and scaling properties that allow it to satisfy seven criteria for practical parallel I/O benchmarks. We obtained performance results while running on an SGI Origin2000 computer with various numbers of processors: with 4 processors, the performance was 68.9 Mflop/s with 0.52 of the time spent on I/O; with 8 processors the performance was 139.3 Mflop/s with 0.50 of the time spent on I/O; with 16 processors the performance was 173.6 Mflop/s with 0.43 of the time spent on I/O; and with 32 processors the performance was 259.1 Mflop/s with 0.47 of the time spent on I/O.
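
    A quick arithmetic pass over the quoted figures helps read them: dividing out the I/O fraction gives a rough compute-only rate, and scaling efficiency can be taken relative to the 4-processor run (the interpretation, not the numbers, is an assumption here).

        # Quick arithmetic on the figures quoted above: subtracting the I/O fraction
        # gives a rough compute-only rate, and scaling efficiency is taken relative
        # to the 4-processor run.  The numbers are those reported in the abstract.
        runs = {4: (68.9, 0.52), 8: (139.3, 0.50), 16: (173.6, 0.43), 32: (259.1, 0.47)}

        base_procs, (base_rate, _) = 4, runs[4]
        for procs, (mflops, io_frac) in runs.items():
            compute_rate = mflops / (1.0 - io_frac)        # rough rate excluding I/O time
            efficiency = (mflops / base_rate) / (procs / base_procs)
            print(f"{procs:2d} procs: {compute_rate:6.1f} Mflop/s (compute only), "
                  f"parallel efficiency {efficiency:.2f}")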

  13. New Test Set for Video Quality Benchmarking

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in daytime or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video system installations.

  14. Moving the Archivist Closer to the Creator: Implementing Integrated Archival Policies for Born Digital Photography at Colleges and Universities

    ERIC Educational Resources Information Center

    Keough, Brian; Wolfe, Mark

    2012-01-01

    This article discusses integrated approaches to the management and preservation of born digital photography. It examines the changing practices among photographers, and the needed relationships between the photographers using digital technology and the archivists responsible for acquiring their born digital images. Special consideration is given…

  15. Archiving TNG Data

    NASA Astrophysics Data System (ADS)

    Pasian, Fabio

    The TNG (Telescopio Nazionale Galileo), a 3.5 meter telescope derived from ESO's NTT which will see first light in La Palma during 1996, will be one of the first cases where operations will be carried out following an end-to-end data management scheme. An archive of both technical and scientific data will be produced directly at the telescope as a natural extension of the data handling chain. This is possible thanks to the total integration of the data management facilities with the telescope control system. In this paper, the archive system at the TNG is described in terms of archiving facilities, production of hard media and exportable database tables, on-line technical, calibration and transit archives, interaction with the quick-look utilities for the different instruments, and data access and retrieval mechanisms. The interfaces of the system with other TNG subsystems are discussed, and first results obtained testing a prototype implementation with a simulated data flow are shown.

  16. Radiation Embrittlement Archive Project

    SciTech Connect

    Klasky, Hilda B; Bass, Bennett Richard; Williams, Paul T; Phillips, Rick; Erickson, Marjorie A; Kirk, Mark T; Stevens, Gary L

    2013-01-01

    The Radiation Embrittlement Archive Project (REAP), which is being conducted by the Probabilistic Integrity Safety Assessment (PISA) Program at Oak Ridge National Laboratory under funding from the U.S. Nuclear Regulatory Commission's (NRC) Office of Nuclear Regulatory Research, aims to provide an archival source of information about the effect of neutron radiation on the properties of reactor pressure vessel (RPV) steels. Specifically, this project is an effort to create an Internet-accessible RPV steel embrittlement database. The project's website, https://reap.ornl.gov, provides information in two forms: (1) a document archive with surveillance capsule(s) reports and related technical reports, in PDF format, for the 104 commercial nuclear power plants (NPPs) in the United States, with similar reports from other countries; and (2) a relational database archive with detailed information extracted from the reports. The REAP project focuses on data collected from surveillance capsule programs for light-water moderated, nuclear power reactor vessels operated in the United States, including data on Charpy V-notch energy testing results, tensile properties, composition, exposure temperatures, neutron flux (rate of irradiation damage), and fluence (fast neutron fluence, a cumulative measure of irradiation for E > 1 MeV). Additionally, REAP contains data from surveillance programs conducted in other countries. REAP is presently being extended to focus on embrittlement data analysis, as well. This paper summarizes the current status of the REAP database and highlights opportunities to access the data and to participate in the project.

  17. Software Archive Related Issues

    NASA Technical Reports Server (NTRS)

    Angelini, Lorella

    2008-01-01

    With the archive opening of the major X-ray and gamma-ray missions, the school is intended to provide information on the resources available in the data archive and the public software. This talk reviews the archive content, the data formats for the major active missions (Chandra, XMM-Newton, Swift, RXTE, Integral and Suzaku), and the available software for each of these missions. It will explain the FITS format in general and the specific layout for the most popular missions, explaining the role of keywords and how they fit in the multimission standard approach embraced by the High Energy Community. Specifically, it reviews: the different data levels and the different software applicable; the popular/standard methods of analysis for high-level products such as spectra, timing and images; the role of calibration in the multi-mission approach; and how to navigate the archive query databases. It will also present how the school is organized and how the information provided will be relevant to each of the afternoon science projects that will be proposed to the students and led by a project leader

  18. Issues surrounding PACS archiving to external, third-party DICOM archives.

    PubMed

    Langer, Steve

    2009-03-01

    In larger health care imaging institutions, it is becoming increasingly obvious that separate image archives for every department are not cost-effective or scalable. The solution is to have each department's picture archiving and communication system (PACS) keep only a local cache and archive to an enterprise archive that drives a universal clinical viewer. It sounds simple, but how many PACS can truly work with a third-party, Integrating the Healthcare Enterprise (IHE)-compliant image archive? The answer is somewhat disappointing. PMID:18449605

  19. Changes in Benchmarked Training.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Cheney, Scott

    1996-01-01

    Comparisons of the training practices of large companies confirm that the delivery and financing of training is changing rapidly. Companies in the American Society for Training and Development Benchmarking Forum are delivering less training with permanent staff and more with strategic use of technology, contract staff, and external providers,…

  20. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  1. Benchmarks Momentum on Increase

    ERIC Educational Resources Information Center

    McNeil, Michele

    2008-01-01

    No longer content with the patchwork quilt of assessments used to measure states' K-12 performance, top policy groups are pushing states toward international benchmarking as a way to better prepare students for a competitive global economy. The National Governors Association, the Council of Chief State School Officers, and the standards-advocacy…

  2. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  3. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and, except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks. We also mention NAS's future plans for the NPB.

  4. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  5. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
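
    A tiny illustration of the kind of measurement involved: timing the evaluation of a simple scientific expression in pure Python (the benchmark itself also covers Lua, Fparser, C, and C++ variants, and uses the Scimark kernels).

        # Hypothetical micro-benchmark of a simple scientific expression in Python,
        # in the spirit of the expression-parser comparison described above.
        import math, timeit

        expr = lambda x: math.sin(x) ** 2 + math.log(x + 1.0) / (1.0 + x * x)
        t = timeit.timeit(lambda: [expr(0.001 * i) for i in range(1000)], number=200)
        print(f"{200 * 1000 / t / 1e6:.2f} Mevals/s")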

  6. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1; depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/σys ≤ 1,000; hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
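
    The interpolation methodology can be illustrated with a tiny stand-in table. The grid below mirrors the parameter ranges quoted in the abstract, but the tabulated values are random placeholders rather than the 600-model database.

        # Sketch of the interpolation idea described above: tabulated solutions on a
        # grid of (a/c, a/B, E/sigma_ys, n) are interpolated to an arbitrary crack and
        # material.  The table is a made-up stand-in, not the appendix database.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        a_c    = np.array([0.2, 0.6, 1.0])
        a_B    = np.array([0.2, 0.5, 0.8])
        E_ys   = np.array([100.0, 500.0, 1000.0])
        n_hard = np.array([3.0, 10.0, 20.0])

        # Hypothetical normalized J values on the 3x3x3x3 grid (random placeholders).
        rng = np.random.default_rng(2)
        table = rng.uniform(1.0, 5.0, size=(3, 3, 3, 3))

        j_interp = RegularGridInterpolator((a_c, a_B, E_ys, n_hard), table)
        print(j_interp([[0.4, 0.45, 300.0, 8.0]]))    # J estimate for one crack/material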

  7. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  8. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
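
    The rank layout described above can be written down directly. The helper below is an illustrative reconstruction from the description, not code from the benchmark itself.

        # Sketch of the rank layout described above (the assignments are assumptions
        # drawn from the description, not taken from the benchmark source).  With
        # num_cores = 8 and num_nbors = 4 this reproduces the 8 + 8*4 = 40 ranks.
        def rank_layout(num_cores, num_nbors):
            core_ranks = list(range(num_cores))
            neighbors = {c: list(range(num_cores + c * num_nbors,
                                       num_cores + (c + 1) * num_nbors))
                         for c in core_ranks}
            total = num_cores + num_cores * num_nbors
            return core_ranks, neighbors, total

        cores, nbors, total = rank_layout(8, 4)
        print(total)            # 40
        print(nbors[0])         # [8, 9, 10, 11] -- neighbors of core rank 0
        print(nbors[1])         # [12, 13, 14, 15]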

  9. Sequoia Messaging Rate Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.

  10. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.
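
    For orientation only (not part of the benchmark itself), the unscattered primary component behind a single plate follows simple exponential attenuation; the scattered contribution, which is the focus of the study, has to come from transport models such as MCNP. The attenuation coefficient below is an approximate value for iron at 1 MeV.

        # Primary-beam (unscattered) attenuation behind a single iron plate; the
        # scattered component studied in the benchmark is not captured by this formula.
        # mu/rho ~ 0.060 cm^2/g for iron at 1 MeV (approximate), density ~ 7.87 g/cm^3.
        import math

        mu = 0.060 * 7.87                 # linear attenuation coefficient, 1/cm (approx.)
        for t_cm in (1.0, 2.0, 5.0):
            print(f"{t_cm:.0f} cm Fe: primary transmission ~ {math.exp(-mu * t_cm):.3f}")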

  11. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  12. Radiography benchmark 2014

    SciTech Connect

    Jaenisch, G.-R. Deresch, A. Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  13. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  14. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear-solver library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem on an unstructured domain with various jumps and an anisotropy in one part.
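
    For a concrete picture of the problem class (not of the AMG algorithm itself), the sketch below assembles the standard 5-point Laplacian on a structured grid and solves it with conjugate gradients from SciPy; AMG2013's default problem is of this Laplace type, though on unstructured grids and solved with BoomerAMG.

        # Build a Laplace-type test system (5-point stencil, structured grid) and
        # solve it with CG as a simple stand-in for an algebraic multigrid solve.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg

        n = 64                                        # interior grid points per side
        I = sp.identity(n)
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()   # 2-D Laplacian, Dirichlet BCs

        b = np.ones(A.shape[0])
        x, info = cg(A, b)
        print("converged" if info == 0 else f"cg returned {info}",
              "| residual =", np.linalg.norm(b - A @ x))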

  15. MPI Multicore Linktest Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.
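
    A rough point-to-point bandwidth probe in the spirit of the link test, written with mpi4py as an assumed stand-in for the benchmark's own harness; the message size, repetition count, and two-rank layout are arbitrary choices.

        # Rough one-way bandwidth probe between rank 0 and rank 1 (illustrative only).
        # Run with, e.g.:  mpiexec -n 2 python bandwidth_probe.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        nbytes, reps = 1 << 20, 50                     # 1 MiB messages, 50 repetitions
        buf = np.zeros(nbytes, dtype=np.uint8)

        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1)
            elif rank == 1:
                comm.Recv(buf, source=0)
        comm.Barrier()
        elapsed = MPI.Wtime() - t0

        if rank == 0:
            print(f"~{reps * nbytes / elapsed / 1e6:.1f} MB/s one-way")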

  16. Benchmarking the billing office.

    PubMed

    Woodcock, Elizabeth W; Williams, A Scott; Browne, Robert C; King, Gerald

    2002-09-01

    Benchmarking data related to human and financial resources in the billing process allows an organization to allocate its resources more effectively. Analyzing human resources used in the billing process helps determine cost-effective staffing. The deployment of human resources in a billing office affects timeliness of payment and ability to maximize revenue potential. Analyzing financial resources helps an organization allocate those resources more effectively. PMID:12235973

  17. [California State Archives.

    ERIC Educational Resources Information Center

    Rea, Jay W.

    The first paper on the California State Archives treats the administrative status, legal basis of the archives program, and organization of the archives program. The problem areas in this State's archival program are discussed at length. The second paper gives a crude sketch of the legal and administrative history of the California State Archives,…

  18. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was on computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  19. From archives to picture archiving and communications systems.

    PubMed

    Verhelle, F; Van den Broeck, R; Osteaux, M

    1995-12-01

    Keeping organised and consistent film archives is a well-known problem in the radiological world. With the introduction of digital modalities (CT, MR, ...) came the idea of archiving image data in a new way: keeping the information in digital form from acquisition to destination (archives, viewing stations, teleradiology). This proved harder than some expected, owing to the limits of the available technology and to the lack of standards for medical image data, which made it uncommon to integrate components of different origins into a digital Picture Archiving and Communication environment. How should the analogue examinations be integrated? Excluding the conventional X-ray examinations, which account for more than 70% of total production, is not an option. We believe there will be a migration to light-stimulable phosphor plates, but these are not yet user friendly and certainly not cost effective; the technology is as immature as the digital modalities once were. As a first step, the bridge between the two worlds can be crossed by means of converters (laser scanners, CCD cameras). PACS will become a reality as almost all examinations are digitized. We are now in a transition period with its inconveniences, but much will be gained soon. The migration from piles of films, through a computer-assisted radiological archiving system, to a fully digital environment is sketched in a historical survey. PMID:8576029

  20. Collection of Neutronic VVER Reactor Benchmarks.

    Energy Science and Technology Software Center (ESTSC)

    2002-01-30

    Version 00 A system of computational neutronic benchmarks has been developed. In this CD-ROM report, the data generated in the course of the project are reproduced in their entirety with minor corrections. The editing that was performed on the various documents comprising this report was primarily meant to facilitate the production of the CD-ROM and to enable electronic retrieval of the information. The files are electronically navigable.

  1. Benchmark Generation using Domain Specific Modeling

    SciTech Connect

    Bui, Ngoc B.; Zhu, Liming; Gorton, Ian; Liu, Yan

    2007-08-01

    Performance benchmarks are domain specific applications that are specialized to a certain set of technologies and platforms. The development of a benchmark application requires mapping the performance-specific domain concepts to an implementation and producing complex technology- and platform-specific code. Domain Specific Modeling (DSM) promises to bridge the gap between application domains and implementations by allowing designers to specify solutions in domain-specific abstractions and semantics through Domain Specific Languages (DSL). This allows generation of a final implementation automatically from high level models. The modeling and task automation benefits obtained from this approach usually justify the upfront cost involved. This paper employs a DSM-based approach to create a new DSL, DSLBench, for benchmark generation. DSLBench and its associated code generation facilities allow the design and generation of a completely deployable benchmark application for performance testing from a high level model. DSLBench is implemented using the Microsoft Domain Specific Language toolkit. It is integrated with the Visual Studio 2005 Team Suite as a plug-in to provide extra modeling capabilities for performance testing. We illustrate the approach using a case study based on .Net and C#.

  2. The new European Hubble archive

    NASA Astrophysics Data System (ADS)

    De Marchi, Guido; Arevalo, Maria; Merin, Bruno

    2016-01-01

    The European Hubble Archive (hereafter eHST), hosted at ESA's European Space Astronomy Centre, was released for public use in October 2015. The eHST is now fully integrated with the other ESA science archives to ensure long-term preservation of the Hubble data, consisting of more than 1 million observations from 10 different scientific instruments. The public HST data, the Hubble Legacy Archive, and the high-level science data products are now all available to scientists through a single, carefully designed and user-friendly web interface. In this talk, I will show how the eHST can help boost archival research, including how to search for sources in the field of view thanks to precise footprints projected onto the sky, how to obtain enhanced previews of imaging data and interactive spectral plots, and how to directly link observations with already published papers. To maximise the scientific exploitation of Hubble's data, the eHST offers connectivity to virtual observatory tools, easily integrates with the recently released Hubble Source Catalog, and is fully accessible through ESA's archives multi-mission interface.

  3. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    SciTech Connect

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiments Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency's (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with the ICSBEP. This paper highlights the benchmarks that are currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks and for inclusion of ICSBEP benchmarks as IRPhEP benchmarks is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  4. Archiving tools for EOS

    NASA Astrophysics Data System (ADS)

    Sindrilaru, Elvin-Alin; Peters, Andreas-Joachim; Duellmann, Dirk

    2015-12-01

    Archiving data to tape is a critical operation for any storage system, especially for the EOS system at CERN, which holds production data for all major LHC experiments. Each collaboration has an allocated quota it can use at any given time; therefore, a mechanism for archiving "stale" data is needed so that storage space is reclaimed for online analysis operations. The archiving tool that we propose for EOS aims to provide a robust client interface for moving data between EOS and CASTOR (the tape-backed storage system) while enforcing best practices when it comes to data integrity and verification. All data transfers are done using a third-party copy mechanism which ensures point-to-point communication between the source and destination, thus providing maximum aggregate throughput. Using the ZMQ message-passing paradigm and a process-based approach enabled us to achieve optimal utilisation of the resources and a stateless architecture which can easily be tuned during operation. The modular design and the implementation in a high-level language like Python have enabled us to easily extend the code base to address new demands like offering full and incremental backup capabilities.

  5. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
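
    One whole-building metric typically tracked in data-center benchmarking of this kind is PUE (power usage effectiveness), the ratio of total facility energy to IT equipment energy. The snippet below is a minimal illustration with made-up meter readings, not data from the guide or the LBNL database.

      # Illustrative PUE / DCiE calculation with hypothetical annual readings.
      def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
          """Total facility energy over IT energy (1.0 is ideal, lower is better)."""
          return total_facility_kwh / it_equipment_kwh

      annual_total_kwh = 4_200_000      # hypothetical utility meter reading
      annual_it_kwh = 2_800_000         # hypothetical UPS/PDU output

      ratio = pue(annual_total_kwh, annual_it_kwh)
      print(f"PUE  = {ratio:.2f}")      # 1.50
      print(f"DCiE = {1 / ratio:.0%}")  # 67%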

  6. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FET) are being scaled to ever smaller sizes by the semiconductor industry, the demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts of such devices are reviewed, including tunneling, graphene-based, and spintronic devices. The methodology to estimate the future performance of emerging (beyond-CMOS) devices and of simple logic circuits based on them is explained. Results of benchmarking are used to identify the more promising concepts and to map pathways for improvement of beyond-CMOS computing.

  7. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements, such as Middlebury. However, indoor data sets are mainly acquired with structured-light techniques under ideal conditions, which cannot represent the real world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  8. Algebraic Multigrid Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  9. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
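
    To make the idea of timing basic operations concrete, the sketch below builds two small tables in an in-memory SQLite database and times a scan, an aggregation, and a join. The schema and queries are invented for illustration; XMarq itself uses the TPC-H data model and 25 purpose-built queries.

      # Toy timing of scan / aggregation / join operations (illustrative only).
      import sqlite3, time, random

      con = sqlite3.connect(":memory:")
      cur = con.cursor()
      cur.execute("CREATE TABLE customers(id INTEGER PRIMARY KEY, region TEXT)")
      cur.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY, cust INTEGER, total REAL)")
      cur.executemany("INSERT INTO customers VALUES (?, ?)",
                      [(i, random.choice("NESW")) for i in range(10_000)])
      cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                      [(i, random.randrange(10_000), random.random() * 100)
                       for i in range(200_000)])
      con.commit()

      def timed(label, sql):
          t0 = time.perf_counter()
          cur.execute(sql).fetchall()
          print(f"{label:12s} {time.perf_counter() - t0:.3f} s")

      timed("scan",        "SELECT * FROM orders")
      timed("aggregation", "SELECT cust, SUM(total) FROM orders GROUP BY cust")
      timed("join",        "SELECT c.region, AVG(o.total) FROM orders o "
                           "JOIN customers c ON o.cust = c.id GROUP BY c.region")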

  10. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

    Current regulations require that the design of new fuel cycles for nuclear power installations include a calculational justification performed by certified computer codes. This guarantees that the calculated results will be within the limits of the declared uncertainties stated in the certificate issued for the corresponding computer code by Gosatomnadzor of the Russian Federation (GAN). The formal justification of the declared uncertainties is a comparison of results obtained with a commercial code against experiments or against calculational tests computed, with a defined uncertainty, by certified precision codes such as those of the MCU type. The current level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for certifying commercial codes used to design fuel loadings with MOX fuel. In particular, work on forming the list of calculational benchmarks for certifying the TVS-M code as applied to MOX fuel assembly calculations is practically finished. The results of these activities are presented.

  11. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment did not reflect ATP values nor environmental contamination with microbial flora including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic ROC curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination, persistence of hospital pathogens and measured the effect on the environment from current cleaning practices. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine practical sampling strategy and choice of benchmarks. PMID:21129820

  12. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  13. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  14. Archives in Pakistan

    ERIC Educational Resources Information Center

    Haider, Syed Jalaluddin

    2004-01-01

    This article traces the origins and development of archives in Pakistan. The focus is on the National Archives of Pakistan, but also includes a discussion of the archival collections at the provincial and district levels. This study further examines the state of training facilities available to Pakistani archivists. Archival development has been…

  15. Reference Services in Archives.

    ERIC Educational Resources Information Center

    Whalen, Lucille; And Others

    1986-01-01

    This 16-article issue focuses on history, policy, services, users, organization, evaluation, and automation of the archival reference process. Collections at academic research libraries, a technical university, Board of Education, business archives, a bank, labor and urban archives, a manuscript repository, religious archives, and regional history…

  16. HEASARC Software Archive

    NASA Technical Reports Server (NTRS)

    White, Nicholas (Technical Monitor); Murray, Stephen S.

    2003-01-01

    multiple world coordinate systems, three dimensional event file binning, image smoothing, region groups and tags, the ability to save images in a number of image formats (such as JPEG, TIFF, PNG, FITS), improvements in support for integrating external analysis tools, and support for the virtual observatory. In particular, a full-featured web browser has been implemented within DS9. This provides support for full access to HEASARC archive sites such as SKYVIEW and W3BROWSE, in addition to other astronomical archive sites such as MAST, CHANDRA, ADS, NED, SIMBAD, IRAS, NVRO, SAO TDC, and FIRST. From within DS9, the archives can be searched, and FITS images, plots, spectra, and journal abstracts can be referenced, downloaded and displayed. The web browser provides the basis for the built-in help facility. All DS9 documentation, including the reference manual, FAQ, Known Features, and contact information, is now available to the user without the need for external display applications. New versions of DS9 may be downloaded and installed using this facility. Two important features used in the analysis of high energy astronomical data have been implemented in the past year. The first is support for binning photon event data in three dimensions. By binning the third dimension in time or energy, users are easily able to detect variable X-ray sources and identify other physical properties of their data. Second, a number of fast smoothing algorithms have been implemented in DS9, which allow users to smooth their data in real time. Algorithms for boxcar, tophat, and Gaussian smoothing are supported.

  17. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  18. Benchmarking. A Guide for Educators.

    ERIC Educational Resources Information Center

    Tucker, Sue

    This book offers strategies for enhancing a school's teaching and learning by using benchmarking, a team-research and data-driven process for increasing school effectiveness. Benchmarking enables professionals to study and know their systems and continually improve their practices. The book is designed to lead a team step by step through the…

  19. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  20. The ``One Archive'' for JWST

    NASA Astrophysics Data System (ADS)

    Greene, G.; Kyprianou, M.; Levay, K.; Sienkewicz, M.; Donaldson, T.; Dower, T.; Swam, M.; Bushouse, H.; Greenfield, P.; Kidwell, R.; Wolfe, D.; Gardner, L.; Nieto-Santisteban, M.; Swade, D.; McLean, B.; Abney, F.; Alexov, A.; Binegar, S.; Aloisi, A.; Slowinski, S.; Gousoulin, J.

    2015-09-01

    The next generation for the Space Telescope Science Institute data management system is gearing up to provide a suite of archive system services supporting the operation of the James Webb Space Telescope. We are now completing the initial stage of integration and testing for the preliminary ground system builds of the JWST Science Operations Center which includes multiple components of the Data Management Subsystem (DMS). The vision for astronomical science and research with the JWST archive introduces both solutions to formal mission requirements and innovation derived from our existing mission systems along with the collective shared experience of our global user community. We are building upon the success of the Hubble Space Telescope archive systems, standards developed by the International Virtual Observatory Alliance, and collaborations with our archive data center partners. In proceeding forward, the “one archive” architectural model presented here is designed to balance the objectives for this new and exciting mission. The STScI JWST archive will deliver high quality calibrated science data products, support multi-mission data discovery and analysis, and provide an infrastructure which supports bridges to highly valued community tools and services.

  1. FireHose Streaming Benchmarks

    Energy Science and Technology Software Center (ESTSC)

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  2. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
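
    The two-part generator/analytic structure described above can be sketched in a few lines; the datum format and anomaly rule below are invented for illustration and do not correspond to any of the actual FireHose benchmark definitions.

      # Schematic generator/analytic pair in the FireHose style (illustrative).
      import random

      def generator(n_datums, anomaly_rate=0.001):
          """Emit (key, value) datums; a small fraction carry an out-of-range value."""
          for _ in range(n_datums):
              key = random.randrange(100_000)
              value = 999 if random.random() < anomaly_rate else random.randrange(100)
              yield key, value

      def analytic(stream):
          """Flag keys whose value falls outside the expected range [0, 100)."""
          return [key for key, value in stream if value >= 100]

      flagged = analytic(generator(1_000_000))
      print(f"{len(flagged)} anomalous datums detected")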

  3. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is as a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. PMID:22237134

  4. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design

    PubMed Central

    Pache, Roland A.; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J.; Smith, Colin A.; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a “best practice” set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  5. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  6. PSO-based multiobjective optimization with dynamic population size and adaptive local archives.

    PubMed

    Leong, Wen-Fung; Yen, Gary G

    2008-10-01

    Recently, various multiobjective particle swarm optimization (MOPSO) algorithms have been developed to efficiently and effectively solve multiobjective optimization problems. However, the existing MOPSO designs generally adopt the notion of "estimating" a fixed population size sufficient to explore the search space without incurring excessive computational complexity. To address the issue, this paper proposes the integration of a dynamic population strategy within the multiple-swarm MOPSO. The proposed algorithm is named dynamic population multiple-swarm MOPSO. An additional feature, adaptive local archives, is designed to improve the diversity within each swarm. Performance metrics and benchmark test functions are used to examine the performance of the proposed algorithm compared with that of five selected MOPSOs and two selected multiobjective evolutionary algorithms. In addition, the computational cost of the proposed algorithm is quantified and compared with that of the selected MOPSOs. The proposed algorithm shows competitive results with improved diversity and convergence and demands less computational cost. PMID:18784011
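
    For readers unfamiliar with the underlying method, the sketch below shows the core particle swarm update (velocity and position driven by personal and global bests) on a single-objective test function. It deliberately omits the multiple swarms, dynamic population sizing, and adaptive local archives that the paper contributes.

      # Bare-bones single-objective PSO on the sphere function (illustrative).
      import numpy as np

      rng = np.random.default_rng(0)
      DIM, N_PARTICLES, ITERS = 10, 30, 200
      W, C1, C2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights

      def objective(x):
          return np.sum(x ** 2, axis=-1)           # sphere function, minimum at 0

      x = rng.uniform(-5, 5, (N_PARTICLES, DIM))   # positions
      v = np.zeros_like(x)                         # velocities
      pbest, pbest_val = x.copy(), objective(x)
      gbest = pbest[np.argmin(pbest_val)]

      for _ in range(ITERS):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
          x = x + v
          val = objective(x)
          improved = val < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], val[improved]
          gbest = pbest[np.argmin(pbest_val)]

      print("best value found:", objective(gbest))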

  7. European distributed seismological data archives infrastructure: EIDA

    NASA Astrophysics Data System (ADS)

    Clinton, John; Hanka, Winfried; Mazza, Salvatore; Pederson, Helle; Sleeman, Reinoud; Stammler, Klaus; Strollo, Angelo

    2014-05-01

    The European Integrated waveform Data Archive (EIDA) is a distributed Data Center system within ORFEUS that (a) securely archives seismic waveform data and related metadata gathered by European research infrastructures, and (b) provides transparent access to the archives for the geosciences research communities. EIDA was founded in 2013 by the ORFEUS Data Center, GFZ, RESIF, ETH, INGV and BGR to ensure the sustainability of a distributed archive system, the implementation of standards (e.g. FDSN StationXML, FDSN web services), and the coordination of new developments. Under the mandate of the ORFEUS Board of Directors and Executive Committee, the founding group is responsible for steering and maintaining the technical developments and organization of the European distributed seismic waveform data archive and its integration within broader multidisciplinary frameworks like EPOS. EIDA currently offers uniform data access to unrestricted data from 8 European archives (www.orfeus-eu.org/eida), linked by the Arclink protocol, hosting data from 75 permanent networks (1800+ stations) and 33 temporary networks (1200+ stations). Moreover, each archive may also provide unique, restricted datasets. A web interface, developed at GFZ, offers interactive access to different catalogues (EMSC, GFZ, USGS) and EIDA waveform data. Clients and toolboxes like arclink_fetch and ObsPy can connect directly to any EIDA node to collect data. Current developments are directed to the implementation of quality parameters and strong motion parameters.
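
    As the abstract notes, ObsPy can talk to EIDA nodes directly; a minimal request through the FDSN web-service interface might look like the sketch below. The node key, network, station, and time window are placeholder values chosen for illustration and may need adjusting.

      # Minimal waveform request to an EIDA node via ObsPy's FDSN client.
      from obspy import UTCDateTime
      from obspy.clients.fdsn import Client

      client = Client("GFZ")                       # one of the EIDA data centres
      t0 = UTCDateTime("2014-01-01T00:00:00")

      # Ten minutes of vertical-component data from a GEOFON station (example codes).
      st = client.get_waveforms(network="GE", station="APE", location="*",
                                channel="BHZ", starttime=t0, endtime=t0 + 600)
      print(st)
      st.plot()                                    # quick-look plot of the traces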

  8. Correlational effect size benchmarks.

    PubMed

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PMID:25314367
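
    The central idea, deriving benchmarks from the observed distribution of effect sizes rather than from fixed conventions, can be illustrated with a few lines of NumPy. The correlations below are simulated; the study itself used 147,328 published correlations.

      # Empirical effect-size benchmarks as distribution tertiles (simulated |r| values).
      import numpy as np

      rng = np.random.default_rng(1)
      r = np.abs(rng.normal(loc=0.0, scale=0.2, size=10_000)).clip(0, 1)

      lower, upper = np.percentile(r, [33.3, 66.7])
      print(f"empirical 'small'  : |r| <  {lower:.2f}")
      print(f"empirical 'medium' : {lower:.2f} <= |r| < {upper:.2f}")
      print(f"empirical 'large'  : |r| >= {upper:.2f}")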

  9. Virtual machine performance benchmarking.

    PubMed

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising. PMID:21207096
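
    Two of the metrics listed above, memory bandwidth and floating-point throughput, can be probed with a short script of the following kind; in practice it would be run on the bare-metal host and again inside the virtual machine and the numbers compared. Buffer sizes and repetition counts are arbitrary choices.

      # Toy memory-bandwidth and floating-point probes (run on host and in VM).
      import time
      import numpy as np

      def mem_bandwidth_gbs(n_bytes=512 * 1024 * 1024, reps=5):
          src = np.ones(n_bytes, dtype=np.uint8)
          dst = np.empty_like(src)
          t0 = time.perf_counter()
          for _ in range(reps):
              np.copyto(dst, src)                  # large memory copy
          return n_bytes * reps / (time.perf_counter() - t0) / 1e9

      def float_gflops(n=2_000_000, reps=50):
          a, b = np.random.rand(n), np.random.rand(n)
          t0 = time.perf_counter()
          for _ in range(reps):
              c = a * b + a                        # roughly 2 floating-point ops per element
          return 2 * n * reps / (time.perf_counter() - t0) / 1e9

      print(f"memory copy : {mem_bandwidth_gbs():.1f} GB/s")
      print(f"float ops   : {float_gflops():.1f} GFLOP/s")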

  10. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
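
    A small worked example shows why the choice matters: a single query that runs 100x faster pulls the geometric mean down far more than it pulls the arithmetic mean, so the two means can rank systems differently. The timings are invented for illustration.

      # Geometric vs. arithmetic mean of per-query times (illustrative numbers).
      from statistics import mean, geometric_mean

      system_a = [10, 10, 10, 10]      # seconds: uniformly decent
      system_b = [0.1, 20, 20, 20]     # one extremely fast query, three slower ones

      for name, times in [("A", system_a), ("B", system_b)]:
          print(f"system {name}: arithmetic = {mean(times):.2f} s, "
                f"geometric = {geometric_mean(times):.2f} s")
      # Arithmetic mean favors A (10.0 vs ~15.0 s); geometric mean favors B (~5.3 vs 10.0 s).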

  11. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  12. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and

  13. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  14. Data-Intensive Benchmarking Suite

    Energy Science and Technology Software Center (ESTSC)

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
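
    The basic (non-Hadoop) graph-searching workload is an ordinary breadth-first search; a minimal version over an adjacency-list graph is sketched below. The tiny example graph is made up, and the Map/Reduce variants mentioned above are not shown.

      # Minimal breadth-first search over an adjacency list (illustrative).
      from collections import deque

      def bfs(graph, start):
          """Return BFS distance (in edges) from start to every reachable node."""
          dist = {start: 0}
          queue = deque([start])
          while queue:
              node = queue.popleft()
              for neighbor in graph.get(node, ()):
                  if neighbor not in dist:
                      dist[neighbor] = dist[node] + 1
                      queue.append(neighbor)
          return dist

      graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
      print(bfs(graph, 0))             # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}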

  15. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  16. Benchmarks for GADRAS performance validation.

    SciTech Connect

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

    2009-09-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  17. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  18. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  19. The Role of Data Archives in Synoptic Solar Physics

    NASA Astrophysics Data System (ADS)

    Reardon, Kevin

    The detailed study of solar cycle variations requires analysis of recorded datasets spanning many years of observations, that is, a data archive. The use of digital data, combined with powerful database server software, gives such archives new capabilities to provide, quickly and flexibly, selected pieces of information to scientists. Use of standardized protocols will allow multiple databases, independently maintained, to be seamlessly joined, allowing complex searches spanning multiple archives. These data archives also benefit from being developed in parallel with the telescope itself, which helps to assure data integrity and to provide close integration between the telescope and archive. Development of archives that can guarantee long-term data availability and strong compatibility with other projects makes solar-cycle studies easier to plan and realize.

  20. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks. PMID:26548140

  1. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  2. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzbert, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high-performance distributed-memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.
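
    As an illustration of the kind of measurement such a suite formalizes (this is not the NHT-1 code itself), the following sketch times a sustained sequential disk write and reports the throughput; the scratch-file path, block size, and total volume are arbitrary assumptions.

      # Hypothetical sketch of a sustained-disk-write measurement (not the NHT-1 suite).
      import os, time

      PATH = "io_bench.tmp"         # assumed scratch file
      BLOCK = 4 * 1024 * 1024       # 4 MiB per write (assumption)
      TOTAL = 1 * 1024**3           # 1 GiB written in total (assumption)

      buf = os.urandom(BLOCK)
      t0 = time.monotonic()
      with open(PATH, "wb", buffering=0) as f:
          written = 0
          while written < TOTAL:
              f.write(buf)
              written += BLOCK
          os.fsync(f.fileno())      # include the time to reach stable storage
      elapsed = time.monotonic() - t0
      os.remove(PATH)
      print(f"sustained write: {TOTAL / elapsed / 1e6:.1f} MB/s")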

  3. My Dream Archive

    ERIC Educational Resources Information Center

    Phelps, Christopher

    2007-01-01

    In this article, the author shares his experience as he traveled from island to island with a single objective--to reach the archives. He found out that not all archives are the same. In recent months, his daydreaming in various facilities has yielded a recurrent question on what would constitute the Ideal Archive. What follows, in no particular…

  4. Researching Television News Archives.

    ERIC Educational Resources Information Center

    Wilhoit, Frances Goins

    To demonstrate the uses and efficiency of major television news archives, a study was conducted to describe major archival programs and to compare the Vanderbilt University Television News Archives and the CBS News Index. Network coverage of an annual news event, the 1983 State of the Union address, is traced through entries in both. The findings…

  5. Soil archives of a Fluvisol: Subsurface analysis and soil history of the medieval city centre of Vlaardingen, the Netherlands - an integral approach

    NASA Astrophysics Data System (ADS)

    Kluiving, Sjoerd; De Ridder, Tim; Van Dasselaar, Marcel; Roozen, Stan; Prins, Maarten; Van Mourik, Jan

    2016-04-01

    In Medieval times the city of Vlaardingen (the Netherlands) was strategically located on the confluence of three rivers, the Meuse, the Merwede and the Vlaarding. A church of the early 8th century was already located here. In a short period of time Vlaardingen developed into an international trading place, the most important place in the former county of Holland. Starting from the 11th century the river Meuse threatened to flood the settlement. These floods have been registered in the archives of the Fluvisol and were recognised in a multidisciplinary sedimentary analysis of these archives. To secure the future of this vulnerable soil archive, an extensive interdisciplinary research effort (76 mechanical drill holes, grain size analysis (GSA), thermo-gravimetric analysis (TGA), archaeological remains, soil analysis, dating methods, micromorphology, and microfauna) started in 2011 to gain knowledge on the sedimentological and pedological subsurface of the mound as well as on the well-preserved nature of the archaeological evidence. Pedogenic features are recorded with soil description, micromorphological and geochemical (XRF) analysis. The soil sequence of 5 meters thickness exhibits a complex mix of 'natural' as well as 'anthropogenic' layering and initial soil formation that enables a distinction to be made between relatively stable periods and periods with active sedimentation. In this paper the results of this large-scale project are demonstrated in a number of cross-sections with interrelated geological, pedological and archaeological stratification. A distinction between natural and anthropogenic layering is made on the basis of the occurrence of the chemical elements phosphorus and potassium. A series of four stratigraphic / sedimentary units record the period before and after the flooding disaster. Given the many archaeological remnants and features present in the lower units, we assume that the medieval landscape was drowned while it was inhabited in the 12th century AD. After a

  6. Web-based medical image archive system

    NASA Astrophysics Data System (ADS)

    Suh, Edward B.; Warach, Steven; Cheung, Huey; Wang, Shaohua A.; Tangiral, Phanidral; Luby, Marie; Martino, Robert L.

    2002-05-01

    This paper presents a Web-based medical image archive system in three-tier, client-server architecture for the storage and retrieval of medical image data, as well as patient information and clinical data. The Web-based medical image archive system was designed to meet the need of the National Institute of Neurological Disorders and Stroke for a central image repository to address questions of stroke pathophysiology and imaging biomarkers in stroke clinical trials by analyzing images obtained from a large number of clinical trials conducted by government, academic and pharmaceutical industry researchers. In the database management-tier, we designed the image storage hierarchy to accommodate large binary image data files that the database software can access in parallel. In the middle-tier, a commercial Enterprise Java Bean server and secure Web server manages user access to the image database system. User-friendly Web-interfaces and applet tools are provided in the client-tier for easy access to the image archive system over the Internet. Benchmark test results show that our three-tier image archive system yields fast system response time for uploading, downloading, and querying the image database.

  7. ESA Science Archives and associated VO activities

    NASA Astrophysics Data System (ADS)

    Arviset, Christophe; Baines, Deborah; Barbarisi, Isa; Castellanos, Javier; Cheek, Neil; Costa, Hugo; Fajersztejn, Nicolas; Gonzalez, Juan; Fernandez, Monica; Laruelo, Andrea; Leon, Ignacio; Ortiz, Inaki; Osuna, Pedro; Salgado, Jesus; Tapiador, Daniel

    ESA's European Space Astronomy Centre (ESAC), near Madrid, Spain, hosts most of ESA space based missions' scientific archives, in planetary science (Mars Express, Venus Express, Rosetta, Huygens, Giotto, Smart-1, all in the ESA Planetary Science Archive), in astronomy (XMM-Newton, Herschel, ISO, Integral, Exosat, Planck) and in solar physics (Soho). All these science archives are operated by a dedicated Science Archives and Virtual Observatory Team (SAT) at ESAC, enabling common and efficient design, development, operations and maintenance of the archives' software systems. This also ensures long-term preservation and availability of these science archives, as a sustainable service to the science community. ESA space science data can be accessed through powerful and user-friendly user interfaces, as well as from a machine-scriptable interface and through VO interfaces. Virtual Observatory activities are also fully part of ESA's archiving strategy, and ESA is a very active partner in VO initiatives in Europe through Euro-VO AIDA and EuroPlanet and worldwide through the IVOA (International Virtual Observatory Alliance) and the IPDA (International Planetary Data Alliance).

  8. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
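
    To make the closed-loop idea concrete, here is a minimal, purely illustrative sketch (not the authors' benchmark or learning rule): a simulated one-joint plant with an unknown constant external force is driven by a fixed proportional-derivative term plus a bias that an error-driven update adapts online. The gains, learning rate, and episode lengths are arbitrary assumptions.

      # Minimal closed-loop sketch (illustrative only, not the published benchmark).
      import random

      def run(episodes=50, steps=200, dt=0.01, lr=0.5):
          rng = random.Random(0)
          unknown_force = rng.uniform(-2.0, 2.0)      # hidden from the controller
          bias = 0.0                                  # adapted online by the error-driven rule
          errors = []
          for _ in range(episodes):
              pos, vel, target = 0.0, 0.0, 1.0
              total_err = 0.0
              for _ in range(steps):
                  err = target - pos
                  u = 5.0 * err - 1.0 * vel + bias    # fixed PD term plus learned bias
                  bias += lr * err * dt               # error-driven adaptation
                  acc = u + unknown_force             # unit-mass plant with unknown force
                  vel += acc * dt
                  pos += vel * dt
                  total_err += abs(err) * dt
              errors.append(total_err)
          return errors

      errs = run()
      print(f"tracking error: first episode {errs[0]:.3f}, last episode {errs[-1]:.3f}")

    Because the controller and the plant close a loop, the bias term gradually cancels the unknown force, and the per-episode tracking error shrinks, which is the qualitative behaviour such closed-loop benchmarks are designed to expose.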

  9. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  10. Introduction: Consider the Archive.

    PubMed

    Yale, Elizabeth

    2016-03-01

    In recent years, historians of archives have paid increasingly careful attention to the development of state, colonial, religious, and corporate archives in the early modern period, arguing that power (of various kinds) was mediated and extended through material writing practices in and around archives. The history of early modern science, likewise, has tracked the production of scientific knowledge through the inscription and circulation of written records within and between laboratories, libraries, homes, and public spaces, such as coffeehouses and bookshops. This Focus section interrogates these two bodies of scholarship against each other. The contributors ask how archival digitization is transforming historical practice; how awareness of archival histories can help us to reconceptualize our work as historians of science; how an archive's layered purposes, built up over centuries of record keeping, can shape the historical narratives we write; and how scientific knowledge emerging from archives gained authority and authenticity. PMID:27197412

  11. Archive and records management-Fiscal year 2010 offline archive media trade study

    USGS Publications Warehouse

    Bodoh, Tom; Boettcher, Ken; Gacke, Ken; Greenhagen, Cheryl; Engelbrecht, Al

    2010-01-01

    This document is a trade study comparing offline digital archive storage technologies. The document compares and assesses several technologies and recommends which technologies could be deployed as the next generation standard for the U.S. Geological Survey (USGS). Archives must regularly migrate to the next generation of digital archive technology, and the technology selected must maintain data integrity until the next migration. This document is the fiscal year 2010 (FY10) revision of a study completed in FY01 and revised in FY03, FY04, FY06, and FY08.

  12. Soil archives of a Fluvisol: subsurface analysis and soil history of the medieval city centre of Vlaardingen, the Netherlands - an integral approach

    NASA Astrophysics Data System (ADS)

    Kluiving, Sjoerd; de Ridder, Tim; van Dasselaar, Marcel; Roozen, Stan; Prins, Maarten

    2016-06-01

    The medieval city of Vlaardingen (the Netherlands) was strategically located on the confluence of three rivers, the Maas, the Merwede, and the Vlaarding. A church of the early 8th century AD was already located here. In a short period of time, Vlaardingen developed in the 11th century AD into an international trading place and into one of the most important places in the former county of Holland. Starting from the 11th century AD, the river Maas repeatedly threatened to flood the settlement. The flood dynamics were registered in Fluvisol archives and were recognised in a multidisciplinary sedimentary analysis of these archives. To secure the future of these vulnerable soil archives an extensive interdisciplinary research effort (76 mechanical drill holes, grain size analysis (GSA), thermo-gravimetric analysis (TGA), archaeological remains, soil analysis, dating methods, micromorphology, and microfauna) started in 2011 to gain knowledge on the sedimentological and pedological subsurface of the settlement mound as well as on the well-preserved nature of the archaeological evidence. Pedogenic features are recorded with soil description, micromorphological, and geochemical (XRF - X-ray fluorescence) analysis. The soil sequence of 5 m thickness exhibits a complex mix of "natural" as well as "anthropogenic" layering and initial soil formation that enables us to make a distinction between relatively stable periods and periods with active sedimentation. In this paper the results of this interdisciplinary project are demonstrated in a number of cross-sections with interrelated geological, pedological, and archaeological stratification. A distinction between natural and anthropogenic layering is made on the basis of the occurrence of the chemical elements phosphor and potassium. A series of four stratigraphic and sedimentary units record the period before and after the flooding disaster. Given the many archaeological remnants and features present in the lower units, in

  13. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  14. VizieR Online Data Catalog: Gaia FGK benchmark stars: abundances (Jofre+, 2015)

    NASA Astrophysics Data System (ADS)

    Jofre, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Masseron, T.; Nordlander, T.; Chemin, L.; Worley, C. C.; van Eck, S.; Hourihane, A.; Gilmore, G.; Adibekyan, V.; Bergemann, M.; Cantat-Gaudin, T.; Delgado-Mena, E.; Gonzalez Hernandez, J. I.; Guiglion, G.; Lardo, C.; de Laverny, P.; Lind, K.; Magrini, L.; Mikolaitis, S.; Montes, D.; Pancino, E.; Recio-Blanco, A.; Sordo, R.; Sousa, S.; Tabernero, H. M.; Vallenari, A.

    2015-07-01

    As in our previous work on the subject, we built a library of high-resolution spectra of the GBS, using our own observations on the NARVAL spectrograph at Pic du Midi in addition to archived data. The abundance of alpha and iron peak elements of the Gaia FGK benchmark stars is determined by combining 8 methods. The Tables indicate the elemental abundances determined for each star, element, line and method. (36 data files).

  15. Developing Financial Benchmarks for Critical Access Hospitals

    PubMed Central

    Pink, George H.; Holmes, George M.; Slifkin, Rebecca T.; Thompson, Roger E.

    2009-01-01

    This study developed and applied benchmarks for five indicators included in the CAH Financial Indicators Report, an annual, hospital-specific report distributed to all critical access hospitals (CAHs). An online survey of Chief Executive Officers and Chief Financial Officers was used to establish benchmarks. Indicator values for 2004, 2005, and 2006 were calculated for 421 CAHs and hospital performance was compared to the benchmarks. Although many hospitals performed better than benchmark on one indicator in 1 year, very few performed better than benchmark on all five indicators in all 3 years. The probability of performing better than benchmark differed among peer groups. PMID:19544935

  16. Benchmarking the Sandia Pulsed Reactor III cavity neutron spectrum for electronic parts calibration and testing

    SciTech Connect

    Kelly, J.G.; Griffin, P.J.; Fan, W.C.

    1993-08-01

    The SPR III bare cavity spectrum and integral parameters have been determined with 24 measured spectrum sensor responses and an independent, detailed, MCNP transport calculation. This environment qualifies as a benchmark field for electronic parts testing.

  17. Benchmark specifications for EBR-II shutdown heat removal tests

    SciTech Connect

    Sofu, T.; Briggs, L. L.

    2012-07-01

    Argonne National Laboratory (ANL) is hosting an IAEA-coordinated research project on benchmark analyses of sodium-cooled fast reactor passive safety tests performed at the Experimental Breeder Reactor-II (EBR-II). The benchmark project involves analysis of a protected and an unprotected loss-of-flow test conducted during an extensive testing program within the framework of the U.S. Integral Fast Reactor program to demonstrate the inherent safety features of EBR-II as a pool-type, sodium-cooled fast reactor prototype. The project is intended to improve the participants' design and safety analysis capabilities for sodium-cooled fast reactors through validation and qualification of safety analysis codes and methods. This paper provides a description of the EBR-II tests included in the program, and outlines the benchmark specifications being prepared to support the IAEA-coordinated research project. (authors)

  18. Sustainable value assessment of farms using frontier efficiency benchmarks.

    PubMed

    Van Passel, Steven; Van Huylenbroeck, Guido; Lauwers, Ludwig; Mathijs, Erik

    2009-07-01

    Appropriate assessment of firm sustainability facilitates actor-driven processes towards sustainable development. The methodology in this paper builds further on two proven methodologies for the assessment of sustainability performance: it combines the sustainable value approach with frontier efficiency benchmarks. The sustainable value methodology tries to relate firm performance to the use of different resources. This approach assesses contributions to corporate sustainability by comparing firm resource productivity with the resource productivity of a benchmark, and this for all resources considered. The efficiency is calculated by estimating the production frontier indicating the maximum feasible production possibilities. In this research, the sustainable value approach is combined with efficiency analysis methods to benchmark sustainability assessment. In this way, the production theoretical underpinnings of efficiency analysis enrich the sustainable value approach. The methodology is presented using two different functional forms: the Cobb-Douglas and the translog functional forms. The simplicity of the Cobb-Douglas functional form as benchmark is very attractive but it lacks flexibility. The translog functional form is more flexible but has the disadvantage that it requires a lot of data to avoid estimation problems. Using frontier methods for deriving firm specific benchmarks has the advantage that the particular situation of each company is taken into account when assessing sustainability. Finally, we showed that the methodology can be used as an integrative sustainability assessment tool for policy measures. PMID:19553001
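
    For reference, the two frontier specifications mentioned above are conventionally written as follows (schematically, with y the output, x_i the resources used, and the beta coefficients the parameters to be estimated; the translog reduces to the Cobb-Douglas form when all second-order terms vanish):

      % Cobb-Douglas frontier: parsimonious but restrictive
      \ln y = \beta_0 + \sum_i \beta_i \ln x_i

      % Translog frontier: flexible second-order form, more data-hungry
      \ln y = \beta_0 + \sum_i \beta_i \ln x_i
              + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln x_i \ln x_j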

  19. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  20. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  1. Real-Time Benchmark Suite

    Energy Science and Technology Software Center (ESTSC)

    1992-01-17

    This software provides a portable benchmark suite for real-time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.

  2. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  3. PyMPI Dynamic Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the dynamic linking and loading (DLL) requirements of Python-based scientific applications. The benchmark was developed to add to our testing environment a workload that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, together with C-extension dummy codes and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling it to model the static properties of a specific code as described in section 5. It does not, however, model any significant computations of the target and hence is not subjected to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their testing suites once the code release is completed. The ability to build and run this benchmark is an effective test for validating the capability of a compiler and linker/loader as well as the OS kernel and other runtime systems of HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.

  4. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764
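
    One building block of such a benchmark, scoring the mapped positions of synthetic reads against their known origins, can be sketched as follows; the data structures, tolerance, and example values are assumptions for illustration and are not part of the published suite.

      # Illustrative scoring step for synthetic reads (not the paper's benchmarking suite):
      # a mapping is counted correct if it lands within `tol` bases of the read's true
      # position on the correct chromosome.
      def score_mappings(truth, mapped, tol=5):
          """truth/mapped: dicts read_id -> (chrom, pos); mapped may omit unmapped reads."""
          correct = 0
          for read_id, (chrom, pos) in truth.items():
              hit = mapped.get(read_id)
              if hit and hit[0] == chrom and abs(hit[1] - pos) <= tol:
                  correct += 1
          recall = correct / len(truth)
          precision = correct / len(mapped) if mapped else 0.0
          return precision, recall

      truth  = {"r1": ("chr1", 100), "r2": ("chr1", 5000), "r3": ("chr2", 42)}
      mapped = {"r1": ("chr1", 102), "r2": ("chr3", 9000)}
      print(score_mappings(truth, mapped))   # (0.5, 0.333...): one correct of two mapped, of three reads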

  5. The GTC Public Archive

    NASA Astrophysics Data System (ADS)

    Alacid, J. Manuel; Solano, Enrique

    2015-12-01

    The Gran Telescopio Canarias (GTC) archive has been operational since November 2011. The archive, maintained by the Data Archive Unit at CAB in the framework of the Spanish Virtual Observatory project, provides access to both raw and science-ready data and has been designed in compliance with the standards defined by the International Virtual Observatory Alliance (IVOA) to guarantee a high level of data accessibility and handling. In this presentation I will describe the main capabilities the GTC archive offers to the community, in terms of functionalities and data collections, to enable efficient scientific exploitation of GTC data.

  6. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Sediment-Associated Biota

    SciTech Connect

    Hull, R.N.

    1993-01-01

    A hazardous waste site may contain hundreds of chemicals; therefore, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a screening assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, further analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. The use of multiple benchmarks is recommended for screening chemicals of concern in sediments. Integrative benchmarks developed for the National Oceanic and Atmospheric Administration and the Florida Department of Environmental Protection are included for inorganic and organic chemicals. Equilibrium partitioning benchmarks are included for screening nonionic organic chemicals. Freshwater sediment effect concentrations developed as part of the U.S. Environmental Protection Agency's (EPA's) Assessment and Remediation of Contaminated Sediment Project are included for inorganic and organic chemicals (EPA 1996). Field survey benchmarks developed for the Ontario Ministry of the Environment are included for inorganic and organic chemicals. In addition, EPA-proposed sediment quality criteria are included along with screening values from EPA Region IV and Ecotox Threshold values from the EPA Office of Solid Waste and Emergency Response. Pore water analysis is recommended for ionic organic compounds; comparisons are then made against water quality benchmarks. This report is an update of three prior reports (Jones et al
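
    The screening rule described above is essentially a threshold comparison. A minimal sketch of that logic follows, with made-up benchmark and sample values; the report's actual benchmark tables should be consulted for real screening decisions.

      # Sketch of the screening logic described above (benchmark values are hypothetical).
      def screen(concentration, lower_benchmark):
          """Return a screening decision for one chemical in sediment."""
          if concentration > lower_benchmark:
              return "retain for further assessment"
          return "eliminate from further study"

      benchmarks = {"cadmium": 0.99, "chromium": 43.4}   # hypothetical lower benchmarks, mg/kg
      sample = {"cadmium": 1.8, "chromium": 12.0}        # measured value or detection limit, mg/kg
      for chem, conc in sample.items():
          print(chem, "->", screen(conc, benchmarks[chem]))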

  7. The Growth of Benchmarking in Higher Education.

    ERIC Educational Resources Information Center

    Schofield, Allan

    2000-01-01

    Benchmarking is used in higher education to improve performance by comparison with other institutions. Types used include internal, external competitive, external collaborative, external transindustry, and implicit. Methods include ideal type (or gold) standard, activity-based benchmarking, vertical and horizontal benchmarking, and comparative…

  8. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  9. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  10. Gaia FGK benchmark stars: new candidates at low metallicities

    NASA Astrophysics Data System (ADS)

    Hawkins, K.; Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Casagrande, L.; Gilmore, G.; Lind, K.; Magrini, L.; Masseron, T.; Pancino, E.; Randich, S.; Worley, C. C.

    2016-07-01

    Context. We have entered an era of large spectroscopic surveys in which we can measure, through automated pipelines, the atmospheric parameters and chemical abundances for large numbers of stars. Calibrating these survey pipelines using a set of "benchmark stars" in order to evaluate the accuracy and precision of the provided parameters and abundances is of utmost importance. The recent proposed set of Gaia FGK benchmark stars has up to five metal-poor stars but no recommended stars within -2.0 < [Fe/H] < -1.0 dex. However, this metallicity regime is critical to calibrate properly. Aims: In this paper, we aim to add candidate Gaia benchmark stars inside of this metal-poor gap. We began with a sample of 21 metal-poor stars which was reduced to 10 stars by requiring accurate photometry and parallaxes, and high-resolution archival spectra. Methods: The procedure used to determine the stellar parameters was similar to the previous works in this series for consistency. The difference was to homogeneously determine the angular diameter and effective temperature (Teff) of all of our stars using the Infrared Flux Method utilizing multi-band photometry. The surface gravity (log g) was determined through fitting stellar evolutionary tracks. The [Fe/H] was determined using four different spectroscopic methods fixing the Teff and log g from the values determined independent of spectroscopy. Results: We discuss, star-by-star, the quality of each parameter including how it compares to literature, how it compares to a spectroscopic run where all parameters are free, and whether Fe i ionisation-excitation balance is achieved. Conclusions: From the 10 stars, we recommend a sample of five new metal-poor benchmark candidate stars which have consistent Teff, log g, and [Fe/H] determined through several means. These stars, which are within -1.3 < [Fe/H] < -1.0, can be used for calibration and validation purpose of stellar parameter and abundance pipelines and should be of highest

  11. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076

  12. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  13. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  14. Cancer imaging archive available

    Cancer.gov

    NCI’s Cancer Imaging Program has inaugurated The Cancer Imaging Archive (TCIA), a web-accessible and unique clinical imaging archive linked to The Cancer Genome Atlas (TCGA) tissue repository. It contains a large proportion of original, pre-surgical MRIs from cases that have been genomically characterized in TCGA.

  15. [Church Archives; Selected Papers.

    ERIC Educational Resources Information Center

    Abraham, Terry; And Others

    Papers presented at the Institute which were concerned with keeping of church archives are entitled: "St. Mary's Episcopal Church, Eugene, Oregon;""Central Lutheran Church, Eugene, Oregon: A History;""Mormon Church Archives: An Overview;""Sacramental Records of St. Mary's Catholic Church, Eugene, Oregon;""Chronology of St. Mary's Catholic Church,…

  16. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  17. MPI Multicore Torus Communication Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to achieve optimal mappings, while the latter characterizes the aggregate bandwidths that can be achieved with varying node mappings.
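
    A flavour of such a measurement, not TorusTest itself, can be sketched with mpi4py: rank 0 exchanges fixed-size messages with each other rank in turn and reports the two-way bandwidth. The message size, repetition count, and set of peers probed are assumptions.

      # Illustrative point-to-point bandwidth probe with mpi4py (not TorusTest itself).
      # Run e.g. with: mpiexec -n 4 python bw_probe.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      nbytes = 8 * 1024 * 1024                     # 8 MiB message (assumption)
      buf = np.zeros(nbytes, dtype=np.uint8)
      reps = 20

      for peer in range(1, size):                  # rank 0 probes each other rank in turn
          comm.Barrier()
          if rank == 0:
              t0 = MPI.Wtime()
              for _ in range(reps):
                  comm.Send([buf, MPI.BYTE], dest=peer, tag=0)
                  comm.Recv([buf, MPI.BYTE], source=peer, tag=1)
              dt = MPI.Wtime() - t0
              bw = 2 * reps * nbytes / dt / 1e6    # both directions counted, MB/s
              print(f"rank 0 <-> rank {peer}: {bw:.0f} MB/s")
          elif rank == peer:
              for _ in range(reps):
                  comm.Recv([buf, MPI.BYTE], source=0, tag=0)
                  comm.Send([buf, MPI.BYTE], dest=0, tag=1)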

  18. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
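
    A toy version of the idea, normalizing raw utility data into an intensity metric so stores of different sizes become comparable and flagging outliers against the portfolio median, might look like the sketch below; the normalization choice, figures, and 15% review band are assumptions, not the report's method.

      # Toy benchmark metric from utility data (values and threshold are assumptions).
      from statistics import median

      stores = [  # (store, annual_kWh, operating_hours)
          ("A", 410_000, 5_800), ("B", 655_000, 6_200), ("C", 530_000, 6_000),
      ]
      intensity = {s: kwh / hours for s, kwh, hours in stores}   # kWh per operating hour
      bench = median(intensity.values())
      for store, value in intensity.items():
          flag = "review" if value > 1.15 * bench else "ok"      # 15% band (assumption)
          print(f"{store}: {value:.1f} kWh/h ({flag}, benchmark {bench:.1f})")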

  19. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were also compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  20. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  1. An Evolutionary Algorithm with Double-Level Archives for Multiobjective Optimization.

    PubMed

    Chen, Ni; Chen, Wei-Neng; Gong, Yue-Jiao; Zhan, Zhi-Hui; Zhang, Jun; Li, Yun; Tan, Yu-Song

    2015-09-01

    Existing multiobjective evolutionary algorithms (MOEAs) tackle a multiobjective problem either as a whole or as several decomposed single-objective sub-problems. Though the problem decomposition approach generally converges faster through optimizing all the sub-problems simultaneously, there are two issues not fully addressed, i.e., distribution of solutions often depends on a priori problem decomposition, and the lack of population diversity among sub-problems. In this paper, a MOEA with double-level archives is developed. The algorithm takes advantages of both the multiobjective-problem-level and the sub-problem-level approaches by introducing two types of archives, i.e., the global archive and the sub-archive. In each generation, self-reproduction with the global archive and cross-reproduction between the global archive and sub-archives both breed new individuals. The global archive and sub-archives communicate through cross-reproduction, and are updated using the reproduced individuals. Such a framework thus retains fast convergence, and at the same time handles solution distribution along Pareto front (PF) with scalability. To test the performance of the proposed algorithm, experiments are conducted on both the widely used benchmarks and a set of truly disconnected problems. The results verify that, compared with state-of-the-art MOEAs, the proposed algorithm offers competitive advantages in distance to the PF, solution coverage, and search speed. PMID:25343775
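
    The double-archive bookkeeping described above can be sketched schematically as follows; the variation operators, archive sizes, and the single scalarized update rule are simplistic placeholders for illustration, not the published multiobjective algorithm.

      # Schematic of the double-archive idea (placeholders, not the published MOEA).
      import random

      def evolve(evaluate, n_subproblems=4, generations=100, seed=1):
          rng = random.Random(seed)
          new = lambda: [rng.random() for _ in range(10)]           # random solution
          mutate = lambda x: [xi + rng.gauss(0, 0.05) for xi in x]  # placeholder variation
          cross = lambda a, b: [(ai + bi) / 2 for ai, bi in zip(a, b)]

          global_archive = [new() for _ in range(20)]
          sub_archives = [[new() for _ in range(5)] for _ in range(n_subproblems)]

          for _ in range(generations):
              offspring = []
              # self-reproduction within the global archive
              offspring += [mutate(rng.choice(global_archive)) for _ in range(10)]
              # cross-reproduction between the global archive and each sub-archive
              for sub in sub_archives:
                  offspring.append(cross(rng.choice(global_archive), rng.choice(sub)))
              # archive updates (placeholder: keep the best by a scalarized objective)
              for sub in sub_archives:
                  sub[:] = sorted(sub + offspring, key=evaluate)[:5]
              global_archive[:] = sorted(global_archive + offspring, key=evaluate)[:20]
          return global_archive

      best = evolve(evaluate=lambda x: sum(v * v for v in x))       # toy scalar objective
      print(min(sum(v * v for v in x) for x in best))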

  2. The NAS Computational Aerosciences Archive

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    performed by NAS staff members and collaborators. Visual results, which may be available in the form of images, animations, and/or visualization scripts, are generated by researchers with respect to a certain research project, depicting dataset features that were determined important by the investigating researcher. For example, script files for visualization systems (e.g. FAST, PLOT3D, AVS) are provided to create visualizations on the user's local workstation to elucidate the key points of the numerical study. Users can then interact with the data starting where the investigator left off. Datasets are intended to give researchers an opportunity to understand previous work, 'mine' solutions for new information (for example, have you ever read a paper thinking "I wonder what the helicity density looks like?"), compare new techniques with older results, collaborate with remote colleagues, and perform validation. Supporting meta-information associated with the research projects is also important to provide additional context for research projects. This may include information such as the software used in the simulation (e.g. grid generators, flow solvers, visualization). In addition to serving the CAS research community, the information archive will also be helpful to students, visualization system developers and researchers, and management. Students (of any age) can use the data to study fluid dynamics, compare results from different flow solvers, learn about meshing techniques, etc., leading to better informed individuals. For these users it is particularly important that visualization be integrated into dataset archives. Visualization researchers can use dataset archives to test algorithms and techniques, leading to better visualization systems. Management can use the data to figure out what is really going on behind the viewgraphs. All users will benefit from fast, easy, and convenient access to CFD datasets. The CAS information archive hopes to serve as a useful resource to

  3. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  4. Benchmark calculations of thermal reaction rates. I - Quantal scattering theory

    NASA Technical Reports Server (NTRS)

    Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
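
    Schematically, and suppressing the restriction to zero total angular momentum and the sum over higher angular momenta, the quantity being assembled is the standard thermal rate coefficient built from cumulative reaction probabilities:

      N(E) = \sum_{n,\,n'} P_{n' \leftarrow n}(E), \qquad
      k(T) = \frac{1}{h\,Q_r(T)} \int_0^{\infty} N(E)\, e^{-E/k_B T}\, \mathrm{d}E

    where the P are the state-to-state reaction probabilities from the scattering calculation, N(E) is the cumulative reaction probability, and Q_r(T) is the reactant partition function per unit volume.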

  5. O3-DPACS Open-Source Image-Data Manager/Archiver and HDW2 Image-Data Display: an IHE-compliant project pushing the e-health integration in the world.

    PubMed

    Inchingolo, Paolo; Beltrame, Marco; Bosazzi, Pierpaolo; Cicuta, Davide; Faustini, Giorgio; Mininel, Stefano; Poli, Andrea; Vatta, Federica

    2006-01-01

    After many years of study, development and experimentation of open PACS and Image workstation solutions including management of medical data and signals (DPACS project), the research and development at the University of Trieste have recently been directed towards Java-based, IHE compliant and multi-purpose servers and clients. In this paper an original Image-Data Manager/Archiver (O3-DPACS) and a universal Image-Data Display (HDW2) are described. O3-DPACS is also part of a new project called Open Three (O3) Consortium, promoting Open Source adoption in e-health at European and world-wide levels. This project aims to give a contribution to the development of e-health through the study of Healthcare Information Systems and the contemporary proposal of new concepts, designs and solutions for the management of health data in an integrated environment: hospitals, Regional Health Information Organizations and citizens (home-care, mobile-care and ambient assisted living). PMID:17055700

  6. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
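
    The generic visual-words pipeline the paper builds on (detect local descriptors, cluster them into a vocabulary, describe each image by a word histogram) can be sketched as below. This sketch uses SIFT only, since SURF ships in the non-free opencv-contrib build, and it is not the authors' SIFT and SURF integration scheme; the image paths, vocabulary size, and distance measure are assumptions.

      # Generic bag-of-visual-words sketch with SIFT (not the paper's SIFT+SURF method).
      # Assumes opencv-python >= 4.4 and scikit-learn; image paths are placeholders.
      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def descriptors(path):
          img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          _, desc = cv2.SIFT_create().detectAndCompute(img, None)
          return desc if desc is not None else np.empty((0, 128), np.float32)

      paths = ["img1.jpg", "img2.jpg", "img3.jpg"]          # placeholder archive images
      all_desc = np.vstack([descriptors(p) for p in paths])

      k = 64                                                # vocabulary size (assumption)
      vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

      def bow_histogram(path):
          words = vocab.predict(descriptors(path))
          hist = np.bincount(words, minlength=k).astype(float)
          return hist / (hist.sum() or 1.0)                 # normalized visual-word histogram

      query = bow_histogram("query.jpg")                    # placeholder query image
      ranked = sorted(paths, key=lambda p: np.linalg.norm(bow_histogram(p) - query))
      print(ranked)                                         # most similar archive images first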

  7. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    PubMed

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101

  8. The Golosyiv plate archive digitisation

    NASA Astrophysics Data System (ADS)

    Sergeeva, T. P.; Sergeev, A. V.; Pakuliak, L. K.; Yatsenko, A. I.

    2007-08-01

    The plate archive of the Main Astronomical Observatory of the National Academy of Sciences of Ukraine (Golosyiv, Kyiv) includes about 85 000 plates which have been taken in various observational projects during 1950-2005. Among them are about 25 000 direct plates of northern sky areas and more than 600 000 plates containing spectra of stellar, planetary and active solar formations. Direct plates have a limiting magnitude of 14.0-16.0 mag. Since 2002 we have been organising the storage, safeguarding, cataloguing and digitization of the plate archive. The initial task was to create an automated system for detection of astronomical objects and phenomena, search for optical counterparts in the directions of gamma-ray bursts, research of long-period, flare and other variable stars, search and rediscovery of asteroids, comets and other Solar System bodies to improve the elements of their orbits, informational support of CCD observations and space projects, etc. To provide higher efficiency of this work we have prepared computer-readable catalogues and a database for 250 000 direct wide-field plates. Now the catalogues have been adapted to the Wide Field Plate Database (WFPDB) format and integrated into this world database. The next step will be adaptation of our catalogues, database and images to the standards of the IVOA. Some magnitude and positional accuracy estimations for Golosyiv archive plates have been done. The photometric characteristics of the images of NGC 6913 cluster stars on two plates of the Golosyiv double wide-angle astrograph have been determined. Very good conformity of the photometric characteristics has been found, with external accuracies of 0.13 and 0.15 mag. The investigation of positional accuracy has been made with an A3-format flatbed scanner (Microtek ScanMaker 9800XL TMA). It shows that the scanner has non-detectable systematic errors on the X-axis, and errors of ± 15 μm on the Y-axis. The final positional errors are about ± 2 μm (

  9. Databases and Archiving for CryoEM.

    PubMed

    Patwardhan, A; Lawson, C L

    2016-01-01

    CryoEM in structural biology is currently served by three public archives-EMDB for 3DEM reconstructions, PDB for models built from 3DEM reconstructions, and EMPIAR for the raw 2D image data used to obtain the 3DEM reconstructions. These archives play a vital role for both the structural community and the wider biological community in making the data accessible so that results may be reused, reassessed, and integrated with other structural and bioinformatics resources. The important role of the archives is underpinned by the fact that many journals mandate the deposition of data to PDB and EMDB on publication. The field is currently undergoing transformative changes where on the one hand high-resolution structures are becoming a routine occurrence while on the other hand electron tomography is enabling the study of macromolecules in the cellular context. Concomitantly the archives are evolving to best serve their stakeholder communities. In this chapter, we describe the current state of the archives, resources available for depositing, accessing, searching, visualizing and validating data, on-going community-wide initiatives and opportunities, and challenges for the future. PMID:27572735

  10. The Herschel Science Archive

    NASA Astrophysics Data System (ADS)

    Verdugo, Eva

    2015-12-01

    The Herschel mission required a Science Archive able to serve data to very different users: its own Data Analysis Software (both Pipeline and Interactive Analysis), the consortia of the different instruments, and the scientific community. At the same time, the KP consortia were committed to delivering to the Herschel Science Centre the processed products corresponding to the data obtained as part of their Science Demonstration Phase, and the Herschel Archive had to include the capability to store and deliver them. I will explain how the current Herschel Science Archive is designed to cover all these requirements.

  11. Benchmarking and testing the "Sea Level Equation

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth's surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can often be attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and
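
    For orientation, a commonly used compact form of the SLE (written here in LaTeX notation; this illustrative form follows the classical Farrell-and-Clark-style formulation used by several modern GIA codes and is not quoted from the benchmark paper itself) is

        S(\omega, t) \;=\; \frac{\rho_i}{\gamma}\, G_s \otimes_i I \;+\; \frac{\rho_w}{\gamma}\, G_s \otimes_o S \;+\; S^{E}(t),

    where S is the sea-level change, I the ice-thickness variation, G_s the sea-level Green's function built from viscoelastic load Love numbers, \otimes_i and \otimes_o space-time convolutions over the ice- and ocean-covered regions, \rho_i and \rho_w the ice and water densities, \gamma the reference surface gravity, and S^{E}(t) a spatially uniform term enforcing conservation of the total water mass. Because S appears on both sides, the equation is implicit and is normally solved iteratively.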

  12. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolutions and high signal-to-noise ratios. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  13. Impact of a computer-aided detection (CAD) system integrated into a picture archiving and communication system (PACS) on reader sensitivity and efficiency for the detection of lung nodules in thoracic CT exams.

    PubMed

    Bogoni, Luca; Ko, Jane P; Alpert, Jeffrey; Anand, Vikram; Fantauzzi, John; Florin, Charles H; Koo, Chi Wan; Mason, Derek; Rom, William; Shiau, Maria; Salganicoff, Marcos; Naidich, David P

    2012-12-01

    The objective of this study is to assess the impact on nodule detection and efficiency of using a computer-aided detection (CAD) device seamlessly integrated into a commercially available picture archiving and communication system (PACS). Forty-eight consecutive low-dose thoracic computed tomography studies were retrospectively included from an ongoing multi-institutional screening study. CAD results were sent to PACS as a separate image series for each study. Five fellowship-trained thoracic radiologists interpreted each case first on contiguous 5 mm sections, then evaluated the CAD output series (with CAD marks on corresponding axial sections). The standard of reference was based on three-reader agreement with expert adjudication. The time to interpret CAD marking was automatically recorded. A total of 134 true-positive nodules measuring 3 mm and larger were included in our study, with 85 ≥ 4 mm and 50 ≥ 5 mm in size. Readers' detection improved significantly in each size category when using CAD: from 44 to 57 % for ≥3 mm, 48 to 61 % for ≥4 mm, and 44 to 60 % for ≥5 mm. CAD stand-alone sensitivity was 65, 68, and 66 % for nodules ≥3, ≥4, and ≥5 mm, respectively, with CAD significantly increasing the false positives for two readers only. The average time to interpret and annotate a CAD mark was 15.1 s, after localizing it in the original image series. The integration of CAD into PACS increases reader sensitivity with minimal impact on interpretation time and supports such implementation into daily clinical practice. PMID:22710985

  14. Latin American Archives.

    ERIC Educational Resources Information Center

    Belsunce, Cesar A. Garcia

    1983-01-01

    Examination of the situation of archives in four Latin American countries--Argentina, Brazil, Colombia, and Costa Rica--highlights national systems, buildings, staff, processing of documents, accessibility and services to the public, and publications and extension services. (EJS)

  15. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access). The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and to allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  16. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  17. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large-scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large-scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large-scale system software and tools using Pynamic.
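
    Pynamic itself is distributed by LLNL; purely as a sketch of the loading pattern it emulates (not the benchmark's own code), the following Python fragment generates many trivial shared libraries and times how long the dynamic loader takes to open them and resolve one symbol each. It assumes a POSIX system with a C compiler available as "cc"; the file and symbol names are invented for illustration.

        import ctypes
        import subprocess
        import tempfile
        import time
        from pathlib import Path

        N_LIBS = 50  # Pynamic-style runs use hundreds to thousands of DLLs

        def build_libs(workdir: Path, n: int) -> list:
            """Compile n trivial shared libraries, each exporting one function."""
            libs = []
            for i in range(n):
                src = workdir / f"mod{i}.c"
                src.write_text(f"int entry_{i}(void) {{ return {i}; }}\n")
                lib = workdir / f"libmod{i}.so"
                subprocess.run(["cc", "-shared", "-fPIC", "-o", str(lib), str(src)],
                               check=True)
                libs.append(lib)
            return libs

        def load_and_call(libs) -> float:
            """Dynamically load every library and call its exported symbol."""
            start = time.perf_counter()
            for i, lib in enumerate(libs):
                handle = ctypes.CDLL(str(lib))
                getattr(handle, f"entry_{i}")()  # force symbol resolution
            return time.perf_counter() - start

        if __name__ == "__main__":
            with tempfile.TemporaryDirectory() as tmp:
                libs = build_libs(Path(tmp), N_LIBS)
                elapsed = load_and_call(libs)
                print(f"loaded and called {N_LIBS} shared libraries in {elapsed:.3f} s")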

  18. TRENDS: Compendium of Benchmark Objects

    NASA Astrophysics Data System (ADS)

    Gonzales, Erica J.; Crepp, Justin R.; Bechter, Eric; Johnson, John A.; Montet, Benjamin T.; Howard, Andrew; Marcy, Geoffrey W.; Isaacson, Howard T.

    2016-01-01

    The physical properties of faint stellar and substellar objects are highly uncertain. For example, the masses of brown dwarfs are usually inferred using theoretical models, which are age-dependent and have yet to be properly tested. With the goal of identifying new benchmark objects through observations with NIRC2 at Keck, we have carried out a comprehensive adaptive-optics survey as part of the TRENDS (TaRgetting bENchmark-objects with Doppler Spectroscopy) high-contrast imaging program. TRENDS targets nearby (d < 100 pc), Sun-like stars showing long-term radial velocity accelerations. We present the discovery of 28 confirmed, co-moving companions as well as 19 strong candidate companions to F-, G-, and K-stars with well-determined parallaxes and metallicities. Benchmark objects of this nature lend themselves to a three-dimensional orbit determination that will ultimately yield a precise dynamical mass. Unambiguous mass measurements of very low mass companions, which straddle the hydrogen-burning boundary, will allow our compendium of objects to serve as excellent testbeds to substantiate theoretical evolutionary and atmospheric models in regimes where they currently break down (low temperature, low mass, and old age).

  19. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.
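
    As one concrete example of an information-theory-based image metric of the kind mentioned above (a generic mutual-information score, not necessarily the metric used in this study), the fused image can be scored by how much information it shares with each input band:

        import numpy as np

        def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
            """Mutual information (bits) between two equally sized grayscale images."""
            hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = hist_2d / hist_2d.sum()
            px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
            py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
            nz = pxy > 0                          # avoid log(0)
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

        def fusion_mi_score(band_a, band_b, fused) -> float:
            """Total information the fused image shares with the two source bands."""
            return mutual_information(band_a, fused) + mutual_information(band_b, fused)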

  20. Characterizing universal gate sets via dihedral benchmarking

    NASA Astrophysics Data System (ADS)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    2015-12-01

    We describe a practical experimental protocol for robustly characterizing the error rates of non-Clifford gates associated with dihedral groups, including small single-qubit rotations. Our dihedral benchmarking protocol is a generalization of randomized benchmarking that relaxes the usual unitary 2-design condition. Combining this protocol with existing randomized benchmarking schemes enables practical universal gate sets for quantum information processing to be characterized in a way that is robust against state-preparation and measurement errors. In particular, our protocol enables direct benchmarking of the π/8 gate even under the gate-dependent error model that is expected in leading approaches to fault-tolerant quantum computation.
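
    For context, randomized-benchmarking-style protocols of this kind estimate error rates by fitting the average sequence fidelity to an exponential decay in the sequence length m. In the standard single-exponential model (shown for orientation only; the paper's dihedral fitting model differs in its details),

        F(m) = A\, p^{m} + B, \qquad r = \frac{(d-1)(1 - p)}{d},

    where A and B absorb state-preparation and measurement errors, p is the fitted decay parameter, d is the system dimension (d = 2 for a single qubit), and r is the inferred average error rate per gate.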

  1. Benchmarks for acute stroke care delivery

    PubMed Central

    Hall, Ruth E.; Khan, Ferhana; Bayley, Mark T.; Asllani, Eriola; Lindsay, Patrice; Hill, Michael D.; O'Callaghan, Christina; Silver, Frank L.; Kapral, Moira K.

    2013-01-01

    Objective Despite widespread interest in many jurisdictions in monitoring and improving the quality of stroke care delivery, benchmarks for most stroke performance indicators have not been established. The objective of this study was to develop data-derived benchmarks for acute stroke quality indicators. Design Nine key acute stroke quality indicators were selected from the Canadian Stroke Best Practice Performance Measures Manual. Participants A population-based retrospective sample of patients discharged from 142 hospitals in Ontario, Canada, between 1 April 2008 and 31 March 2009 (N = 3191) was used to calculate hospital rates of performance and benchmarks. Intervention The Achievable Benchmark of Care (ABC™) methodology was used to create benchmarks based on the performance of the upper 15% of patients in the top-performing hospitals. Main Outcome Measures Benchmarks were calculated for rates of neuroimaging, carotid imaging, stroke unit admission, dysphagia screening and administration of stroke-related medications. Results The following benchmarks were derived: neuroimaging within 24 h, 98%; admission to a stroke unit, 77%; thrombolysis among patients arriving within 2.5 h, 59%; carotid imaging, 93%; dysphagia screening, 88%; antithrombotic therapy, 98%; anticoagulation for atrial fibrillation, 94%; antihypertensive therapy, 92%; and lipid-lowering therapy, 77%. ABC™ acute stroke care benchmarks achieve or exceed the consensus-based targets required by Accreditation Canada, with the exception of dysphagia screening. Conclusions Benchmarks for nine hospital-based acute stroke care quality indicators have been established. These can be used in the development of standards for quality improvement initiatives. PMID:24141011
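
    As a rough sketch of the pooling idea behind the ABC™ approach described above (the real methodology ranks hospitals on adjusted performance; the simple unadjusted ranking and the 15% patient fraction here follow only the abstract's wording), a benchmark for one indicator might be computed as:

        from dataclasses import dataclass

        @dataclass
        class Hospital:
            name: str
            numerator: int    # patients who received the indicated care
            denominator: int  # eligible patients

        def achievable_benchmark(hospitals, patient_fraction=0.15):
            """Pooled rate among top-ranked hospitals covering >= patient_fraction of patients."""
            ranked = sorted(hospitals, key=lambda h: h.numerator / h.denominator, reverse=True)
            total = sum(h.denominator for h in hospitals)
            num = den = 0
            for h in ranked:
                num += h.numerator
                den += h.denominator
                if den >= patient_fraction * total:
                    break
            return num / den

        hospitals = [Hospital("A", 95, 100), Hospital("B", 80, 100), Hospital("C", 60, 100)]
        print(f"benchmark: {achievable_benchmark(hospitals):.0%}")  # 95% for these toy data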

  2. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
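
    A minimal sketch of the fixed-time idea described in the patent abstract above, assuming a scalable task whose resolution doubles at each level (the task and function names are invented for illustration):

        import time

        def fixed_time_benchmark(work_unit, interval_s=10.0):
            """Perform ever-finer levels of a scalable task until the allotted
            interval expires, then report the degree of progress reached."""
            deadline = time.perf_counter() + interval_s
            level = 0
            while time.perf_counter() < deadline:
                work_unit(level)
                level += 1
            return level  # higher level of progress => faster machine

        def integrate_at(level: int) -> float:
            """Midpoint-rule integral of x**2 on [0, 1] with 2**level panels."""
            n = 2 ** level
            h = 1.0 / n
            return sum(((i + 0.5) * h) ** 2 for i in range(n)) * h

        if __name__ == "__main__":
            print("progress level reached:", fixed_time_benchmark(integrate_at, interval_s=1.0))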

  3. Cassini Archive Tracking System

    NASA Technical Reports Server (NTRS)

    Conner, Diane; Sayfi, Elias; Tinio, Adrian

    2006-01-01

    The Cassini Archive Tracking System (CATS) is a computer program that enables tracking of scientific data transfers from originators to the Planetary Data System (PDS) archives. Without CATS, there is no systematic means of locating products in the archive process or ensuring their completeness. By keeping a database of transfer communications and status, CATS enables the Cassini Project and the PDS to efficiently and accurately report on archive status. More importantly, problem areas are easily identified through customized reports that can be generated on the fly from any Web-enabled computer. A Web-browser interface and clearly defined authorization scheme provide safe distributed access to the system, where users can perform functions such as creating customized reports, recording a transfer, and responding to a transfer. CATS ensures that Cassini provides complete science archives to the PDS on schedule and that those archives are made available to the science community by the PDS. The three-tier architecture is loosely coupled and designed for simple adaptation to multimission use. Written in the Java programming language, it is portable and can be run on any Java-enabled Web server.

  4. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  5. The GIRAFFE Archive: 1D and 3D Spectra

    NASA Astrophysics Data System (ADS)

    Royer, F.; Jégouzo, I.; Tajahmady, F.; Normand, J.; Chilingarian, I.

    2013-10-01

    The GIRAFFE Archive (http://giraffe-archive.obspm.fr) contains the reduced spectra observed with the intermediate and high resolution multi-fiber spectrograph installed at VLT/UT2 (ESO). In its multi-object configuration and the different integral field unit configurations, GIRAFFE produces 1D spectra and 3D spectra. We present here the status of the archive and the different functionalities to select and download both 1D and 3D data products, as well as the present content. The two collections are available in the VO: the 1D spectra (summed in the case of integral field observations) and the 3D field observations. These latter products can be explored using the VO Paris Euro3D Client (http://voplus.obspm.fr/ chil/Euro3D).

  6. The NAS Computational Aerosciences Archive

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    performed by NAS staff members and collaborators. Visual results, which may be available in the form of images, animations, and/or visualization scripts, are generated by researchers with respect to a certain research project, depicting dataset features that were determined important by the investigating researcher. For example, script files for visualization systems (e.g. FAST, PLOT3D, AVS) are provided to create visualizations on the user's local workstation to elucidate the key points of the numerical study. Users can then interact with the data starting where the investigator left off. Datasets are intended to give researchers an opportunity to understand previous work, 'mine' solutions for new information (for example, have you ever read a paper thinking "I wonder what the helicity density looks like?"), compare new techniques with older results, collaborate with remote colleagues, and perform validation. Supporting meta-information associated with the research projects is also important to provide additional context. This may include information such as the software used in the simulation (e.g. grid generators, flow solvers, visualization). In addition to serving the CAS research community, the information archive will also be helpful to students, visualization system developers and researchers, and management. Students (of any age) can use the data to study fluid dynamics, compare results from different flow solvers, learn about meshing techniques, etc., leading to better-informed individuals. For these users it is particularly important that visualization be integrated into dataset archives. Visualization researchers can use dataset archives to test algorithms and techniques, leading to better visualization systems. Management can use the data to figure out what is really going on behind the viewgraphs. All users will benefit from fast, easy, and convenient access to CFD datasets. The CAS information archive hopes to serve as a useful resource to

  7. Using Web-Based Peer Benchmarking to Manage the Client-Based Project

    ERIC Educational Resources Information Center

    Raska, David; Keller, Eileen Weisenbach; Shaw, Doris

    2013-01-01

    The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…

  8. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
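
    Purely as an illustration of posing uncertainty-annotated benchmark data to a SPARQL-capable tool (the namespace, predicates, and threshold below are invented and are not the report's actual benchmark format), a small sketch using the rdflib library:

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import XSD

        EX = Namespace("http://example.org/uq#")  # hypothetical namespace
        g = Graph()

        # One observation with an attached confidence value (hypothetical schema).
        obs = URIRef("http://example.org/uq#obs1")
        g.add((obs, EX.value, Literal(42.0, datatype=XSD.double)))
        g.add((obs, EX.confidence, Literal(0.7, datatype=XSD.double)))

        # Ask for observations whose confidence exceeds a threshold.
        results = g.query("""
            PREFIX ex: <http://example.org/uq#>
            SELECT ?obs ?value ?conf WHERE {
                ?obs ex:value ?value ;
                     ex:confidence ?conf .
                FILTER (?conf >= 0.5)
            }
        """)
        for row in results:
            print(row.obs, row.value, row.conf)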

  9. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  10. Model Predictions to the 2005 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin; Park, Joon-Soo

    2006-03-01

    The World Federation of NDE Centers (WFNDEC) has addressed the 2005 ultrasonic benchmark problems including linear scanning of the side drilled hole (SDH) specimen with oblique incidence with an emphasis on further study on SV-wave responses of the SDH versus angles around 60 degrees and responses of a circular crack. To solve these problems, we adopted the multi-Gaussian beam model as beam models and the Kirchhoff approximation and the separation of variables method as far-field scattering models. By integration of the beam and scattering models and the system efficiency factor obtained from the given reference experimental setups provided by Center for Nondestructive Evaluation into our ultrasonic measurement models, we predicted the responses of the SDH and the circular cracks (pill-box crack like flaws). This paper summarizes our models and predicted results for the 2005 ultrasonic benchmark problems.